All of lore.kernel.org
* Re: [PATCH V2 2/2] change shrinker API by passing shrink_control struct
@ 2011-05-20  2:59 ` KOSAKI Motohiro
  0 siblings, 0 replies; 13+ messages in thread
From: KOSAKI Motohiro @ 2011-05-20  2:59 UTC (permalink / raw)
  To: linux-mm, yinghan, linux-kernel; +Cc: kosaki.motohiro

> Hmm, got Nick's email wrong.
> 
> --Ying

Ping.
Can you please explain the current status? When can I see your answer?



> 
> On Tue, Apr 26, 2011 at 6:15 PM, Ying Han <yinghan@google.com> wrote:
>> On Tue, Apr 26, 2011 at 5:47 PM, KOSAKI Motohiro
>> <kosaki.motohiro@jp.fujitsu.com> wrote:
>>>> > >  {
>>>> > >       struct xfs_mount *mp;
>>>> > >       struct xfs_perag *pag;
>>>> > >       xfs_agnumber_t  ag;
>>>> > >       int             reclaimable;
>>>> > > +     int nr_to_scan = sc->nr_slab_to_reclaim;
>>>> > > +     gfp_t gfp_mask = sc->gfp_mask;
>>>> >
>>>> > And, this very near meaning field .nr_scanned and .nr_slab_to_reclaim
>>>> > poped up new question.
>>>> > Why don't we pass more clever slab shrinker target? Why do we need pass
>>>> > similar two argument?
>>>> >
>>>>
>>>> I renamed the nr_slab_to_reclaim and nr_scanned in shrink struct.
>>>
>>> Oh no. that's not naming issue. example, Nick's previous similar patch pass
>>> zone-total-pages and how-much-scanned-pages. (ie shrink_slab don't calculate
>>> current magical target scanning objects anymore)
>>>        ie,  "4 *  max_pass  * (scanned / nr- lru_pages-in-zones)"
>>>
>>> Instead, individual shrink_slab callback calculate this one.
>>> see git://git.kernel.org/pub/scm/linux/kernel/git/npiggin/linux-npiggin.git
>>>
>>> I'm curious why you change the design from another guy's previous very similar effort and
>>> We have to be convinced which is better.
>>
>> Thank you for the pointer. My patch is intended to consolidate all
>> existing parameters passed from reclaim code
>> to the shrinker.
>>
>> Talked w/ Nick and Andrew from last LSF,  we agree that this patch
>> will be useful for other extensions later which allows us easily
>> adding extensions to the shrinkers without shrinker files. Nick and I
>> talked about the effort later to pass the nodemask down to the
>> shrinker. He is cc-ed in the thread. Another thing I would like to
>> repost is to add the reclaim priority down to the shrinker, which we
>> won't throw tons of page caches pages by reclaiming one inode slab
>> object.
>>
>> --Ying



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH V2 2/2] change shrinker API by passing shrink_control struct
  2011-05-20  2:59 ` KOSAKI Motohiro
@ 2011-05-20  3:23 ` Ying Han
  2011-05-20 12:30     ` KOSAKI Motohiro
  -1 siblings, 1 reply; 13+ messages in thread
From: Ying Han @ 2011-05-20  3:23 UTC (permalink / raw)
  To: KOSAKI Motohiro; +Cc: linux-mm, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 2417 bytes --]

On Thu, May 19, 2011 at 7:59 PM, KOSAKI Motohiro <
kosaki.motohiro@jp.fujitsu.com> wrote:

> > Hmm, got Nick's email wrong.
> >
> > --Ying
>
> Ping.
> Can you please explain the current status? When can I see your answer?
>

The patch has been merged into mmotm-04-29-16-25. Sorry if there is a
question that I missed ?

--Ying

>
>
> >
> > On Tue, Apr 26, 2011 at 6:15 PM, Ying Han <yinghan@google.com> wrote:
> >> On Tue, Apr 26, 2011 at 5:47 PM, KOSAKI Motohiro
> >> <kosaki.motohiro@jp.fujitsu.com> wrote:
> >>>> > >  {
> >>>> > >       struct xfs_mount *mp;
> >>>> > >       struct xfs_perag *pag;
> >>>> > >       xfs_agnumber_t  ag;
> >>>> > >       int             reclaimable;
> >>>> > > +     int nr_to_scan = sc->nr_slab_to_reclaim;
> >>>> > > +     gfp_t gfp_mask = sc->gfp_mask;
> >>>> >
> >>>> > And, this very near meaning field .nr_scanned and
> .nr_slab_to_reclaim
> >>>> > poped up new question.
> >>>> > Why don't we pass more clever slab shrinker target? Why do we need
> pass
> >>>> > similar two argument?
> >>>> >
> >>>>
> >>>> I renamed the nr_slab_to_reclaim and nr_scanned in shrink struct.
> >>>
> >>> Oh no. that's not naming issue. example, Nick's previous similar patch
> pass
> >>> zone-total-pages and how-much-scanned-pages. (ie shrink_slab don't
> calculate
> >>> current magical target scanning objects anymore)
> >>>        ie,  "4 *  max_pass  * (scanned / nr- lru_pages-in-zones)"
> >>>
> >>> Instead, individual shrink_slab callback calculate this one.
> >>> see git://
> git.kernel.org/pub/scm/linux/kernel/git/npiggin/linux-npiggin.git
> >>>
> >>> I'm curious why you change the design from another guy's previous very
> similar effort and
> >>> We have to be convinced which is better.
> >>
> >> Thank you for the pointer. My patch is intended to consolidate all
> >> existing parameters passed from reclaim code
> >> to the shrinker.
> >>
> >> Talked w/ Nick and Andrew from last LSF,  we agree that this patch
> >> will be useful for other extensions later which allows us easily
> >> adding extensions to the shrinkers without shrinker files. Nick and I
> >> talked about the effort later to pass the nodemask down to the
> >> shrinker. He is cc-ed in the thread. Another thing I would like to
> >> repost is to add the reclaim priority down to the shrinker, which we
> >> won't throw tons of page caches pages by reclaiming one inode slab
> >> object.
> >>
> >> --Ying
>
>
>

[-- Attachment #2: Type: text/html, Size: 3688 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH V2 2/2] change shrinker API by passing shrink_control struct
  2011-05-20  3:23 ` Ying Han
@ 2011-05-20 12:30     ` KOSAKI Motohiro
  0 siblings, 0 replies; 13+ messages in thread
From: KOSAKI Motohiro @ 2011-05-20 12:30 UTC (permalink / raw)
  To: yinghan; +Cc: linux-mm, linux-kernel, akpm

(2011/05/20 12:23), Ying Han wrote:
> On Thu, May 19, 2011 at 7:59 PM, KOSAKI Motohiro<
> kosaki.motohiro@jp.fujitsu.com>  wrote:
>
>>> Hmm, got Nick's email wrong.
>>>
>>> --Ying
>>
>> Ping.
>> Can you please explain the current status? When can I see your answer?
>>
>
> The patch has been merged into mmotm-04-29-16-25. Sorry if there is a
> question that I missed ?

I know. But you haven't fixed the issues I pointed out, and you haven't
answered my question for over two weeks. :-/

As far as I can remember, I pointed out at least the following:

  - nr_slab_to_reclaim is the wrong name;
    you misunderstand the shrinker->shrink() interface.
  - least-recently-us is a typo.
  - don't expose both nr_scanned and nr_slab_to_reclaim.
    Instead, calculate the proper argument in shrink_slab.


Andrew, I've fixed it. please apply.


 From 104ad1af66b57e4030b2b3bce5e35d2d3ec29e41 Mon Sep 17 00:00:00 2001
From: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Date: Fri, 20 May 2011 20:54:01 +0900
Subject: [PATCH] vmscan: fix up new shrinker API

The current new-shrinker-API submission has some easy mistakes. Fix them up.

- remove the nr_scanned field from shrink_control.
   we don't have to expose vmscan internals to shrinkers.
- rename nr_slab_to_reclaim to nr_to_scan.
   to_reclaim is the wrong name: the shrinker API allows a shrinker
   not to reclaim a slab object if it was recently accessed.
- fix the least-recently-us typo

This patch also adds a do_shrinker_shrink() helper function. It
increases code readability a bit.

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
  arch/x86/kvm/mmu.c                   |    2 +-
  drivers/gpu/drm/i915/i915_gem.c      |    2 +-
  drivers/gpu/drm/ttm/ttm_page_alloc.c |    2 +-
  drivers/staging/zcache/zcache.c      |    2 +-
  fs/dcache.c                          |    2 +-
  fs/drop_caches.c                     |    7 ++---
  fs/gfs2/glock.c                      |    2 +-
  fs/inode.c                           |    2 +-
  fs/mbcache.c                         |    2 +-
  fs/nfs/dir.c                         |    2 +-
  fs/quota/dquot.c                     |    2 +-
  fs/xfs/linux-2.6/xfs_buf.c           |    2 +-
  fs/xfs/linux-2.6/xfs_sync.c          |    2 +-
  fs/xfs/quota/xfs_qm.c                |    2 +-
  include/linux/mm.h                   |   12 +++++-----
  mm/memory-failure.c                  |    3 +-
  mm/vmscan.c                          |   36 +++++++++++++++++----------------
  17 files changed, 42 insertions(+), 42 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 4cf6c15..bd14bb4 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3549,7 +3549,7 @@ static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
  {
  	struct kvm *kvm;
  	struct kvm *kvm_freed = NULL;
-	int nr_to_scan = sc->nr_slab_to_reclaim;
+	int nr_to_scan = sc->nr_to_scan;

  	if (nr_to_scan == 0)
  		goto out;
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index e4f3f6c..ec3d98c 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4100,7 +4100,7 @@ i915_gem_inactive_shrink(struct shrinker *shrinker, struct shrink_control *sc)
  			     mm.inactive_shrinker);
  	struct drm_device *dev = dev_priv->dev;
  	struct drm_i915_gem_object *obj, *next;
-	int nr_to_scan = sc->nr_slab_to_reclaim;
+	int nr_to_scan = sc->nr_to_scan;
  	int cnt;

  	if (!mutex_trylock(&dev->struct_mutex))
diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c
index 02ccf6f..d948575 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
@@ -402,7 +402,7 @@ static int ttm_pool_mm_shrink(struct shrinker *shrink,
  	unsigned i;
  	unsigned pool_offset = atomic_add_return(1, &start_pool);
  	struct ttm_page_pool *pool;
-	int shrink_pages = sc->nr_slab_to_reclaim;
+	int shrink_pages = sc->nr_to_scan;

  	pool_offset = pool_offset % NUM_POOLS;
  	/* select start pool in round robin fashion */
diff --git a/drivers/staging/zcache/zcache.c b/drivers/staging/zcache/zcache.c
index 135851a..77ac2d4 100644
--- a/drivers/staging/zcache/zcache.c
+++ b/drivers/staging/zcache/zcache.c
@@ -1185,7 +1185,7 @@ static int shrink_zcache_memory(struct shrinker *shrink,
  				struct shrink_control *sc)
  {
  	int ret = -1;
-	int nr = sc->nr_slab_to_reclaim;
+	int nr = sc->nr_to_scan;
  	gfp_t gfp_mask = sc->gfp_mask;

  	if (nr >= 0) {
diff --git a/fs/dcache.c b/fs/dcache.c
index f70abf2..8926cd8 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -1234,7 +1234,7 @@ EXPORT_SYMBOL(shrink_dcache_parent);
  static int shrink_dcache_memory(struct shrinker *shrink,
  				struct shrink_control *sc)
  {
-	int nr = sc->nr_slab_to_reclaim;
+	int nr = sc->nr_to_scan;
  	gfp_t gfp_mask = sc->gfp_mask;

  	if (nr) {
diff --git a/fs/drop_caches.c b/fs/drop_caches.c
index 440999c..e0a2906 100644
--- a/fs/drop_caches.c
+++ b/fs/drop_caches.c
@@ -42,12 +42,11 @@ static void drop_slab(void)
  	int nr_objects;
  	struct shrink_control shrink = {
  		.gfp_mask = GFP_KERNEL,
-		.nr_scanned = 1000,
  	};

-	do {
-		nr_objects = shrink_slab(&shrink, 1000);
-	} while (nr_objects > 10);
+	do
+		nr_objects = shrink_slab(&shrink, 1000, 1000);
+	while (nr_objects > 10);
  }

  int drop_caches_sysctl_handler(ctl_table *table, int write,
diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
index f3c9b17..2792a79 100644
--- a/fs/gfs2/glock.c
+++ b/fs/gfs2/glock.c
@@ -1352,7 +1352,7 @@ static int gfs2_shrink_glock_memory(struct shrinker *shrink,
  	struct gfs2_glock *gl;
  	int may_demote;
  	int nr_skipped = 0;
-	int nr = sc->nr_slab_to_reclaim;
+	int nr = sc->nr_to_scan;
  	gfp_t gfp_mask = sc->gfp_mask;
  	LIST_HEAD(skipped);

diff --git a/fs/inode.c b/fs/inode.c
index ce61a1b..fadba5a 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -752,7 +752,7 @@ static void prune_icache(int nr_to_scan)
  static int shrink_icache_memory(struct shrinker *shrink,
  				struct shrink_control *sc)
  {
-	int nr = sc->nr_slab_to_reclaim;
+	int nr = sc->nr_to_scan;
  	gfp_t gfp_mask = sc->gfp_mask;

  	if (nr) {
diff --git a/fs/mbcache.c b/fs/mbcache.c
index 19a2666..8c32ef3 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -168,7 +168,7 @@ mb_cache_shrink_fn(struct shrinker *shrink, struct shrink_control *sc)
  	struct mb_cache *cache;
  	struct mb_cache_entry *entry, *tmp;
  	int count = 0;
-	int nr_to_scan = sc->nr_slab_to_reclaim;
+	int nr_to_scan = sc->nr_to_scan;
  	gfp_t gfp_mask = sc->gfp_mask;

  	mb_debug("trying to free %d entries", nr_to_scan);
diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
index 9dee703..424e477 100644
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -2048,7 +2048,7 @@ int nfs_access_cache_shrinker(struct shrinker *shrink,
  	LIST_HEAD(head);
  	struct nfs_inode *nfsi, *next;
  	struct nfs_access_entry *cache;
-	int nr_to_scan = sc->nr_slab_to_reclaim;
+	int nr_to_scan = sc->nr_to_scan;
  	gfp_t gfp_mask = sc->gfp_mask;

  	if ((gfp_mask & GFP_KERNEL) != GFP_KERNEL)
diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index b780ee0..5b572c8 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -694,7 +694,7 @@ static void prune_dqcache(int count)
  static int shrink_dqcache_memory(struct shrinker *shrink,
  				 struct shrink_control *sc)
  {
-	int nr = sc->nr_slab_to_reclaim;
+	int nr = sc->nr_to_scan;

  	if (nr) {
  		spin_lock(&dq_list_lock);
diff --git a/fs/xfs/linux-2.6/xfs_buf.c b/fs/xfs/linux-2.6/xfs_buf.c
index 04b9558..ddac2ec 100644
--- a/fs/xfs/linux-2.6/xfs_buf.c
+++ b/fs/xfs/linux-2.6/xfs_buf.c
@@ -1406,7 +1406,7 @@ xfs_buftarg_shrink(
  	struct xfs_buftarg	*btp = container_of(shrink,
  					struct xfs_buftarg, bt_shrinker);
  	struct xfs_buf		*bp;
-	int nr_to_scan = sc->nr_slab_to_reclaim;
+	int nr_to_scan = sc->nr_to_scan;
  	LIST_HEAD(dispose);

  	if (!nr_to_scan)
diff --git a/fs/xfs/linux-2.6/xfs_sync.c b/fs/xfs/linux-2.6/xfs_sync.c
index 3fa9aae..2460114 100644
--- a/fs/xfs/linux-2.6/xfs_sync.c
+++ b/fs/xfs/linux-2.6/xfs_sync.c
@@ -1028,7 +1028,7 @@ xfs_reclaim_inode_shrink(
  	struct xfs_perag *pag;
  	xfs_agnumber_t	ag;
  	int		reclaimable;
-	int nr_to_scan = sc->nr_slab_to_reclaim;
+	int nr_to_scan = sc->nr_to_scan;
  	gfp_t gfp_mask = sc->gfp_mask;

  	mp = container_of(shrink, struct xfs_mount, m_inode_shrink);
diff --git a/fs/xfs/quota/xfs_qm.c b/fs/xfs/quota/xfs_qm.c
index 2954330..c31a7ae 100644
--- a/fs/xfs/quota/xfs_qm.c
+++ b/fs/xfs/quota/xfs_qm.c
@@ -2012,7 +2012,7 @@ xfs_qm_shake(
  	struct shrink_control *sc)
  {
  	int	ndqused, nfree, n;
-	int nr_to_scan = sc->nr_slab_to_reclaim;
+	int nr_to_scan = sc->nr_to_scan;
  	gfp_t gfp_mask = sc->gfp_mask;

  	if (!kmem_shake_allow(gfp_mask))
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 72ba1f5..02be595 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1134,19 +1134,18 @@ static inline void sync_mm_rss(struct task_struct *task, struct mm_struct *mm)
   * We consolidate the values for easier extention later.
   */
  struct shrink_control {
-	unsigned long nr_scanned;
  	gfp_t gfp_mask;

-	/* How many slab objects shrinker() should reclaim */
-	unsigned long nr_slab_to_reclaim;
+	/* How many slab objects shrinker() should scan and try to reclaim */
+	unsigned long nr_to_scan;
  };

  /*
   * A callback you can register to apply pressure to ageable caches.
   *
- * 'sc' is passed shrink_control which includes a count 'nr_slab_to_reclaim'
- * and a 'gfpmask'.  It should look through the least-recently-us
- * 'nr_slab_to_reclaim' entries and attempt to free them up.  It should return
+ * 'sc' is passed shrink_control which includes a count 'nr_to_scan'
+ * and a 'gfpmask'.  It should look through the least-recently-used
+ * 'nr_to_scan' entries and attempt to free them up.  It should return
   * the number of objects which remain in the cache.  If it returns -1, it means
   * it cannot do any scanning at this time (eg. there is a risk of deadlock).
   *
@@ -1613,6 +1612,7 @@ int in_gate_area_no_mm(unsigned long addr);
  int drop_caches_sysctl_handler(struct ctl_table *, int,
  					void __user *, size_t *, loff_t *);
  unsigned long shrink_slab(struct shrink_control *shrink,
+			  unsigned long nr_pages_scanned,
  				unsigned long lru_pages);

  #ifndef CONFIG_MMU
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 341341b..5c8f7e0 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -241,10 +241,9 @@ void shake_page(struct page *p, int access)
  		do {
  			struct shrink_control shrink = {
  				.gfp_mask = GFP_KERNEL,
-				.nr_scanned = 1000,
  			};

-			nr = shrink_slab(&shrink, 1000);
+			nr = shrink_slab(&shrink, 1000, 1000);
  			if (page_count(p) == 1)
  				break;
  		} while (nr > 10);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 292582c..89e24f7 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -201,6 +201,14 @@ void unregister_shrinker(struct shrinker *shrinker)
  }
  EXPORT_SYMBOL(unregister_shrinker);

+static inline int do_shrinker_shrink(struct shrinker *shrinker,
+				     struct shrink_control *sc,
+				     unsigned long nr_to_scan)
+{
+	sc->nr_to_scan = nr_to_scan;
+	return (*shrinker->shrink)(shrinker, sc);
+}
+
  #define SHRINK_BATCH 128
  /*
   * Call the shrink functions to age shrinkable caches
@@ -222,14 +230,14 @@ EXPORT_SYMBOL(unregister_shrinker);
   * Returns the number of slab objects which we shrunk.
   */
  unsigned long shrink_slab(struct shrink_control *shrink,
+			  unsigned long nr_pages_scanned,
  			  unsigned long lru_pages)
  {
  	struct shrinker *shrinker;
  	unsigned long ret = 0;
-	unsigned long scanned = shrink->nr_scanned;

-	if (scanned == 0)
-		scanned = SWAP_CLUSTER_MAX;
+	if (nr_pages_scanned == 0)
+		nr_pages_scanned = SWAP_CLUSTER_MAX;

  	if (!down_read_trylock(&shrinker_rwsem))
  		return 1;	/* Assume we'll be able to shrink next time */
@@ -239,9 +247,8 @@ unsigned long shrink_slab(struct shrink_control *shrink,
  		unsigned long total_scan;
  		unsigned long max_pass;

-		shrink->nr_slab_to_reclaim = 0;
-		max_pass = (*shrinker->shrink)(shrinker, shrink);
-		delta = (4 * scanned) / shrinker->seeks;
+		max_pass = do_shrinker_shrink(shrinker, shrink, 0);
+		delta = (4 * nr_pages_scanned) / shrinker->seeks;
  		delta *= max_pass;
  		do_div(delta, lru_pages + 1);
  		shrinker->nr += delta;
@@ -268,11 +275,9 @@ unsigned long shrink_slab(struct shrink_control *shrink,
  			int shrink_ret;
  			int nr_before;

-			shrink->nr_slab_to_reclaim = 0;
-			nr_before = (*shrinker->shrink)(shrinker, shrink);
-			shrink->nr_slab_to_reclaim = this_scan;
-			shrink_ret = (*shrinker->shrink)(shrinker, shrink);
-
+			nr_before = do_shrinker_shrink(shrinker, shrink, 0);
+			shrink_ret = do_shrinker_shrink(shrinker, shrink,
+							this_scan);
  			if (shrink_ret == -1)
  				break;
  			if (shrink_ret < nr_before)
@@ -2086,8 +2091,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
  				lru_pages += zone_reclaimable_pages(zone);
  			}

-			shrink->nr_scanned = sc->nr_scanned;
-			shrink_slab(shrink, lru_pages);
+			shrink_slab(shrink, sc->nr_scanned, lru_pages);
  			if (reclaim_state) {
  				sc->nr_reclaimed += reclaim_state->reclaimed_slab;
  				reclaim_state->reclaimed_slab = 0;
@@ -2488,8 +2492,7 @@ loop_again:
  					end_zone, 0))
  				shrink_zone(priority, zone, &sc);
  			reclaim_state->reclaimed_slab = 0;
-			shrink.nr_scanned = sc.nr_scanned;
-			nr_slab = shrink_slab(&shrink, lru_pages);
+			nr_slab = shrink_slab(&shrink, sc.nr_scanned, lru_pages);
  			sc.nr_reclaimed += reclaim_state->reclaimed_slab;
  			total_scanned += sc.nr_scanned;

@@ -3057,7 +3060,6 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
  	}

  	nr_slab_pages0 = zone_page_state(zone, NR_SLAB_RECLAIMABLE);
-	shrink.nr_scanned = sc.nr_scanned;
  	if (nr_slab_pages0 > zone->min_slab_pages) {
  		/*
  		 * shrink_slab() does not currently allow us to determine how
@@ -3073,7 +3075,7 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
  			unsigned long lru_pages = zone_reclaimable_pages(zone);

  			/* No reclaimable slab or very low memory pressure */
-			if (!shrink_slab(&shrink, lru_pages))
+			if (!shrink_slab(&shrink, sc.nr_scanned, lru_pages))
  				break;

  			/* Freed enough memory */
-- 
1.7.3.1
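
To make the resulting callback contract concrete, the following is a minimal
sketch of a shrinker written against the API as it stands after the fix-up
above, i.e. a shrink_control carrying only gfp_mask and nr_to_scan. The
example_cache_* helpers and all names here are hypothetical; this is not code
from any tree in this thread.

/*
 * Hypothetical illustration of the post-fix-up shrinker API.
 * example_cache_count()/example_cache_prune() stand in for a real cache.
 */
static int example_shrink(struct shrinker *shrink, struct shrink_control *sc)
{
	/* callers that cannot re-enter the filesystem bail out */
	if (!(sc->gfp_mask & __GFP_FS))
		return -1;

	/*
	 * nr_to_scan == 0 is the "just report how many objects you have"
	 * probe that shrink_slab() issues before deciding how much
	 * pressure to apply.
	 */
	if (sc->nr_to_scan)
		example_cache_prune(sc->nr_to_scan);

	/* return the number of objects remaining in the cache */
	return example_cache_count();
}

static struct shrinker example_shrinker = {
	.shrink	= example_shrink,
	.seeks	= DEFAULT_SEEKS,
};
/* paired with register_shrinker(&example_shrinker) at init time */

Note that the callback never sees nr_scanned or lru_pages: with the patch
above, shrink_slab() keeps that bookkeeping to itself and only hands the
shrinker a scan count.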





^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH V2 2/2] change shrinker API by passing shrink_control struct
  2011-04-27  1:15         ` Ying Han
@ 2011-04-27  1:18           ` Ying Han
  0 siblings, 0 replies; 13+ messages in thread
From: Ying Han @ 2011-04-27  1:18 UTC (permalink / raw)
  To: KOSAKI Motohiro
  Cc: npiggin, Minchan Kim, Daisuke Nishimura, Balbir Singh, Tejun Heo,
	Pavel Emelyanov, KAMEZAWA Hiroyuki, Andrew Morton, Li Zefan,
	Mel Gorman, Rik van Riel, Johannes Weiner, Hugh Dickins,
	Michal Hocko, Dave Hansen, Zhu Yanhai, linux-mm

Hmm, got Nick's email wrong.

--Ying

On Tue, Apr 26, 2011 at 6:15 PM, Ying Han <yinghan@google.com> wrote:
> On Tue, Apr 26, 2011 at 5:47 PM, KOSAKI Motohiro
> <kosaki.motohiro@jp.fujitsu.com> wrote:
>>> > >  {
>>> > >       struct xfs_mount *mp;
>>> > >       struct xfs_perag *pag;
>>> > >       xfs_agnumber_t  ag;
>>> > >       int             reclaimable;
>>> > > +     int nr_to_scan = sc->nr_slab_to_reclaim;
>>> > > +     gfp_t gfp_mask = sc->gfp_mask;
>>> >
>>> > And, this very near meaning field .nr_scanned and .nr_slab_to_reclaim
>>> > poped up new question.
>>> > Why don't we pass more clever slab shrinker target? Why do we need pass
>>> > similar two argument?
>>> >
>>>
>>> I renamed the nr_slab_to_reclaim and nr_scanned in shrink struct.
>>
>> Oh no. that's not naming issue. example, Nick's previous similar patch pass
>> zone-total-pages and how-much-scanned-pages. (ie shrink_slab don't calculate
>> current magical target scanning objects anymore)
>>        ie,  "4 *  max_pass  * (scanned / nr- lru_pages-in-zones)"
>>
>> Instead, individual shrink_slab callback calculate this one.
>> see git://git.kernel.org/pub/scm/linux/kernel/git/npiggin/linux-npiggin.git
>>
>> I'm curious why you change the design from another guy's previous very similar effort and
>> We have to be convinced which is better.
>
> Thank you for the pointer. My patch is intended to consolidate all
> existing parameters passed from reclaim code
> to the shrinker.
>
> Talked w/ Nick and Andrew from last LSF,  we agree that this patch
> will be useful for other extensions later which allows us easily
> adding extensions to the shrinkers without shrinker files. Nick and I
> talked about the effort later to pass the nodemask down to the
> shrinker. He is cc-ed in the thread. Another thing I would like to
> repost is to add the reclaim priority down to the shrinker, which we
> won't throw tons of page caches pages by reclaiming one inode slab
> object.
>
> --Ying
>
>
>
>>
>>
>>
>>
>


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH V2 2/2] change shrinker API by passing shrink_control struct
  2011-04-27  0:47       ` KOSAKI Motohiro
@ 2011-04-27  1:15         ` Ying Han
  2011-04-27  1:18           ` Ying Han
  0 siblings, 1 reply; 13+ messages in thread
From: Ying Han @ 2011-04-27  1:15 UTC (permalink / raw)
  To: KOSAKI Motohiro
  Cc: Nick Piggin, Minchan Kim, Daisuke Nishimura, Balbir Singh,
	Tejun Heo, Pavel Emelyanov, KAMEZAWA Hiroyuki, Andrew Morton,
	Li Zefan, Mel Gorman, Rik van Riel, Johannes Weiner,
	Hugh Dickins, Michal Hocko, Dave Hansen, Zhu Yanhai, linux-mm

On Tue, Apr 26, 2011 at 5:47 PM, KOSAKI Motohiro
<kosaki.motohiro@jp.fujitsu.com> wrote:
>> > >  {
>> > >       struct xfs_mount *mp;
>> > >       struct xfs_perag *pag;
>> > >       xfs_agnumber_t  ag;
>> > >       int             reclaimable;
>> > > +     int nr_to_scan = sc->nr_slab_to_reclaim;
>> > > +     gfp_t gfp_mask = sc->gfp_mask;
>> >
>> > And, this very near meaning field .nr_scanned and .nr_slab_to_reclaim
>> > poped up new question.
>> > Why don't we pass more clever slab shrinker target? Why do we need pass
>> > similar two argument?
>> >
>>
>> I renamed the nr_slab_to_reclaim and nr_scanned in shrink struct.
>
> Oh no. that's not naming issue. example, Nick's previous similar patch pass
> zone-total-pages and how-much-scanned-pages. (ie shrink_slab don't calculate
> current magical target scanning objects anymore)
>        ie,  "4 *  max_pass  * (scanned / nr- lru_pages-in-zones)"
>
> Instead, individual shrink_slab callback calculate this one.
> see git://git.kernel.org/pub/scm/linux/kernel/git/npiggin/linux-npiggin.git
>
> I'm curious why you change the design from another guy's previous very similar effort and
> We have to be convinced which is better.

Thank you for the pointer. My patch is intended to consolidate all
existing parameters passed from reclaim code
to the shrinker.

Talked w/ Nick and Andrew from last LSF,  we agree that this patch
will be useful for other extensions later which allows us easily
adding extensions to the shrinkers without shrinker files. Nick and I
talked about the effort later to pass the nodemask down to the
shrinker. He is cc-ed in the thread. Another thing I would like to
repost is to add the reclaim priority down to the shrinker, which we
won't throw tons of page caches pages by reclaiming one inode slab
object.

--Ying
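
To illustrate the consolidation argument above: once every shrinker takes a
struct shrink_control *, later extensions only add a field to the struct plus
changes in the shrinkers that care, instead of touching every callback
signature again. A sketch of what that could look like follows; the first
three fields are the ones in the V2 patch, while nodes_to_scan and priority
are purely hypothetical placeholders for the extensions mentioned (nodemask
and reclaim priority), not anything posted in this thread.

struct shrink_control {
	unsigned long nr_scanned;
	gfp_t gfp_mask;

	/* How many slab objects shrinker() should reclaim */
	unsigned long nr_slab_to_reclaim;

	/* hypothetical future extensions discussed above */
	nodemask_t *nodes_to_scan;	/* restrict shrinking to these nodes */
	int priority;			/* reclaim priority from vmscan */
};

With that in place, a shrinker that does not care about nodes or priority
keeps compiling unchanged, which is the point of passing a struct rather than
a growing argument list.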



>
>
>
>


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH V2 2/2] change shrinker API by passing shrink_control struct
  2011-04-27  0:21     ` Ying Han
@ 2011-04-27  0:47       ` KOSAKI Motohiro
  2011-04-27  1:15         ` Ying Han
  0 siblings, 1 reply; 13+ messages in thread
From: KOSAKI Motohiro @ 2011-04-27  0:47 UTC (permalink / raw)
  To: Ying Han
  Cc: kosaki.motohiro, Nick Piggin, Minchan Kim, Daisuke Nishimura,
	Balbir Singh, Tejun Heo, Pavel Emelyanov, KAMEZAWA Hiroyuki,
	Andrew Morton, Li Zefan, Mel Gorman, Rik van Riel,
	Johannes Weiner, Hugh Dickins, Michal Hocko, Dave Hansen,
	Zhu Yanhai, linux-mm

> > >  {
> > >       struct xfs_mount *mp;
> > >       struct xfs_perag *pag;
> > >       xfs_agnumber_t  ag;
> > >       int             reclaimable;
> > > +     int nr_to_scan = sc->nr_slab_to_reclaim;
> > > +     gfp_t gfp_mask = sc->gfp_mask;
> >
> > And, this very near meaning field .nr_scanned and .nr_slab_to_reclaim
> > poped up new question.
> > Why don't we pass more clever slab shrinker target? Why do we need pass
> > similar two argument?
> >
> 
> I renamed the nr_slab_to_reclaim and nr_scanned in shrink struct.

Oh no. that's not naming issue. example, Nick's previous similar patch pass
zone-total-pages and how-much-scanned-pages. (ie shrink_slab don't calculate 
current magical target scanning objects anymore)
	ie,  "4 *  max_pass  * (scanned / nr- lru_pages-in-zones)"

Instead, individual shrink_slab callback calculate this one.
see git://git.kernel.org/pub/scm/linux/kernel/git/npiggin/linux-npiggin.git

I'm curious why you change the design from another guy's previous very similar effort and
We have to be convinced which is better.
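
For comparison, a rough sketch of what the design described above would look
like from a shrinker's point of view: shrink_slab() passes the raw pressure
numbers and each callback derives its own scan target, roughly
4 * max_pass * (scanned / lru_pages). The signature and the my_cache_*
helpers are illustrative only, not taken from Nick's tree.

/*
 * Hypothetical sketch of the "callback computes its own target" design.
 * my_cache_count()/my_cache_prune() are made-up helpers.
 */
static int my_cache_shrink(struct shrinker *s, unsigned long nr_scanned,
			   unsigned long lru_pages, gfp_t gfp_mask)
{
	unsigned long max_pass = my_cache_count();	/* objects in cache */
	unsigned long long nr_to_scan;

	if (!(gfp_mask & __GFP_FS))
		return -1;

	/* scale cache size by the fraction of the LRUs just scanned */
	nr_to_scan = 4ULL * nr_scanned * max_pass;
	do_div(nr_to_scan, lru_pages + 1);

	my_cache_prune(nr_to_scan);
	return my_cache_count();	/* objects remaining */
}

Whether this per-callback calculation or the centralized one in shrink_slab()
is the better interface is exactly the question raised above.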




^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH V2 2/2] change shrinker API by passing shrink_control struct
  2011-04-26  1:15   ` KOSAKI Motohiro
@ 2011-04-27  0:21     ` Ying Han
  2011-04-27  0:47       ` KOSAKI Motohiro
  0 siblings, 1 reply; 13+ messages in thread
From: Ying Han @ 2011-04-27  0:21 UTC (permalink / raw)
  To: KOSAKI Motohiro
  Cc: Nick Piggin, Minchan Kim, Daisuke Nishimura, Balbir Singh,
	Tejun Heo, Pavel Emelyanov, KAMEZAWA Hiroyuki, Andrew Morton,
	Li Zefan, Mel Gorman, Rik van Riel, Johannes Weiner,
	Hugh Dickins, Michal Hocko, Dave Hansen, Zhu Yanhai, linux-mm

[-- Attachment #1: Type: text/plain, Size: 2299 bytes --]

On Mon, Apr 25, 2011 at 6:15 PM, KOSAKI Motohiro <
kosaki.motohiro@jp.fujitsu.com> wrote:

> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 7a2f657..abc13ea 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -1137,16 +1137,19 @@ static inline void sync_mm_rss(struct task_struct
> *task, struct mm_struct *mm)
> >  struct shrink_control {
> >       unsigned long nr_scanned;
> >       gfp_t gfp_mask;
> > +
> > +     /* How many slab objects shrinker() should reclaim */
> > +     unsigned long nr_slab_to_reclaim;
>
> Wrong name. The original shrinker API is
>        int (*shrink)(struct shrinker *, int nr_to_scan, gfp_t gfp_mask);
>
> ie, the shrinker gets a scanning target, not a reclaiming target.
> You should have thought the following diff hunk was strange.
>

Ok, changed to "nr_slab_to_shrink"

>
> >  {
> >       struct xfs_mount *mp;
> >       struct xfs_perag *pag;
> >       xfs_agnumber_t  ag;
> >       int             reclaimable;
> > +     int nr_to_scan = sc->nr_slab_to_reclaim;
> > +     gfp_t gfp_mask = sc->gfp_mask;
>
> And, this very near meaning field .nr_scanned and .nr_slab_to_reclaim
> poped up new question.
> Why don't we pass more clever slab shrinker target? Why do we need pass
> similar two argument?
>

I renamed the nr_slab_to_reclaim and nr_scanned in shrink struct.

--Ying

> >  /*
> >   * A callback you can register to apply pressure to ageable caches.
> >   *
> > - * 'shrink' is passed a count 'nr_to_scan' and a 'gfpmask'.  It should
> > - * look through the least-recently-used 'nr_to_scan' entries and
> > - * attempt to free them up.  It should return the number of objects
> > - * which remain in the cache.  If it returns -1, it means it cannot do
> > - * any scanning at this time (eg. there is a risk of deadlock).
> > + * 'sc' is passed shrink_control which includes a count
> 'nr_slab_to_reclaim'
> > + * and a 'gfpmask'.  It should look through the least-recently-us
>
>                                                                  us?
>
>
> > + * 'nr_slab_to_reclaim' entries and attempt to free them up.  It should
> return
> > + * the number of objects which remain in the cache.  If it returns -1,
> it means
> > + * it cannot do any scanning at this time (eg. there is a risk of
> deadlock).
> >   *
>
>
>
>
>

[-- Attachment #2: Type: text/html, Size: 3186 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH V2 2/2] change shrinker API by passing shrink_control struct
  2011-04-25 17:22 ` [PATCH V2 2/2] change shrinker API by passing shrink_control struct Ying Han
  2011-04-25 18:34   ` Rik van Riel
  2011-04-25 18:47   ` Pavel Emelyanov
@ 2011-04-26  1:15   ` KOSAKI Motohiro
  2011-04-27  0:21     ` Ying Han
  2 siblings, 1 reply; 13+ messages in thread
From: KOSAKI Motohiro @ 2011-04-26  1:15 UTC (permalink / raw)
  To: Ying Han
  Cc: kosaki.motohiro, Nick Piggin, Minchan Kim, Daisuke Nishimura,
	Balbir Singh, Tejun Heo, Pavel Emelyanov, KAMEZAWA Hiroyuki,
	Andrew Morton, Li Zefan, Mel Gorman, Rik van Riel,
	Johannes Weiner, Hugh Dickins, Michal Hocko, Dave Hansen,
	Zhu Yanhai, linux-mm

> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 7a2f657..abc13ea 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1137,16 +1137,19 @@ static inline void sync_mm_rss(struct task_struct *task, struct mm_struct *mm)
>  struct shrink_control {
>  	unsigned long nr_scanned;
>  	gfp_t gfp_mask;
> +
> +	/* How many slab objects shrinker() should reclaim */
> +	unsigned long nr_slab_to_reclaim;

Wrong name. The original shrinker API is
	int (*shrink)(struct shrinker *, int nr_to_scan, gfp_t gfp_mask);

ie, the shrinker gets a scanning target, not a reclaiming target.
You should have thought the following diff hunk was strange.

>  {
>  	struct xfs_mount *mp;
>  	struct xfs_perag *pag;
>  	xfs_agnumber_t	ag;
>  	int		reclaimable;
> +	int nr_to_scan = sc->nr_slab_to_reclaim;
> +	gfp_t gfp_mask = sc->gfp_mask;

And, this very near meaning field .nr_scanned and .nr_slab_to_reclaim
poped up new question.
Why don't we pass more clever slab shrinker target? Why do we need pass
similar two argument?


>  /*
>   * A callback you can register to apply pressure to ageable caches.
>   *
> - * 'shrink' is passed a count 'nr_to_scan' and a 'gfpmask'.  It should
> - * look through the least-recently-used 'nr_to_scan' entries and
> - * attempt to free them up.  It should return the number of objects
> - * which remain in the cache.  If it returns -1, it means it cannot do
> - * any scanning at this time (eg. there is a risk of deadlock).
> + * 'sc' is passed shrink_control which includes a count 'nr_slab_to_reclaim'
> + * and a 'gfpmask'.  It should look through the least-recently-us

                                                                  us?


> + * 'nr_slab_to_reclaim' entries and attempt to free them up.  It should return
> + * the number of objects which remain in the cache.  If it returns -1, it means
> + * it cannot do any scanning at this time (eg. there is a risk of deadlock).
>   *





^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH V2 2/2] change shrinker API by passing shrink_control struct
  2011-04-25 17:22 ` [PATCH V2 2/2] change shrinker API by passing shrink_control struct Ying Han
  2011-04-25 18:34   ` Rik van Riel
@ 2011-04-25 18:47   ` Pavel Emelyanov
  2011-04-26  1:15   ` KOSAKI Motohiro
  2 siblings, 0 replies; 13+ messages in thread
From: Pavel Emelyanov @ 2011-04-25 18:47 UTC (permalink / raw)
  To: Ying Han
  Cc: Nick Piggin, KOSAKI Motohiro, Minchan Kim, Daisuke Nishimura,
	Balbir Singh, Tejun Heo, KAMEZAWA Hiroyuki, Andrew Morton,
	Li Zefan, Mel Gorman, Rik van Riel, Johannes Weiner,
	Hugh Dickins, Michal Hocko, Dave Hansen, Zhu Yanhai, linux-mm

On 04/25/2011 09:22 PM, Ying Han wrote:
> The patch changes each shrinkers API by consolidating the existing
> parameters into shrink_control struct. This will simplify any further
> features added w/o touching each file of shrinker.
> 
> changelog v2..v1:
> 1. replace the scan_control to shrink_control, and only pass the set
> of values down to the shrinker.
> 
> Signed-off-by: Ying Han <yinghan@google.com>

Acked-by: Pavel Emelyanov <xemul@parallels.com>


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH V2 2/2] change shrinker API by passing shrink_control struct
  2011-04-25 17:22 ` [PATCH V2 2/2] change shrinker API by passing shrink_control struct Ying Han
@ 2011-04-25 18:34   ` Rik van Riel
  2011-04-25 18:47   ` Pavel Emelyanov
  2011-04-26  1:15   ` KOSAKI Motohiro
  2 siblings, 0 replies; 13+ messages in thread
From: Rik van Riel @ 2011-04-25 18:34 UTC (permalink / raw)
  To: Ying Han
  Cc: Nick Piggin, KOSAKI Motohiro, Minchan Kim, Daisuke Nishimura,
	Balbir Singh, Tejun Heo, Pavel Emelyanov, KAMEZAWA Hiroyuki,
	Andrew Morton, Li Zefan, Mel Gorman, Johannes Weiner,
	Hugh Dickins, Michal Hocko, Dave Hansen, Zhu Yanhai, linux-mm

On 04/25/2011 01:22 PM, Ying Han wrote:
> The patch changes each shrinkers API by consolidating the existing
> parameters into shrink_control struct. This will simplify any further
> features added w/o touching each file of shrinker.
>
> changelog v2..v1:
> 1. replace the scan_control to shrink_control, and only pass the set
> of values down to the shrinker.
>
> Signed-off-by: Ying Han<yinghan@google.com>

Acked-by: Rik van Riel<riel@redhat.com>

-- 
All rights reversed


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH V2 2/2] change shrinker API by passing shrink_control struct
  2011-04-25 17:22 [PATCH V2 0/2] pass the scan_control into shrinkers Ying Han
@ 2011-04-25 17:22 ` Ying Han
  2011-04-25 18:34   ` Rik van Riel
                     ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Ying Han @ 2011-04-25 17:22 UTC (permalink / raw)
  To: Nick Piggin, KOSAKI Motohiro, Minchan Kim, Daisuke Nishimura,
	Balbir Singh, Tejun Heo, Pavel Emelyanov, KAMEZAWA Hiroyuki,
	Andrew Morton, Li Zefan, Mel Gorman, Rik van Riel,
	Johannes Weiner, Hugh Dickins, Michal Hocko, Dave Hansen,
	Zhu Yanhai
  Cc: linux-mm

The patch changes each shrinkers API by consolidating the existing
parameters into shrink_control struct. This will simplify any further
features added w/o touching each file of shrinker.

changelog v2..v1:
1. replace the scan_control to shrink_control, and only pass the set
of values down to the shrinker.

Signed-off-by: Ying Han <yinghan@google.com>
---
 arch/x86/kvm/mmu.c                   |    3 ++-
 drivers/gpu/drm/i915/i915_gem.c      |    5 ++---
 drivers/gpu/drm/ttm/ttm_page_alloc.c |    4 +++-
 drivers/staging/zcache/zcache.c      |    5 ++++-
 fs/dcache.c                          |    8 ++++++--
 fs/gfs2/glock.c                      |    5 ++++-
 fs/inode.c                           |    6 +++++-
 fs/mbcache.c                         |   10 ++++++----
 fs/nfs/dir.c                         |    5 ++++-
 fs/nfs/internal.h                    |    2 +-
 fs/quota/dquot.c                     |    5 ++++-
 fs/xfs/linux-2.6/xfs_buf.c           |    4 ++--
 fs/xfs/linux-2.6/xfs_sync.c          |    5 +++--
 fs/xfs/quota/xfs_qm.c                |    7 ++++---
 include/linux/mm.h                   |   15 +++++++++------
 mm/vmscan.c                          |   11 +++++++----
 net/sunrpc/auth.c                    |    4 +++-
 17 files changed, 69 insertions(+), 35 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b6a9963..e194cb6 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3579,10 +3579,11 @@ static int kvm_mmu_remove_some_alloc_mmu_pages(struct kvm *kvm,
 	return kvm_mmu_prepare_zap_page(kvm, page, invalid_list);
 }
 
-static int mmu_shrink(struct shrinker *shrink, int nr_to_scan, gfp_t gfp_mask)
+static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
 {
 	struct kvm *kvm;
 	struct kvm *kvm_freed = NULL;
+	int nr_to_scan = sc->nr_slab_to_reclaim;
 
 	if (nr_to_scan == 0)
 		goto out;
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 4aaf6cd..99afb5b 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4105,9 +4105,7 @@ i915_gpu_is_active(struct drm_device *dev)
 }
 
 static int
-i915_gem_inactive_shrink(struct shrinker *shrinker,
-			 int nr_to_scan,
-			 gfp_t gfp_mask)
+i915_gem_inactive_shrink(struct shrinker *shrinker, struct shrink_control *sc)
 {
 	struct drm_i915_private *dev_priv =
 		container_of(shrinker,
@@ -4115,6 +4113,7 @@ i915_gem_inactive_shrink(struct shrinker *shrinker,
 			     mm.inactive_shrinker);
 	struct drm_device *dev = dev_priv->dev;
 	struct drm_i915_gem_object *obj, *next;
+	int nr_to_scan = sc->nr_slab_to_reclaim;
 	int cnt;
 
 	if (!mutex_trylock(&dev->struct_mutex))
diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c
index 737a2a2..2f2376b 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
@@ -395,12 +395,14 @@ static int ttm_pool_get_num_unused_pages(void)
 /**
  * Callback for mm to request pool to reduce number of page held.
  */
-static int ttm_pool_mm_shrink(struct shrinker *shrink, int shrink_pages, gfp_t gfp_mask)
+static int ttm_pool_mm_shrink(struct shrinker *shrink,
+			      struct shrink_control *sc)
 {
 	static atomic_t start_pool = ATOMIC_INIT(0);
 	unsigned i;
 	unsigned pool_offset = atomic_add_return(1, &start_pool);
 	struct ttm_page_pool *pool;
+	int shrink_pages = sc->nr_slab_to_reclaim;
 
 	pool_offset = pool_offset % NUM_POOLS;
 	/* select start pool in round robin fashion */
diff --git a/drivers/staging/zcache/zcache.c b/drivers/staging/zcache/zcache.c
index b8a2b30..135851a 100644
--- a/drivers/staging/zcache/zcache.c
+++ b/drivers/staging/zcache/zcache.c
@@ -1181,9 +1181,12 @@ static bool zcache_freeze;
 /*
  * zcache shrinker interface (only useful for ephemeral pages, so zbud only)
  */
-static int shrink_zcache_memory(struct shrinker *shrink, int nr, gfp_t gfp_mask)
+static int shrink_zcache_memory(struct shrinker *shrink,
+				struct shrink_control *sc)
 {
 	int ret = -1;
+	int nr = sc->nr_slab_to_reclaim;
+	gfp_t gfp_mask = sc->gfp_mask;
 
 	if (nr >= 0) {
 		if (!(gfp_mask & __GFP_FS))
diff --git a/fs/dcache.c b/fs/dcache.c
index 2f65679..47566ab 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -1237,7 +1237,7 @@ void shrink_dcache_parent(struct dentry * parent)
 EXPORT_SYMBOL(shrink_dcache_parent);
 
 /*
- * Scan `nr' dentries and return the number which remain.
+ * Scan `sc->nr_slab_to_reclaim' dentries and return the number which remain.
  *
  * We need to avoid reentering the filesystem if the caller is performing a
  * GFP_NOFS allocation attempt.  One example deadlock is:
@@ -1248,8 +1248,12 @@ EXPORT_SYMBOL(shrink_dcache_parent);
  *
  * In this case we return -1 to tell the caller that we baled.
  */
-static int shrink_dcache_memory(struct shrinker *shrink, int nr, gfp_t gfp_mask)
+static int shrink_dcache_memory(struct shrinker *shrink,
+				struct shrink_control *sc)
 {
+	int nr = sc->nr_slab_to_reclaim;
+	gfp_t gfp_mask = sc->gfp_mask;
+
 	if (nr) {
 		if (!(gfp_mask & __GFP_FS))
 			return -1;
diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
index bc36aef..e196ece 100644
--- a/fs/gfs2/glock.c
+++ b/fs/gfs2/glock.c
@@ -1348,11 +1348,14 @@ void gfs2_glock_complete(struct gfs2_glock *gl, int ret)
 }
 
 
-static int gfs2_shrink_glock_memory(struct shrinker *shrink, int nr, gfp_t gfp_mask)
+static int gfs2_shrink_glock_memory(struct shrinker *shrink,
+				    struct shrink_control *sc)
 {
 	struct gfs2_glock *gl;
 	int may_demote;
 	int nr_skipped = 0;
+	int nr = sc->nr_slab_to_reclaim;
+	gfp_t gfp_mask = sc->gfp_mask;
 	LIST_HEAD(skipped);
 
 	if (nr == 0)
diff --git a/fs/inode.c b/fs/inode.c
index f4018ab..e65d00a 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -703,8 +703,12 @@ static void prune_icache(int nr_to_scan)
  * This function is passed the number of inodes to scan, and it returns the
  * total number of remaining possibly-reclaimable inodes.
  */
-static int shrink_icache_memory(struct shrinker *shrink, int nr, gfp_t gfp_mask)
+static int shrink_icache_memory(struct shrinker *shrink,
+				struct shrink_control *sc)
 {
+	int nr = sc->nr_slab_to_reclaim;
+	gfp_t gfp_mask = sc->gfp_mask;
+
 	if (nr) {
 		/*
 		 * Nasty deadlock avoidance.  We may hold various FS locks,
diff --git a/fs/mbcache.c b/fs/mbcache.c
index a25444ab..3ba68e4 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -90,7 +90,8 @@ static DEFINE_SPINLOCK(mb_cache_spinlock);
  * What the mbcache registers as to get shrunk dynamically.
  */
 
-static int mb_cache_shrink_fn(struct shrinker *shrink, int nr_to_scan, gfp_t gfp_mask);
+static int mb_cache_shrink_fn(struct shrinker *shrink,
+			      struct shrink_control *sc);
 
 static struct shrinker mb_cache_shrinker = {
 	.shrink = mb_cache_shrink_fn,
@@ -156,18 +157,19 @@ forget:
  * gets low.
  *
  * @shrink: (ignored)
- * @nr_to_scan: Number of objects to scan
- * @gfp_mask: (ignored)
+ * @sc: shrink_control passed from reclaim
  *
  * Returns the number of objects which are present in the cache.
  */
 static int
-mb_cache_shrink_fn(struct shrinker *shrink, int nr_to_scan, gfp_t gfp_mask)
+mb_cache_shrink_fn(struct shrinker *shrink, struct shrink_control *sc)
 {
 	LIST_HEAD(free_list);
 	struct mb_cache *cache;
 	struct mb_cache_entry *entry, *tmp;
 	int count = 0;
+	int nr_to_scan = sc->nr_slab_to_reclaim;
+	gfp_t gfp_mask = sc->gfp_mask;
 
 	mb_debug("trying to free %d entries", nr_to_scan);
 	spin_lock(&mb_cache_spinlock);
diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
index 2c3eb33..27aa4a4 100644
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -1962,11 +1962,14 @@ static void nfs_access_free_list(struct list_head *head)
 	}
 }
 
-int nfs_access_cache_shrinker(struct shrinker *shrink, int nr_to_scan, gfp_t gfp_mask)
+int nfs_access_cache_shrinker(struct shrinker *shrink,
+			      struct shrink_control *sc)
 {
 	LIST_HEAD(head);
 	struct nfs_inode *nfsi, *next;
 	struct nfs_access_entry *cache;
+	int nr_to_scan = sc->nr_slab_to_reclaim;
+	gfp_t gfp_mask = sc->gfp_mask;
 
 	if ((gfp_mask & GFP_KERNEL) != GFP_KERNEL)
 		return (nr_to_scan == 0) ? 0 : -1;
diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
index cf9fdbd..fabb489 100644
--- a/fs/nfs/internal.h
+++ b/fs/nfs/internal.h
@@ -218,7 +218,7 @@ void nfs_close_context(struct nfs_open_context *ctx, int is_sync);
 
 /* dir.c */
 extern int nfs_access_cache_shrinker(struct shrinker *shrink,
-					int nr_to_scan, gfp_t gfp_mask);
+					struct shrink_control *sc);
 
 /* inode.c */
 extern struct workqueue_struct *nfsiod_workqueue;
diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index a2a622e..09d8750 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -696,8 +696,11 @@ static void prune_dqcache(int count)
  * This is called from kswapd when we think we need some
  * more memory
  */
-static int shrink_dqcache_memory(struct shrinker *shrink, int nr, gfp_t gfp_mask)
+static int shrink_dqcache_memory(struct shrinker *shrink,
+				 struct shrink_control *sc)
 {
+	int nr = sc->nr_slab_to_reclaim;
+
 	if (nr) {
 		spin_lock(&dq_list_lock);
 		prune_dqcache(nr);
diff --git a/fs/xfs/linux-2.6/xfs_buf.c b/fs/xfs/linux-2.6/xfs_buf.c
index 5cb230f..a1d6e62 100644
--- a/fs/xfs/linux-2.6/xfs_buf.c
+++ b/fs/xfs/linux-2.6/xfs_buf.c
@@ -1542,12 +1542,12 @@ restart:
 int
 xfs_buftarg_shrink(
 	struct shrinker		*shrink,
-	int			nr_to_scan,
-	gfp_t			mask)
+	struct shrink_control	*sc)
 {
 	struct xfs_buftarg	*btp = container_of(shrink,
 					struct xfs_buftarg, bt_shrinker);
 	struct xfs_buf		*bp;
+	int nr_to_scan = sc->nr_slab_to_reclaim;
 	LIST_HEAD(dispose);
 
 	if (!nr_to_scan)
diff --git a/fs/xfs/linux-2.6/xfs_sync.c b/fs/xfs/linux-2.6/xfs_sync.c
index 6c10f1d..6749bd3 100644
--- a/fs/xfs/linux-2.6/xfs_sync.c
+++ b/fs/xfs/linux-2.6/xfs_sync.c
@@ -998,13 +998,14 @@ xfs_reclaim_inodes(
 static int
 xfs_reclaim_inode_shrink(
 	struct shrinker	*shrink,
-	int		nr_to_scan,
-	gfp_t		gfp_mask)
+	struct shrink_control *sc)
 {
 	struct xfs_mount *mp;
 	struct xfs_perag *pag;
 	xfs_agnumber_t	ag;
 	int		reclaimable;
+	int nr_to_scan = sc->nr_slab_to_reclaim;
+	gfp_t gfp_mask = sc->gfp_mask;
 
 	mp = container_of(shrink, struct xfs_mount, m_inode_shrink);
 	if (nr_to_scan) {
diff --git a/fs/xfs/quota/xfs_qm.c b/fs/xfs/quota/xfs_qm.c
index 254ee06..8e7991c 100644
--- a/fs/xfs/quota/xfs_qm.c
+++ b/fs/xfs/quota/xfs_qm.c
@@ -60,7 +60,7 @@ STATIC void	xfs_qm_list_destroy(xfs_dqlist_t *);
 
 STATIC int	xfs_qm_init_quotainos(xfs_mount_t *);
 STATIC int	xfs_qm_init_quotainfo(xfs_mount_t *);
-STATIC int	xfs_qm_shake(struct shrinker *, int, gfp_t);
+STATIC int	xfs_qm_shake(struct shrinker *, struct shrink_control *);
 
 static struct shrinker xfs_qm_shaker = {
 	.shrink = xfs_qm_shake,
@@ -2016,10 +2016,11 @@ xfs_qm_shake_freelist(
 STATIC int
 xfs_qm_shake(
 	struct shrinker	*shrink,
-	int		nr_to_scan,
-	gfp_t		gfp_mask)
+	struct shrink_control *sc)
 {
 	int	ndqused, nfree, n;
+	int nr_to_scan = sc->nr_slab_to_reclaim;
+	gfp_t gfp_mask = sc->gfp_mask;
 
 	if (!kmem_shake_allow(gfp_mask))
 		return 0;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7a2f657..abc13ea 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1137,16 +1137,19 @@ static inline void sync_mm_rss(struct task_struct *task, struct mm_struct *mm)
 struct shrink_control {
 	unsigned long nr_scanned;
 	gfp_t gfp_mask;
+
+	/* How many slab objects shrinker() should reclaim */
+	unsigned long nr_slab_to_reclaim;
 };
 
 /*
  * A callback you can register to apply pressure to ageable caches.
  *
- * 'shrink' is passed a count 'nr_to_scan' and a 'gfpmask'.  It should
- * look through the least-recently-used 'nr_to_scan' entries and
- * attempt to free them up.  It should return the number of objects
- * which remain in the cache.  If it returns -1, it means it cannot do
- * any scanning at this time (eg. there is a risk of deadlock).
+ * 'sc' is passed shrink_control which includes a count 'nr_slab_to_reclaim'
+ * and a 'gfpmask'.  It should look through the least-recently-used
+ * 'nr_slab_to_reclaim' entries and attempt to free them up.  It should return
+ * the number of objects which remain in the cache.  If it returns -1, it means
+ * it cannot do any scanning at this time (eg. there is a risk of deadlock).
  *
  * The 'gfpmask' refers to the allocation we are currently trying to
  * fulfil.
@@ -1155,7 +1158,7 @@ struct shrink_control {
  * querying the cache size, so a fastpath for that case is appropriate.
  */
 struct shrinker {
-	int (*shrink)(struct shrinker *, int nr_to_scan, gfp_t gfp_mask);
+	int (*shrink)(struct shrinker *, struct shrink_control *sc);
 	int seeks;	/* seeks to recreate an obj */
 
 	/* These are for internal use */
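
For context (not part of the diff), a minimal sketch of a shrinker
registered against the new prototype; the callback name is illustrative,
while register_shrinker(), unregister_shrinker() and DEFAULT_SEEKS are the
existing kernel helpers:

	static struct shrinker example_shrinker = {
		.shrink = example_cache_shrink,	/* new two-argument prototype */
		.seeks  = DEFAULT_SEEKS,
	};

	/* at subsystem init / teardown */
	register_shrinker(&example_shrinker);
	...
	unregister_shrinker(&example_shrinker);
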
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 40edf73..1241e87 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -239,7 +239,8 @@ unsigned long shrink_slab(struct shrink_control *shrink,
 		unsigned long total_scan;
 		unsigned long max_pass;
 
-		max_pass = (*shrinker->shrink)(shrinker, 0, gfp_mask);
+		shrink->nr_slab_to_reclaim = 0;
+		max_pass = (*shrinker->shrink)(shrinker, shrink);
 		delta = (4 * scanned) / shrinker->seeks;
 		delta *= max_pass;
 		do_div(delta, lru_pages + 1);
@@ -267,9 +268,11 @@ unsigned long shrink_slab(struct shrink_control *shrink,
 			int shrink_ret;
 			int nr_before;
 
-			nr_before = (*shrinker->shrink)(shrinker, 0, gfp_mask);
-			shrink_ret = (*shrinker->shrink)(shrinker, this_scan,
-								gfp_mask);
+			shrink->nr_slab_to_reclaim = 0;
+			nr_before = (*shrinker->shrink)(shrinker, shrink);
+			shrink->nr_slab_to_reclaim = this_scan;
+			shrink_ret = (*shrinker->shrink)(shrinker, shrink);
+
 			if (shrink_ret == -1)
 				break;
 			if (shrink_ret < nr_before)
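
For context (again not part of the diff), a sketch of the calling
convention the vmscan.c hunk above establishes: the caller fills in the
control struct and invokes the same callback twice, once with
nr_slab_to_reclaim == 0 to query the cache size and once with the batch to
scan; only the shrink_control fields come from this patch, the surrounding
variable names simply mirror shrink_slab():

	struct shrink_control shrink = {
		.gfp_mask   = gfp_mask,
		.nr_scanned = scanned,
	};

	/* query: nr_slab_to_reclaim == 0 asks for the cache size only */
	shrink.nr_slab_to_reclaim = 0;
	nr_before = (*shrinker->shrink)(shrinker, &shrink);

	/* scan: ask the cache to walk this_scan objects */
	shrink.nr_slab_to_reclaim = this_scan;
	shrink_ret = (*shrinker->shrink)(shrinker, &shrink);
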
diff --git a/net/sunrpc/auth.c b/net/sunrpc/auth.c
index 67e3127..287dd25 100644
--- a/net/sunrpc/auth.c
+++ b/net/sunrpc/auth.c
@@ -326,10 +326,12 @@ rpcauth_prune_expired(struct list_head *free, int nr_to_scan)
  * Run memory cache shrinker.
  */
 static int
-rpcauth_cache_shrinker(struct shrinker *shrink, int nr_to_scan, gfp_t gfp_mask)
+rpcauth_cache_shrinker(struct shrinker *shrink, struct shrink_control *sc)
 {
 	LIST_HEAD(free);
 	int res;
+	int nr_to_scan = sc->nr_slab_to_reclaim;
+	gfp_t gfp_mask = sc->gfp_mask;
 
 	if ((gfp_mask & GFP_KERNEL) != GFP_KERNEL)
 		return (nr_to_scan == 0) ? 0 : -1;
-- 
1.7.3.1

^ permalink raw reply related	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2011-05-20 12:31 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-05-20  2:59 [PATCH V2 2/2] change shrinker API by passing shrink_control struct KOSAKI Motohiro
2011-05-20  2:59 ` KOSAKI Motohiro
2011-05-20  3:23 ` Ying Han
2011-05-20 12:30   ` KOSAKI Motohiro
2011-05-20 12:30     ` KOSAKI Motohiro
  -- strict thread matches above, loose matches on Subject: below --
2011-04-25 17:22 [PATCH V2 0/2] pass the scan_control into shrinkers Ying Han
2011-04-25 17:22 ` [PATCH V2 2/2] change shrinker API by passing shrink_control struct Ying Han
2011-04-25 18:34   ` Rik van Riel
2011-04-25 18:47   ` Pavel Emelyanov
2011-04-26  1:15   ` KOSAKI Motohiro
2011-04-27  0:21     ` Ying Han
2011-04-27  0:47       ` KOSAKI Motohiro
2011-04-27  1:15         ` Ying Han
2011-04-27  1:18           ` Ying Han
