From: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
To: Nick Piggin <npiggin@kernel.dk>
Cc: kosaki.motohiro@jp.fujitsu.com, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Frank Mayhar <fmayhar@google.com>,
	John Stultz <johnstul@us.ibm.com>
Subject: Re: VFS scalability git tree
Date: Sat, 24 Jul 2010 19:54:43 +0900 (JST)
Message-ID: <20100724174648.3C9F.A69D9226@jp.fujitsu.com>
In-Reply-To: <20100724174038.3C96.A69D9226@jp.fujitsu.com>

> > At this point, I would be very interested in reviewing, correctness
> > testing on different configurations, and of course benchmarking.
> 
> I haven't reviewed this series in a long time, but I've found one mysterious
> shrink_slab() usage. Can you please look at my patch? (I will send it as
> another mail.)

Plus, I have one question. The upstream shrink_slab() calculation and your
calculation differ by more than your patch description explains.

upstream:

  shrink_slab()

                                lru_scanned        max_pass
      basic_scan_objects = 4 x -------------  x -----------------------------
                                lru_pages        shrinker->seeks (default:2)

      scan_objects = min(basic_scan_objects, max_pass * 2)

  shrink_icache_memory()

                                          sysctl_vfs_cache_pressure
      max_pass = inodes_stat.nr_unused x --------------------------
                                                   100


That is, a higher sysctl_vfs_cache_pressure leads to more slab reclaim.
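
For illustration, here is a minimal userspace sketch of the upstream
arithmetic (all input numbers are hypothetical; only the direction of the
effect matters):

  /* hypothetical model of upstream shrink_slab() + shrink_icache_memory() */
  #include <stdio.h>

  static unsigned long upstream_scan(unsigned long lru_scanned,
                                     unsigned long lru_pages,
                                     unsigned long nr_unused,
                                     unsigned int vfs_cache_pressure,
                                     unsigned int seeks)
  {
          /* shrink_icache_memory(): max_pass scales with the sysctl */
          unsigned long max_pass = nr_unused * vfs_cache_pressure / 100;
          /* shrink_slab(): basic scan count, capped at max_pass * 2 */
          unsigned long scan = 4UL * lru_scanned * max_pass
                                  / (lru_pages * seeks);

          return scan < max_pass * 2 ? scan : max_pass * 2;
  }

  int main(void)
  {
          /* doubling the sysctl doubles the scan count */
          printf("%lu\n", upstream_scan(1000, 100000, 10000, 100, 2)); /* 200 */
          printf("%lu\n", upstream_scan(1000, 100000, 10000, 200, 2)); /* 400 */
          return 0;
  }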


On the other hand, your code:
  shrinker_add_scan()

                           scanned          objects
      scan_objects = 4 x -------------  x -----------  x SHRINK_FACTOR x SHRINK_FACTOR
                           total            ratio

  shrink_icache_memory()

     ratio = DEFAULT_SEEKS * sysctl_vfs_cache_pressure / 100

That is, a higher sysctl_vfs_cache_pressure leads to less slab reclaim.
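
A matching sketch of the shrinker_add_scan() arithmetic as it stands in your
tree makes the inversion concrete (SHRINK_FACTOR is assumed to be 128 here,
purely for illustration):

  /* hypothetical model of the current shrinker_add_scan() delta */
  #include <stdio.h>

  #define SHRINK_FACTOR 128ULL  /* assumed value, for illustration only */

  static unsigned long tree_scan(unsigned long scanned, unsigned long total,
                                 unsigned long objects, unsigned long ratio)
  {
          unsigned long long delta = (unsigned long long)scanned * objects
                          * SHRINK_FACTOR * SHRINK_FACTOR * 4;

          /* 'ratio' is in the denominator: a larger ratio means less scanning */
          return (unsigned long)(delta / (ratio * total + 1));
  }

  int main(void)
  {
          /* ratio = DEFAULT_SEEKS * vfs_cache_pressure / 100, so doubling
           * the sysctl doubles ratio and roughly halves the scan count */
          printf("%lu\n", tree_scan(1000, 100000, 10000, 2)); /* ~3276800 */
          printf("%lu\n", tree_scan(1000, 100000, 10000, 4)); /* ~1638400 */
          return 0;
  }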


So, I guess the following change faithfully reflects your original intention.

The new calculation is:

  shrinker_add_scan()

                       scanned          
      scan_objects = -------------  x objects x ratio
                        total            

  shrink_icache_memory()

     ratio = DEFAULT_SEEKS * sysctl_vfs_cache_pressure / 100

This has the same behavior as upstream, because upstream's 4/shrinker->seeks
= 2, and the above likewise has DEFAULT_SEEKS = SHRINK_FACTOR*2.
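
A quick check with hypothetical numbers (scanned = 1000, total = 100000,
objects = 10000, sysctl_vfs_cache_pressure = 100), folding the SHRINK_FACTOR
fixed-point scale back out on the assumption that shrinker_do_scan() divides
it away again:

  upstream:  4 x (1000 / 100000) x (10000 x 100/100) / 2  = 200
  proposed:      (1000 / 100000) x 10000 x (2 x 100/100)  = 200

Doubling the sysctl now doubles both results, so the direction of
sysctl_vfs_cache_pressure matches upstream again.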



===============
o move 'ratio' from denominator to numerator
o adapt kvm mmu_shrink(): its seeks argument now multiplies instead of
  divides, so DEFAULT_SEEKS*10 ("scan one tenth as much") becomes
  DEFAULT_SEEKS/10
o SHRINK_FACTOR / 2 (default seek) x 4 (unknown shrink slab modifier)
    -> (SHRINK_FACTOR*2) == DEFAULT_SEEKS

---
 arch/x86/kvm/mmu.c |    2 +-
 mm/vmscan.c        |   10 ++--------
 2 files changed, 3 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index ae5a038..cea1e92 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2942,7 +2942,7 @@ static int mmu_shrink(struct shrinker *shrink,
 	}
 
 	shrinker_add_scan(&nr_to_scan, scanned, global, cache_count,
-			DEFAULT_SEEKS*10);
+			DEFAULT_SEEKS/10);
 
 done:
 	cache_count = shrinker_do_scan(&nr_to_scan, SHRINK_BATCH);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 89b593e..2d8e9ab 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -208,14 +208,8 @@ void shrinker_add_scan(unsigned long *dst,
 {
 	unsigned long long delta;
 
-	/*
-	 * The constant 4 comes from old code. Who knows why.
-	 * This could all use a good tune up with some decent
-	 * benchmarks and numbers.
-	 */
-	delta = (unsigned long long)scanned * objects
-			* SHRINK_FACTOR * SHRINK_FACTOR * 4UL;
-	do_div(delta, (ratio * total + 1));
+	delta = (unsigned long long)scanned * objects * ratio;
+	do_div(delta, total + 1);
 
 	/*
 	 * Avoid risking looping forever due to too large nr value:
-- 
1.6.5.2




