linux-kernel.vger.kernel.org archive mirror
* [PATCH v2] cpuacct: VIRT_CPU_ACCOUNTING don't prevent percpu cputime count
@ 2009-04-30  6:11 KOSAKI Motohiro
  2009-04-30  6:18 ` Balbir Singh
  2009-05-01  1:10 ` Andrew Morton
  0 siblings, 2 replies; 9+ messages in thread
From: KOSAKI Motohiro @ 2009-04-30  6:11 UTC (permalink / raw)
  To: LKML, Bharata B Rao, Balaji Rao, Dhaval Giani, KAMEZAWA Hiroyuki,
	Peter Zijlstra, Balbir Singh, Ingo Molnar, Martin Schwidefsky
  Cc: kosaki.motohiro


Changelog:
  since v1
  - use percpu_counter_sum() instead of percpu_counter_read()


-------------------------------------
Subject: [PATCH v2] cpuacct: VIRT_CPU_ACCOUNTING don't prevent percpu cputime count

cpuacct_update_stats() is called on every tick update, and it uses a
percpu_counter to avoid performance degradation.

For archs that define VIRT_CPU_ACCOUNTING, every tick results in >1000 units
of cputime updates, and since this is much greater than percpu_counter_batch,
we end up taking the spinlock on every tick.

This patch changes the batch rule: each cpu may now cache up to
jiffies_to_cputime(percpu_counter_batch) cputime before folding into the
global counter. This means there is no behavior change when
VIRT_CPU_ACCOUNTING=n.

Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
Cc: Balaji Rao <balajirrao@gmail.com>
Cc: Dhaval Giani <dhaval@linux.vnet.ibm.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
 kernel/sched.c |    8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

Index: b/kernel/sched.c
===================================================================
--- a/kernel/sched.c	2009-04-30 11:37:47.000000000 +0900
+++ b/kernel/sched.c	2009-04-30 14:17:00.000000000 +0900
@@ -10221,6 +10221,7 @@ struct cpuacct {
 };
 
 struct cgroup_subsys cpuacct_subsys;
+static s32 cpuacct_batch;
 
 /* return cpu accounting group corresponding to this container */
 static inline struct cpuacct *cgroup_ca(struct cgroup *cgrp)
@@ -10250,6 +10251,9 @@ static struct cgroup_subsys_state *cpuac
 	if (!ca->cpuusage)
 		goto out_free_ca;
 
+	if (!cpuacct_batch)
+		cpuacct_batch = jiffies_to_cputime(percpu_counter_batch);
+
 	for (i = 0; i < CPUACCT_STAT_NSTATS; i++)
 		if (percpu_counter_init(&ca->cpustat[i], 0))
 			goto out_free_counters;
@@ -10376,7 +10380,7 @@ static int cpuacct_stats_show(struct cgr
 	int i;
 
 	for (i = 0; i < CPUACCT_STAT_NSTATS; i++) {
-		s64 val = percpu_counter_read(&ca->cpustat[i]);
+		s64 val = percpu_counter_sum(&ca->cpustat[i]);
 		val = cputime64_to_clock_t(val);
 		cb->fill(cb, cpuacct_stat_desc[i], val);
 	}
@@ -10446,7 +10450,7 @@ static void cpuacct_update_stats(struct 
 	ca = task_ca(tsk);
 
 	do {
-		percpu_counter_add(&ca->cpustat[idx], val);
+		__percpu_counter_add(&ca->cpustat[idx], val, cpuacct_batch);
 		ca = ca->parent;
 	} while (ca);
 	rcu_read_unlock();
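
For readers unfamiliar with the knob being tuned here: the batch argument
controls how much a cpu may accumulate locally before a percpu_counter folds
the per-cpu delta into the shared count under its spinlock. A simplified
sketch of that logic (the real code lives in lib/percpu_counter.c and may
differ in detail):

-----------------------------------------------------------------------------------------
/* Simplified sketch, not the exact lib/percpu_counter.c source. */
void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch)
{
	s64 count;
	s32 *pcount;
	int cpu = get_cpu();

	pcount = per_cpu_ptr(fbc->counters, cpu);
	count = *pcount + amount;
	if (count >= batch || count <= -batch) {
		/* contended path: fold the per-cpu delta into the shared count */
		spin_lock(&fbc->lock);
		fbc->count += count;
		*pcount = 0;
		spin_unlock(&fbc->lock);
	} else {
		/* common path: stay in the lock-free per-cpu cache */
		*pcount = count;
	}
	put_cpu();
}
-----------------------------------------------------------------------------------------

With cpuacct_batch = jiffies_to_cputime(percpu_counter_batch), a per-tick
update of jiffies_to_cputime(1) no longer reaches the threshold on every
call, so the lock is taken about as often as in the VIRT_CPU_ACCOUNTING=n
case.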




* Re: [PATCH v2] cpuacct: VIRT_CPU_ACCOUNTING don't prevent percpu cputime count
  2009-04-30  6:11 [PATCH v2] cpuacct: VIRT_CPU_ACCOUNTING don't prevent percpu cputime count KOSAKI Motohiro
@ 2009-04-30  6:18 ` Balbir Singh
  2009-04-30  8:28   ` KOSAKI Motohiro
  2009-04-30  8:47   ` Peter Zijlstra
  2009-05-01  1:10 ` Andrew Morton
  1 sibling, 2 replies; 9+ messages in thread
From: Balbir Singh @ 2009-04-30  6:18 UTC (permalink / raw)
  To: KOSAKI Motohiro
  Cc: LKML, Bharata B Rao, Balaji Rao, Dhaval Giani, KAMEZAWA Hiroyuki,
	Peter Zijlstra, Ingo Molnar, Martin Schwidefsky

* KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> [2009-04-30 15:11:15]:

> 
> Changelog:
>   since v1
>   - use percpu_counter_sum() instead of percpu_counter_read()
> 
> 
> -------------------------------------
> Subject: [PATCH v2] cpuacct: VIRT_CPU_ACCOUNTING don't prevent percpu cputime count
> 
> cpuacct_update_stats() is called on every tick update, and it uses a
> percpu_counter to avoid performance degradation.
> 
> For archs that define VIRT_CPU_ACCOUNTING, every tick results in >1000 units
> of cputime updates, and since this is much greater than percpu_counter_batch,
> we end up taking the spinlock on every tick.
> 
> This patch changes the batch rule: each cpu may now cache up to
> jiffies_to_cputime(percpu_counter_batch) cputime before folding into the
> global counter. This means there is no behavior change when
> VIRT_CPU_ACCOUNTING=n.
> 
> Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
> Cc: Balaji Rao <balajirrao@gmail.com>
> Cc: Dhaval Giani <dhaval@linux.vnet.ibm.com>
> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
> Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
> Cc: Ingo Molnar <mingo@elte.hu>
> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
> Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
> ---
>  kernel/sched.c |    8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> Index: b/kernel/sched.c
> ===================================================================
> --- a/kernel/sched.c	2009-04-30 11:37:47.000000000 +0900
> +++ b/kernel/sched.c	2009-04-30 14:17:00.000000000 +0900
> @@ -10221,6 +10221,7 @@ struct cpuacct {
>  };
> 
>  struct cgroup_subsys cpuacct_subsys;
> +static s32 cpuacct_batch;
> 
>  /* return cpu accounting group corresponding to this container */
>  static inline struct cpuacct *cgroup_ca(struct cgroup *cgrp)
> @@ -10250,6 +10251,9 @@ static struct cgroup_subsys_state *cpuac
>  	if (!ca->cpuusage)
>  		goto out_free_ca;
> 
> +	if (!cpuacct_batch)
> +		cpuacct_batch = jiffies_to_cputime(percpu_counter_batch);
> +
>  	for (i = 0; i < CPUACCT_STAT_NSTATS; i++)
>  		if (percpu_counter_init(&ca->cpustat[i], 0))
>  			goto out_free_counters;
> @@ -10376,7 +10380,7 @@ static int cpuacct_stats_show(struct cgr
>  	int i;
> 
>  	for (i = 0; i < CPUACCT_STAT_NSTATS; i++) {
> -		s64 val = percpu_counter_read(&ca->cpustat[i]);
> +		s64 val = percpu_counter_sum(&ca->cpustat[i]);
>  		val = cputime64_to_clock_t(val);
>  		cb->fill(cb, cpuacct_stat_desc[i], val);
>  	}
> @@ -10446,7 +10450,7 @@ static void cpuacct_update_stats(struct 
>  	ca = task_ca(tsk);
> 
>  	do {
> -		percpu_counter_add(&ca->cpustat[idx], val);
> +		__percpu_counter_add(&ca->cpustat[idx], val, cpuacct_batch);
>  		ca = ca->parent;
>  	} while (ca);
>  	rcu_read_unlock();
> 
>

What do the test results look like with this? I'll see if I can find
some time to test this patch. On a patch read level this seems much better
to me, Peter?

Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>

-- 
	Balbir


* Re: [PATCH v2] cpuacct: VIRT_CPU_ACCOUNTING don't prevent percpu cputime count
  2009-04-30  6:18 ` Balbir Singh
@ 2009-04-30  8:28   ` KOSAKI Motohiro
  2009-04-30  8:47   ` Peter Zijlstra
  1 sibling, 0 replies; 9+ messages in thread
From: KOSAKI Motohiro @ 2009-04-30  8:28 UTC (permalink / raw)
  To: balbir
  Cc: kosaki.motohiro, LKML, Bharata B Rao, Balaji Rao, Dhaval Giani,
	KAMEZAWA Hiroyuki, Peter Zijlstra, Ingo Molnar,
	Martin Schwidefsky

> * KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> [2009-04-30 15:11:15]:
> 
> > 
> > Changelog:
> >   since v1
> >   - use percpu_counter_sum() instead of percpu_counter_read()
> > 
> > 
> > -------------------------------------
> > Subject: [PATCH v2] cpuacct: VIRT_CPU_ACCOUNTING don't prevent percpu cputime count
> > 
> > cpuacct_update_stats() is called on every tick update, and it uses a
> > percpu_counter to avoid performance degradation.
> > 
> > For archs that define VIRT_CPU_ACCOUNTING, every tick results in >1000 units
> > of cputime updates, and since this is much greater than percpu_counter_batch,
> > we end up taking the spinlock on every tick.
> > 
> > This patch changes the batch rule: each cpu may now cache up to
> > jiffies_to_cputime(percpu_counter_batch) cputime before folding into the
> > global counter. This means there is no behavior change when
> > VIRT_CPU_ACCOUNTING=n.
> > 
> > Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
> > Cc: Balaji Rao <balajirrao@gmail.com>
> > Cc: Dhaval Giani <dhaval@linux.vnet.ibm.com>
> > Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> > Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
> > Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
> > Cc: Ingo Molnar <mingo@elte.hu>
> > Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
> > Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
> > ---
> >  kernel/sched.c |    8 ++++++--
> >  1 file changed, 6 insertions(+), 2 deletions(-)
> > 
> > Index: b/kernel/sched.c
> > ===================================================================
> > --- a/kernel/sched.c	2009-04-30 11:37:47.000000000 +0900
> > +++ b/kernel/sched.c	2009-04-30 14:17:00.000000000 +0900
> > @@ -10221,6 +10221,7 @@ struct cpuacct {
> >  };
> > 
> >  struct cgroup_subsys cpuacct_subsys;
> > +static s32 cpuacct_batch;
> > 
> >  /* return cpu accounting group corresponding to this container */
> >  static inline struct cpuacct *cgroup_ca(struct cgroup *cgrp)
> > @@ -10250,6 +10251,9 @@ static struct cgroup_subsys_state *cpuac
> >  	if (!ca->cpuusage)
> >  		goto out_free_ca;
> > 
> > +	if (!cpuacct_batch)
> > +		cpuacct_batch = jiffies_to_cputime(percpu_counter_batch);
> > +
> >  	for (i = 0; i < CPUACCT_STAT_NSTATS; i++)
> >  		if (percpu_counter_init(&ca->cpustat[i], 0))
> >  			goto out_free_counters;
> > @@ -10376,7 +10380,7 @@ static int cpuacct_stats_show(struct cgr
> >  	int i;
> > 
> >  	for (i = 0; i < CPUACCT_STAT_NSTATS; i++) {
> > -		s64 val = percpu_counter_read(&ca->cpustat[i]);
> > +		s64 val = percpu_counter_sum(&ca->cpustat[i]);
> >  		val = cputime64_to_clock_t(val);
> >  		cb->fill(cb, cpuacct_stat_desc[i], val);
> >  	}
> > @@ -10446,7 +10450,7 @@ static void cpuacct_update_stats(struct 
> >  	ca = task_ca(tsk);
> > 
> >  	do {
> > -		percpu_counter_add(&ca->cpustat[idx], val);
> > +		__percpu_counter_add(&ca->cpustat[idx], val, cpuacct_batch);
> >  		ca = ca->parent;
> >  	} while (ca);
> >  	rcu_read_unlock();
> > 
> >
> 
> What do the test results look like with this? I'll see if I can find
> some time to test this patch. On a patch read level this seems much better
> to me, Peter?
> 
> Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>

This patch decreases the risk of slowdowns on large servers, but it doesn't
change any functionality, so there is no functional test to run.
AFAIK, percpu_counter_sum() doesn't cause any performance degradation, but
if you have a good stress test, please tell me about it.







* Re: [PATCH v2] cpuacct: VIRT_CPU_ACCOUNTING don't prevent percpu cputime count
  2009-04-30  6:18 ` Balbir Singh
  2009-04-30  8:28   ` KOSAKI Motohiro
@ 2009-04-30  8:47   ` Peter Zijlstra
  2009-04-30  8:52     ` KOSAKI Motohiro
  2009-04-30  8:55     ` Ingo Molnar
  1 sibling, 2 replies; 9+ messages in thread
From: Peter Zijlstra @ 2009-04-30  8:47 UTC (permalink / raw)
  To: balbir
  Cc: KOSAKI Motohiro, LKML, Bharata B Rao, Balaji Rao, Dhaval Giani,
	KAMEZAWA Hiroyuki, Ingo Molnar, Martin Schwidefsky

On Thu, 2009-04-30 at 11:48 +0530, Balbir Singh wrote:

> > Subject: [PATCH v2] cpuacct: VIRT_CPU_ACCOUNTING don't prevent percpu cputime count
> > 
> > cpuacct_update_stats() is called on every tick update, and it uses a
> > percpu_counter to avoid performance degradation.
> > 
> > For archs that define VIRT_CPU_ACCOUNTING, every tick results in >1000 units
> > of cputime updates, and since this is much greater than percpu_counter_batch,
> > we end up taking the spinlock on every tick.
> > 
> > This patch changes the batch rule: each cpu may now cache up to
> > jiffies_to_cputime(percpu_counter_batch) cputime before folding into the
> > global counter. This means there is no behavior change when
> > VIRT_CPU_ACCOUNTING=n.
> > 
> > Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
> > Cc: Balaji Rao <balajirrao@gmail.com>
> > Cc: Dhaval Giani <dhaval@linux.vnet.ibm.com>
> > Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> > Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
> > Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
> > Cc: Ingo Molnar <mingo@elte.hu>
> > Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
> > Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
> > ---
> >  kernel/sched.c |    8 ++++++--
> >  1 file changed, 6 insertions(+), 2 deletions(-)
> > 
> > Index: b/kernel/sched.c
> > ===================================================================
> > --- a/kernel/sched.c	2009-04-30 11:37:47.000000000 +0900
> > +++ b/kernel/sched.c	2009-04-30 14:17:00.000000000 +0900
> > @@ -10221,6 +10221,7 @@ struct cpuacct {
> >  };
> > 
> >  struct cgroup_subsys cpuacct_subsys;
> > +static s32 cpuacct_batch;
> > 
> >  /* return cpu accounting group corresponding to this container */
> >  static inline struct cpuacct *cgroup_ca(struct cgroup *cgrp)
> > @@ -10250,6 +10251,9 @@ static struct cgroup_subsys_state *cpuac
> >  	if (!ca->cpuusage)
> >  		goto out_free_ca;
> > 
> > +	if (!cpuacct_batch)
> > +		cpuacct_batch = jiffies_to_cputime(percpu_counter_batch);
> > +
> >  	for (i = 0; i < CPUACCT_STAT_NSTATS; i++)
> >  		if (percpu_counter_init(&ca->cpustat[i], 0))
> >  			goto out_free_counters;
> > @@ -10376,7 +10380,7 @@ static int cpuacct_stats_show(struct cgr
> >  	int i;
> > 
> >  	for (i = 0; i < CPUACCT_STAT_NSTATS; i++) {
> > -		s64 val = percpu_counter_read(&ca->cpustat[i]);
> > +		s64 val = percpu_counter_sum(&ca->cpustat[i]);
> >  		val = cputime64_to_clock_t(val);
> >  		cb->fill(cb, cpuacct_stat_desc[i], val);
> >  	}
> > @@ -10446,7 +10450,7 @@ static void cpuacct_update_stats(struct 
> >  	ca = task_ca(tsk);
> > 
> >  	do {
> > -		percpu_counter_add(&ca->cpustat[idx], val);
> > +		__percpu_counter_add(&ca->cpustat[idx], val, cpuacct_batch);
> >  		ca = ca->parent;
> >  	} while (ca);
> >  	rcu_read_unlock();
> > 
> >
> 
> What do the test results look like with this? I'll see if I can find
> some time to test this patch. On a patch read level this seems much better
> to me, Peter?

I don't really fancy percpu_counter_sum() usage. I'm thinking it's ok to
degrade accuracy on larger machines and simply use
percpu_counter_read().
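
For reference, the tradeoff being discussed: percpu_counter_read() just
returns the shared count and ignores whatever is still sitting in the
per-cpu caches, while percpu_counter_sum() takes the lock and adds those
deltas up. A rough sketch of the two (simplified, not the exact kernel
source):

-----------------------------------------------------------------------------------------
/* cheap and lock-free, but may lag the true value by up to
 * roughly nr_cpus * batch */
static inline s64 percpu_counter_read(struct percpu_counter *fbc)
{
	return fbc->count;
}

/* exact, but takes fbc->lock and walks every cpu's private delta */
s64 percpu_counter_sum(struct percpu_counter *fbc)
{
	s64 ret;
	int cpu;

	spin_lock(&fbc->lock);
	ret = fbc->count;
	for_each_online_cpu(cpu)
		ret += *per_cpu_ptr(fbc->counters, cpu);
	spin_unlock(&fbc->lock);
	return ret;
}
-----------------------------------------------------------------------------------------

With the larger cpuacct_batch, the read-side error bound grows to roughly
nr_cpus * cpuacct_batch, which is the accuracy loss in question.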



* Re: [PATCH v2] cpuacct: VIRT_CPU_ACCOUNTING don't prevent percpu cputime count
  2009-04-30  8:47   ` Peter Zijlstra
@ 2009-04-30  8:52     ` KOSAKI Motohiro
  2009-04-30  9:02       ` Balbir Singh
  2009-04-30  8:55     ` Ingo Molnar
  1 sibling, 1 reply; 9+ messages in thread
From: KOSAKI Motohiro @ 2009-04-30  8:52 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: kosaki.motohiro, balbir, LKML, Bharata B Rao, Balaji Rao,
	Dhaval Giani, KAMEZAWA Hiroyuki, Ingo Molnar, Martin Schwidefsky

> > >  	for (i = 0; i < CPUACCT_STAT_NSTATS; i++) {
> > > -		s64 val = percpu_counter_read(&ca->cpustat[i]);
> > > +		s64 val = percpu_counter_sum(&ca->cpustat[i]);
> > >  		val = cputime64_to_clock_t(val);
> > >  		cb->fill(cb, cpuacct_stat_desc[i], val);
> > >  	}
> > 
> > What do the test results look like with this? I'll see if I can find
> > some time to test this patch. On a patch read level this seems much better
> > to me, Peter?
> 
> I don't really fancy percpu_counter_sum() usage. I'm thinking it's ok to
> degrade accuracy on larger machines and simply use
> percpu_counter_read().

I have the same opinion as Peter. Balbir, what do you think?





* Re: [PATCH v2] cpuacct: VIRT_CPU_ACCOUNTING don't prevent percpu cputime count
  2009-04-30  8:47   ` Peter Zijlstra
  2009-04-30  8:52     ` KOSAKI Motohiro
@ 2009-04-30  8:55     ` Ingo Molnar
  1 sibling, 0 replies; 9+ messages in thread
From: Ingo Molnar @ 2009-04-30  8:55 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: balbir, KOSAKI Motohiro, LKML, Bharata B Rao, Balaji Rao,
	Dhaval Giani, KAMEZAWA Hiroyuki, Martin Schwidefsky


* Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:

> On Thu, 2009-04-30 at 11:48 +0530, Balbir Singh wrote:
> 
> > > Subject: [PATCH v2] cpuacct: VIRT_CPU_ACCOUNTING don't prevent percpu cputime count
> > > 
> > > cpuacct_update_stats() is called on every tick update, and it uses a
> > > percpu_counter to avoid performance degradation.
> > > 
> > > For archs that define VIRT_CPU_ACCOUNTING, every tick results in >1000 units
> > > of cputime updates, and since this is much greater than percpu_counter_batch,
> > > we end up taking the spinlock on every tick.
> > > 
> > > This patch changes the batch rule: each cpu may now cache up to
> > > jiffies_to_cputime(percpu_counter_batch) cputime before folding into the
> > > global counter. This means there is no behavior change when
> > > VIRT_CPU_ACCOUNTING=n.
> > > 
> > > Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
> > > Cc: Balaji Rao <balajirrao@gmail.com>
> > > Cc: Dhaval Giani <dhaval@linux.vnet.ibm.com>
> > > Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> > > Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
> > > Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
> > > Cc: Ingo Molnar <mingo@elte.hu>
> > > Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
> > > Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
> > > ---
> > >  kernel/sched.c |    8 ++++++--
> > >  1 file changed, 6 insertions(+), 2 deletions(-)
> > > 
> > > Index: b/kernel/sched.c
> > > ===================================================================
> > > --- a/kernel/sched.c	2009-04-30 11:37:47.000000000 +0900
> > > +++ b/kernel/sched.c	2009-04-30 14:17:00.000000000 +0900
> > > @@ -10221,6 +10221,7 @@ struct cpuacct {
> > >  };
> > > 
> > >  struct cgroup_subsys cpuacct_subsys;
> > > +static s32 cpuacct_batch;
> > > 
> > >  /* return cpu accounting group corresponding to this container */
> > >  static inline struct cpuacct *cgroup_ca(struct cgroup *cgrp)
> > > @@ -10250,6 +10251,9 @@ static struct cgroup_subsys_state *cpuac
> > >  	if (!ca->cpuusage)
> > >  		goto out_free_ca;
> > > 
> > > +	if (!cpuacct_batch)
> > > +		cpuacct_batch = jiffies_to_cputime(percpu_counter_batch);
> > > +
> > >  	for (i = 0; i < CPUACCT_STAT_NSTATS; i++)
> > >  		if (percpu_counter_init(&ca->cpustat[i], 0))
> > >  			goto out_free_counters;
> > > @@ -10376,7 +10380,7 @@ static int cpuacct_stats_show(struct cgr
> > >  	int i;
> > > 
> > >  	for (i = 0; i < CPUACCT_STAT_NSTATS; i++) {
> > > -		s64 val = percpu_counter_read(&ca->cpustat[i]);
> > > +		s64 val = percpu_counter_sum(&ca->cpustat[i]);
> > >  		val = cputime64_to_clock_t(val);
> > >  		cb->fill(cb, cpuacct_stat_desc[i], val);
> > >  	}
> > > @@ -10446,7 +10450,7 @@ static void cpuacct_update_stats(struct 
> > >  	ca = task_ca(tsk);
> > > 
> > >  	do {
> > > -		percpu_counter_add(&ca->cpustat[idx], val);
> > > +		__percpu_counter_add(&ca->cpustat[idx], val, cpuacct_batch);
> > >  		ca = ca->parent;
> > >  	} while (ca);
> > >  	rcu_read_unlock();
> > > 
> > >
> > 
> > What do the test results look like with this? I'll see if I can 
> > find some time to test this patch. On a patch read level this 
> > seems much better to me, Peter?
> 
> I don't really fancy percpu_counter_sum() usage. I'm thinking it's
> ok to degrade accuracy on larger machines and simply use
> percpu_counter_read().

yes - and the values will converge anyway, right? So it's just a 
small delay, not even any genuine loss of accuracy.

	Ingo


* Re: [PATCH v2] cpuacct: VIRT_CPU_ACCOUNTING don't prevent percpu cputime count
  2009-04-30  8:52     ` KOSAKI Motohiro
@ 2009-04-30  9:02       ` Balbir Singh
  0 siblings, 0 replies; 9+ messages in thread
From: Balbir Singh @ 2009-04-30  9:02 UTC (permalink / raw)
  To: KOSAKI Motohiro
  Cc: Peter Zijlstra, LKML, Bharata B Rao, Balaji Rao, Dhaval Giani,
	KAMEZAWA Hiroyuki, Ingo Molnar, Martin Schwidefsky

* KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> [2009-04-30 17:52:16]:

> > > >  	for (i = 0; i < CPUACCT_STAT_NSTATS; i++) {
> > > > -		s64 val = percpu_counter_read(&ca->cpustat[i]);
> > > > +		s64 val = percpu_counter_sum(&ca->cpustat[i]);
> > > >  		val = cputime64_to_clock_t(val);
> > > >  		cb->fill(cb, cpuacct_stat_desc[i], val);
> > > >  	}
> > > 
> > > What do the test results look like with this? I'll see if I can find
> > > some time to test this patch. On a patch read level this seems much better
> > > to me, Peter?
> > 
> > I don't really fancy percpu_counter_sum() usage. I'm thinking it's ok to
> > degrade accuracy on larger machines and simply use
> > percpu_counter_read().
> 
> I have same opinion with peter. Balbir, What do you think?
>

Sure, but the larger the delta gets, the less useful the metric gets
:) I am OK with going back to percpu_counter_read() if that is the
consensus. 

-- 
	Balbir


* Re: [PATCH v2] cpuacct: VIRT_CPU_ACCOUNTING don't prevent percpu cputime count
  2009-04-30  6:11 [PATCH v2] cpuacct: VIRT_CPU_ACCOUNTING don't prevent percpu cputime count KOSAKI Motohiro
  2009-04-30  6:18 ` Balbir Singh
@ 2009-05-01  1:10 ` Andrew Morton
  2009-05-01  1:45   ` KOSAKI Motohiro
  1 sibling, 1 reply; 9+ messages in thread
From: Andrew Morton @ 2009-05-01  1:10 UTC (permalink / raw)
  To: KOSAKI Motohiro
  Cc: linux-kernel, bharata, balajirrao, dhaval, kamezawa.hiroyu,
	a.p.zijlstra, balbir, mingo, schwidefsky, kosaki.motohiro

On Thu, 30 Apr 2009 15:11:15 +0900 (JST)
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> wrote:

> 
> Changelog:
>   since v1
>   - use percpu_counter_sum() instead of percpu_counter_read()
> 
> 
> -------------------------------------
> Subject: [PATCH v2] cpuacct: VIRT_CPU_ACCOUNTING don't prevent percpu cputime count
> 
> cpuacct_update_stats() is called on every tick update, and it uses a
> percpu_counter to avoid performance degradation.
> 
> For archs that define VIRT_CPU_ACCOUNTING, every tick results in >1000 units
> of cputime updates, and since this is much greater than percpu_counter_batch,
> we end up taking the spinlock on every tick.
> 
> This patch changes the batch rule: each cpu may now cache up to
> jiffies_to_cputime(percpu_counter_batch) cputime before folding into the
> global counter. This means there is no behavior change when
> VIRT_CPU_ACCOUNTING=n.

Does this actually matter?

If we're calling cpuacct_update_stats() with large values of `cputime'
then presumably we're also calling cpuacct_update_stats() at a low
frequency, so the common lock-taking won't cause performance problems?



* Re: [PATCH v2] cpuacct: VIRT_CPU_ACCOUNTING don't prevent percpu cputime count
  2009-05-01  1:10 ` Andrew Morton
@ 2009-05-01  1:45   ` KOSAKI Motohiro
  0 siblings, 0 replies; 9+ messages in thread
From: KOSAKI Motohiro @ 2009-05-01  1:45 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, bharata, balajirrao, dhaval, kamezawa.hiroyu,
	a.p.zijlstra, balbir, mingo, schwidefsky

>> For archs which define VIRT_CPU_ACCOUNTING, every tick would result
>> in >1000 units of cputime updates and since this is much much greater
>> than percpu_batch_counter, we end up taking spinlock on every tick.
>>
>> This patch change batch rule. now, any cpu can store "percpu_counter_bach * jiffies"
>> cputime in per-cpu cache.
>> it mean this patch don't have behavior change if VIRT_CPU_ACCOUNTING=n.
>
> Does this actually matter?
>
> If we're calling cpuacct_update_stats() with large values of `cputime'
> then presumably we're also calling cpuacct_update_stats() at a low
> frequency, so the common lock-taking won't cause performance problems?

VIRT_CPU_ACCOUNTING changes the meaning of cputime_t, but it doesn't change
how often the update is called.

For example, on ia64 with HZ=1000 and VIRT_CPU_ACCOUNTING=y, 1 cputime unit
== 1ns, i.e. 1 jiffy == 1000000 cputime units.

So every tick update adds 1000000 cputime units (see jiffies_to_cputime):

-----------------------------------------------------------------------------------------
void account_process_tick(struct task_struct *p, int user_tick)
{
        cputime_t one_jiffy = jiffies_to_cputime(1);
        cputime_t one_jiffy_scaled = cputime_to_scaled(one_jiffy);
        struct rq *rq = this_rq();

        if (user_tick)
                account_user_time(p, one_jiffy, one_jiffy_scaled);
        else if (p != rq->idle)
                account_system_time(p, HARDIRQ_OFFSET, one_jiffy,
                                    one_jiffy_scaled);
        else
                account_idle_time(one_jiffy);
}
-----------------------------------------------------------------------------------------

but the tick update frequency is unchanged.
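
To put rough numbers on it (percpu_counter_batch is assumed here to be 32;
the real boot-time value depends on the machine, so treat this purely as an
illustration):

-----------------------------------------------------------------------------------------
/* Illustration only: how often the per-cpu cache overflows into the
 * spinlocked path, using the ia64 VIRT_CPU_ACCOUNTING=y example above.
 * percpu_counter_batch = 32 is an assumed value, not taken from this thread.
 */
#include <stdio.h>

int main(void)
{
	long cputime_per_tick = 1000000;	/* jiffies_to_cputime(1): 1 jiffy = 1ms = 10^6 ns */
	long percpu_counter_batch = 32;		/* assumed batch value */
	long cpuacct_batch = percpu_counter_batch * cputime_per_tick;

	/* before the patch: a single tick already exceeds the batch */
	printf("old batch exceeded by one tick: %s\n",
	       cputime_per_tick >= percpu_counter_batch ? "yes (lock every tick)" : "no");

	/* after the patch: ~32 ticks fit in the per-cpu cache before folding */
	printf("ticks per lock acquisition with cpuacct_batch: %ld\n",
	       cpuacct_batch / cputime_per_tick);

	return 0;
}
-----------------------------------------------------------------------------------------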

