linux-kernel.vger.kernel.org archive mirror
* [PATCH 1/2] MN10300: Move asm-arm/cnt32_to_63.h to include/linux/
@ 2008-09-24 16:48 David Howells
  2008-09-24 16:48 ` [PATCH 2/2] MN10300: Make sched_clock() report time since boot David Howells
                   ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: David Howells @ 2008-09-24 16:48 UTC (permalink / raw)
  To: torvalds, akpm
  Cc: dhowells, a.p.zijlstra, nico, linux-am33-list, linux-kernel

Move asm-arm/cnt32_to_63.h to include/linux/ so that MN10300 can make use of it
too.

Signed-off-by: David Howells <dhowells@redhat.com>
---

 arch/arm/mach-pxa/time.c       |    2 +
 arch/arm/mach-sa1100/generic.c |    2 +
 arch/arm/mach-versatile/core.c |    2 +
 include/linux/cnt32_to_63.h    |   80 ++++++++++++++++++++++++++++++++++++++++
 4 files changed, 83 insertions(+), 3 deletions(-)
 create mode 100644 include/linux/cnt32_to_63.h


diff --git a/arch/arm/mach-pxa/time.c b/arch/arm/mach-pxa/time.c
index 67e1850..b0d6b32 100644
--- a/arch/arm/mach-pxa/time.c
+++ b/arch/arm/mach-pxa/time.c
@@ -17,9 +17,9 @@
 #include <linux/interrupt.h>
 #include <linux/clockchips.h>
 #include <linux/sched.h>
+#include <linux/cnt32_to_63.h>
 
 #include <asm/div64.h>
-#include <asm/cnt32_to_63.h>
 #include <asm/mach/irq.h>
 #include <asm/mach/time.h>
 #include <mach/pxa-regs.h>
diff --git a/arch/arm/mach-sa1100/generic.c b/arch/arm/mach-sa1100/generic.c
index 1362994..b422526 100644
--- a/arch/arm/mach-sa1100/generic.c
+++ b/arch/arm/mach-sa1100/generic.c
@@ -18,9 +18,9 @@
 #include <linux/ioport.h>
 #include <linux/sched.h>	/* just for sched_clock() - funny that */
 #include <linux/platform_device.h>
+#include <linux/cnt32_to_63.h>
 
 #include <asm/div64.h>
-#include <asm/cnt32_to_63.h>
 #include <mach/hardware.h>
 #include <asm/system.h>
 #include <asm/pgtable.h>
diff --git a/arch/arm/mach-versatile/core.c b/arch/arm/mach-versatile/core.c
index d75e795..b638f10 100644
--- a/arch/arm/mach-versatile/core.c
+++ b/arch/arm/mach-versatile/core.c
@@ -28,8 +28,8 @@
 #include <linux/amba/clcd.h>
 #include <linux/clocksource.h>
 #include <linux/clockchips.h>
+#include <linux/cnt32_to_63.h>
 
-#include <asm/cnt32_to_63.h>
 #include <asm/system.h>
 #include <mach/hardware.h>
 #include <asm/io.h>
diff --git a/include/linux/cnt32_to_63.h b/include/linux/cnt32_to_63.h
new file mode 100644
index 0000000..8c0f950
--- /dev/null
+++ b/include/linux/cnt32_to_63.h
@@ -0,0 +1,80 @@
+/*
+ *  Extend a 32-bit counter to 63 bits
+ *
+ *  Author:	Nicolas Pitre
+ *  Created:	December 3, 2006
+ *  Copyright:	MontaVista Software, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation.
+ */
+
+#ifndef __LINUX_CNT32_TO_63_H__
+#define __LINUX_CNT32_TO_63_H__
+
+#include <linux/compiler.h>
+#include <linux/types.h>
+#include <asm/byteorder.h>
+
+/* this is used only to give gcc a clue about good code generation */
+union cnt32_to_63 {
+	struct {
+#if defined(__LITTLE_ENDIAN)
+		u32 lo, hi;
+#elif defined(__BIG_ENDIAN)
+		u32 hi, lo;
+#endif
+	};
+	u64 val;
+};
+
+
+/**
+ * cnt32_to_63 - Expand a 32-bit counter to a 63-bit counter
+ * @cnt_lo: The low part of the counter
+ *
+ * Many hardware clock counters are only 32 bits wide and therefore have
+ * a relatively short period making wrap-arounds rather frequent.  This
+ * is a problem when implementing sched_clock() for example, where a 64-bit
+ * non-wrapping monotonic value is expected to be returned.
+ *
+ * To overcome that limitation, let's extend a 32-bit counter to 63 bits
+ * in a completely lock free fashion. Bits 0 to 31 of the clock are provided
+ * by the hardware while bits 32 to 62 are stored in memory.  The top bit in
+ * memory is used to synchronize with the hardware clock half-period.  When
+ * the top bit of both counters (hardware and in memory) differ then the
+ * memory is updated with a new value, incrementing it when the hardware
+ * counter wraps around.
+ *
+ * Because a word store in memory is atomic, the incremented value will
+ * always be in sync with the top bit, indicating to any potential
+ * concurrent reader whether the value in memory is up to date with
+ * regard to the needed increment.  Any race in updating the value in
+ * memory is harmless, as the same value is simply stored more than once.
+ *
+ * The only restriction for the algorithm to work properly is that this
+ * code must be executed at least once per half period of the 32-bit
+ * counter to properly update the state bit in memory. This is usually not a
+ * problem in practice, but if it is then a kernel timer could be scheduled
+ * to ensure that this code is executed often enough.
+ *
+ * Note that the top bit (bit 63) in the returned value should be considered
+ * as garbage.  It is not cleared here because callers are likely to use a
+ * multiplier on the returned value which can get rid of the top bit
+ * implicitly by making the multiplier even, therefore saving on a runtime
+ * clear-bit instruction. Otherwise caller must remember to clear the top
+ * bit explicitly.
+ */
+#define cnt32_to_63(cnt_lo) \
+({ \
+	static volatile u32 __m_cnt_hi; \
+	union cnt32_to_63 __x; \
+	__x.hi = __m_cnt_hi; \
+	__x.lo = (cnt_lo); \
+	if (unlikely((s32)(__x.hi ^ __x.lo) < 0)) \
+		__m_cnt_hi = __x.hi = (__x.hi ^ 0x80000000) + (__x.hi >> 31); \
+	__x.val; \
+})
+
+#endif


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 2/2] MN10300: Make sched_clock() report time since boot
  2008-09-24 16:48 [PATCH 1/2] MN10300: Move asm-arm/cnt32_to_63.h to include/linux/ David Howells
@ 2008-09-24 16:48 ` David Howells
  2008-09-26 10:58   ` Peter Zijlstra
  2008-09-26 10:56 ` [PATCH 1/2] MN10300: Move asm-arm/cnt32_to_63.h to include/linux/ Peter Zijlstra
  2008-09-26 13:27 ` David Howells
  2 siblings, 1 reply; 16+ messages in thread
From: David Howells @ 2008-09-24 16:48 UTC (permalink / raw)
  To: torvalds, akpm
  Cc: dhowells, a.p.zijlstra, nico, linux-am33-list, linux-kernel

Make sched_clock() report time since boot rather than time since last timer
interrupt.

Make sched_clock() expand and scale the 32-bit TSC value running at IOCLK
speed (~33MHz) to a 64-bit nanosecond counter, using cnt32_to_63() acquired
from the ARM arch and without using slow DIVU instructions every call.

Signed-off-by: David Howells <dhowells@redhat.com>
---

 arch/mn10300/kernel/time.c |   52 +++++++++++++++++++++++++++++++++++---------
 1 files changed, 41 insertions(+), 11 deletions(-)


diff --git a/arch/mn10300/kernel/time.c b/arch/mn10300/kernel/time.c
index babb7c2..e460658 100644
--- a/arch/mn10300/kernel/time.c
+++ b/arch/mn10300/kernel/time.c
@@ -1,6 +1,6 @@
 /* MN10300 Low level time management
  *
- * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
+ * Copyright (C) 2007-2008 Red Hat, Inc. All Rights Reserved.
  * Written by David Howells (dhowells@redhat.com)
  * - Derived from arch/i386/kernel/time.c
  *
@@ -16,6 +16,7 @@
 #include <linux/init.h>
 #include <linux/smp.h>
 #include <linux/profile.h>
+#include <linux/cnt32_to_63.h>
 #include <asm/irq.h>
 #include <asm/div64.h>
 #include <asm/processor.h>
@@ -40,27 +41,54 @@ static struct irqaction timer_irq = {
 	.name		= "timer",
 };
 
+static unsigned long sched_clock_multiplier;
+
 /*
  * scheduler clock - returns current time in nanosec units.
  */
 unsigned long long sched_clock(void)
 {
 	union {
-		unsigned long long l;
-		u32 w[2];
-	} quot;
+		unsigned long long ll;
+		unsigned l[2];
+	} tsc64, result;
+	unsigned long tsc, tmp;
+	unsigned product[3]; /* 96-bit intermediate value */
+
+	/* read the TSC value
+	 */
+	tsc = 0 - get_cycles(); /* get_cycles() counts down */
 
-	quot.w[0] = mn10300_last_tsc - get_cycles();
-	quot.w[1] = 1000000000;
+	/* expand to 64-bits.
+	 * - sched_clock() must be called once a minute or better or the
+	 *   following will go horribly wrong - see cnt32_to_63()
+	 */
+	tsc64.ll = cnt32_to_63(tsc) & 0x7fffffffffffffffULL;
 
-	asm("mulu %2,%3,%0,%1"
-	    : "=r"(quot.w[1]), "=r"(quot.w[0])
-	    : "0"(quot.w[1]), "1"(quot.w[0])
+	/* scale the 64-bit TSC value to a nanosecond value via a 96-bit
+	 * intermediate
+	 */
+	asm("mulu	%2,%0,%3,%0	\n"	/* LSW * mult ->  0:%3:%0 */
+	    "mulu	%2,%1,%2,%1	\n"	/* MSW * mult -> %2:%1:0 */
+	    "add	%3,%1		\n"
+	    "addc	0,%2		\n"	/* result in %2:%1:%0 */
+	    : "=r"(product[0]), "=r"(product[1]), "=r"(product[2]), "=r"(tmp)
+	    :  "0"(tsc64.l[0]),  "1"(tsc64.l[1]),  "2"(sched_clock_multiplier)
 	    : "cc");
 
-	do_div(quot.l, MN10300_TSCCLK);
+	result.l[0] = product[1] << 16 | product[0] >> 16;
+	result.l[1] = product[2] << 16 | product[1] >> 16;
 
-	return quot.l;
+	return result.ll;
+}
+
+/*
+ * initialise the scheduler clock
+ */
+static void __init mn10300_sched_clock_init(void)
+{
+	sched_clock_multiplier =
+		__muldiv64u(NSEC_PER_SEC, 1 << 16, MN10300_TSCCLK);
 }
 
 /*
@@ -128,4 +156,6 @@ void __init time_init(void)
 	/* start the watchdog timer */
 	watchdog_go();
 #endif
+
+	mn10300_sched_clock_init();
 }



* Re: [PATCH 1/2] MN10300: Move asm-arm/cnt32_to_63.h to include/linux/
  2008-09-24 16:48 [PATCH 1/2] MN10300: Move asm-arm/cnt32_to_63.h to include/linux/ David Howells
  2008-09-24 16:48 ` [PATCH 2/2] MN10300: Make sched_clock() report time since boot David Howells
@ 2008-09-26 10:56 ` Peter Zijlstra
  2008-09-26 12:03   ` Nicolas Pitre
  2008-09-26 13:27 ` David Howells
  2 siblings, 1 reply; 16+ messages in thread
From: Peter Zijlstra @ 2008-09-26 10:56 UTC (permalink / raw)
  To: David Howells; +Cc: torvalds, akpm, nico, linux-am33-list, linux-kernel

On Wed, 2008-09-24 at 17:48 +0100, David Howells wrote:
> Move asm-arm/cnt32_to_63.h to include/linux/ so that MN10300 can make use of it
> too.
> 
> Signed-off-by: David Howells <dhowells@redhat.com>
> ---
> 
>  arch/arm/mach-pxa/time.c       |    2 +
>  arch/arm/mach-sa1100/generic.c |    2 +
>  arch/arm/mach-versatile/core.c |    2 +
>  include/linux/cnt32_to_63.h    |   80 ++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 83 insertions(+), 3 deletions(-)
>  create mode 100644 include/linux/cnt32_to_63.h

Didn't you forget to remove the old one?


> +#define cnt32_to_63(cnt_lo) \
> +({ \
> +	static volatile u32 __m_cnt_hi; \
> +	union cnt32_to_63 __x; \
> +	__x.hi = __m_cnt_hi; \
> +	__x.lo = (cnt_lo); \
> +	if (unlikely((s32)(__x.hi ^ __x.lo) < 0)) \
> +		__m_cnt_hi = __x.hi = (__x.hi ^ 0x80000000) + (__x.hi >> 31); \
> +	__x.val; \
> +})
> +
> +#endif

That code is way too smart :-)

Better make sure that none of its users are SMP capable though.




* Re: [PATCH 2/2] MN10300: Make sched_clock() report time since boot
  2008-09-24 16:48 ` [PATCH 2/2] MN10300: Make sched_clock() report time since boot David Howells
@ 2008-09-26 10:58   ` Peter Zijlstra
  0 siblings, 0 replies; 16+ messages in thread
From: Peter Zijlstra @ 2008-09-26 10:58 UTC (permalink / raw)
  To: David Howells; +Cc: torvalds, akpm, nico, linux-am33-list, linux-kernel

On Wed, 2008-09-24 at 17:48 +0100, David Howells wrote:
> Make sched_clock() report time since boot rather than time since last timer
> interrupt.
> 
> Make sched_clock() expand and scale the 32-bit TSC value running at IOCLK
> speed (~33MHz) to a 64-bit nanosecond counter, using cnt32_to_63() acquired
> from the ARM arch and without using slow DIVU instructions every call.

Right, so since you have to up-scale the 63-bit counter obtained using
the smarty pants cnt32_to_63 code, you actually end up with a genuine
64-bit counter.. sweet.

I'm not going to pretend to understand the MN10300 details, but the
generic idea makes sense.

Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>

> Signed-off-by: David Howells <dhowells@redhat.com>
> ---
> 
>  arch/mn10300/kernel/time.c |   52 +++++++++++++++++++++++++++++++++++---------
>  1 files changed, 41 insertions(+), 11 deletions(-)
> 
> 
> diff --git a/arch/mn10300/kernel/time.c b/arch/mn10300/kernel/time.c
> index babb7c2..e460658 100644
> --- a/arch/mn10300/kernel/time.c
> +++ b/arch/mn10300/kernel/time.c
> @@ -1,6 +1,6 @@
>  /* MN10300 Low level time management
>   *
> - * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
> + * Copyright (C) 2007-2008 Red Hat, Inc. All Rights Reserved.
>   * Written by David Howells (dhowells@redhat.com)
>   * - Derived from arch/i386/kernel/time.c
>   *
> @@ -16,6 +16,7 @@
>  #include <linux/init.h>
>  #include <linux/smp.h>
>  #include <linux/profile.h>
> +#include <linux/cnt32_to_63.h>
>  #include <asm/irq.h>
>  #include <asm/div64.h>
>  #include <asm/processor.h>
> @@ -40,27 +41,54 @@ static struct irqaction timer_irq = {
>  	.name		= "timer",
>  };
>  
> +static unsigned long sched_clock_multiplier;
> +
>  /*
>   * scheduler clock - returns current time in nanosec units.
>   */
>  unsigned long long sched_clock(void)
>  {
>  	union {
> -		unsigned long long l;
> -		u32 w[2];
> -	} quot;
> +		unsigned long long ll;
> +		unsigned l[2];
> +	} tsc64, result;
> +	unsigned long tsc, tmp;
> +	unsigned product[3]; /* 96-bit intermediate value */
> +
> +	/* read the TSC value
> +	 */
> +	tsc = 0 - get_cycles(); /* get_cycles() counts down */
>  
> -	quot.w[0] = mn10300_last_tsc - get_cycles();
> -	quot.w[1] = 1000000000;
> +	/* expand to 64-bits.
> +	 * - sched_clock() must be called once a minute or better or the
> +	 *   following will go horribly wrong - see cnt32_to_63()
> +	 */
> +	tsc64.ll = cnt32_to_63(tsc) & 0x7fffffffffffffffULL;
>  
> -	asm("mulu %2,%3,%0,%1"
> -	    : "=r"(quot.w[1]), "=r"(quot.w[0])
> -	    : "0"(quot.w[1]), "1"(quot.w[0])
> +	/* scale the 64-bit TSC value to a nanosecond value via a 96-bit
> +	 * intermediate
> +	 */
> +	asm("mulu	%2,%0,%3,%0	\n"	/* LSW * mult ->  0:%3:%0 */
> +	    "mulu	%2,%1,%2,%1	\n"	/* MSW * mult -> %2:%1:0 */
> +	    "add	%3,%1		\n"
> +	    "addc	0,%2		\n"	/* result in %2:%1:%0 */
> +	    : "=r"(product[0]), "=r"(product[1]), "=r"(product[2]), "=r"(tmp)
> +	    :  "0"(tsc64.l[0]),  "1"(tsc64.l[1]),  "2"(sched_clock_multiplier)
>  	    : "cc");
>  
> -	do_div(quot.l, MN10300_TSCCLK);
> +	result.l[0] = product[1] << 16 | product[0] >> 16;
> +	result.l[1] = product[2] << 16 | product[1] >> 16;
>  
> -	return quot.l;
> +	return result.ll;
> +}
> +
> +/*
> + * initialise the scheduler clock
> + */
> +static void __init mn10300_sched_clock_init(void)
> +{
> +	sched_clock_multiplier =
> +		__muldiv64u(NSEC_PER_SEC, 1 << 16, MN10300_TSCCLK);
>  }
>  
>  /*
> @@ -128,4 +156,6 @@ void __init time_init(void)
>  	/* start the watchdog timer */
>  	watchdog_go();
>  #endif
> +
> +	mn10300_sched_clock_init();
>  }
> 



* Re: [PATCH 1/2] MN10300: Move asm-arm/cnt32_to_63.h to include/linux/
  2008-09-26 10:56 ` [PATCH 1/2] MN10300: Move asm-arm/cnt32_to_63.h to include/linux/ Peter Zijlstra
@ 2008-09-26 12:03   ` Nicolas Pitre
  2008-09-26 12:08     ` Peter Zijlstra
  2008-09-26 12:20     ` Peter Zijlstra
  0 siblings, 2 replies; 16+ messages in thread
From: Nicolas Pitre @ 2008-09-26 12:03 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: David Howells, torvalds, akpm, linux-am33-list, linux-kernel

On Fri, 26 Sep 2008, Peter Zijlstra wrote:

> On Wed, 2008-09-24 at 17:48 +0100, David Howells wrote:
> > Move asm-arm/cnt32_to_63.h to include/linux/ so that MN10300 can make use of it
> > too.
> > 
> > Signed-off-by: David Howells <dhowells@redhat.com>
> > ---
> > 
> >  arch/arm/mach-pxa/time.c       |    2 +
> >  arch/arm/mach-sa1100/generic.c |    2 +
> >  arch/arm/mach-versatile/core.c |    2 +
> >  include/linux/cnt32_to_63.h    |   80 ++++++++++++++++++++++++++++++++++++++++
> >  4 files changed, 83 insertions(+), 3 deletions(-)
> >  create mode 100644 include/linux/cnt32_to_63.h
> 
> Didn't you forget to remove the old one?
> 
> 
> > +#define cnt32_to_63(cnt_lo) \
> > +({ \
> > +	static volatile u32 __m_cnt_hi; \
> > +	union cnt32_to_63 __x; \
> > +	__x.hi = __m_cnt_hi; \
> > +	__x.lo = (cnt_lo); \
> > +	if (unlikely((s32)(__x.hi ^ __x.lo) < 0)) \
> > +		__m_cnt_hi = __x.hi = (__x.hi ^ 0x80000000) + (__x.hi >> 31); \
> > +	__x.val; \
> > +})
> > +
> > +#endif
> 
> That code is way too smart :-)
> 
> Better make sure that none of its users are SMP capable though.

Why would that matter?


Nicolas


* Re: [PATCH 1/2] MN10300: Move asm-arm/cnt32_to_63.h to include/linux/
  2008-09-26 12:03   ` Nicolas Pitre
@ 2008-09-26 12:08     ` Peter Zijlstra
  2008-09-29  1:21       ` Nicolas Pitre
  2008-09-26 12:20     ` Peter Zijlstra
  1 sibling, 1 reply; 16+ messages in thread
From: Peter Zijlstra @ 2008-09-26 12:08 UTC (permalink / raw)
  To: Nicolas Pitre
  Cc: David Howells, torvalds, akpm, linux-am33-list, linux-kernel

On Fri, 2008-09-26 at 08:03 -0400, Nicolas Pitre wrote:
> On Fri, 26 Sep 2008, Peter Zijlstra wrote:
> 
> > On Wed, 2008-09-24 at 17:48 +0100, David Howells wrote:
> > > Move asm-arm/cnt32_to_63.h to include/linux/ so that MN10300 can make use of it
> > > too.
> > > 
> > > Signed-off-by: David Howells <dhowells@redhat.com>
> > > ---
> > > 
> > >  arch/arm/mach-pxa/time.c       |    2 +
> > >  arch/arm/mach-sa1100/generic.c |    2 +
> > >  arch/arm/mach-versatile/core.c |    2 +
> > >  include/linux/cnt32_to_63.h    |   80 ++++++++++++++++++++++++++++++++++++++++
> > >  4 files changed, 83 insertions(+), 3 deletions(-)
> > >  create mode 100644 include/linux/cnt32_to_63.h
> > 
> > Didn't you forget to remove the old one?
> > 
> > 
> > > +#define cnt32_to_63(cnt_lo) \
> > > +({ \
> > > +	static volatile u32 __m_cnt_hi; \
> > > +	union cnt32_to_63 __x; \
> > > +	__x.hi = __m_cnt_hi; \
> > > +	__x.lo = (cnt_lo); \
> > > +	if (unlikely((s32)(__x.hi ^ __x.lo) < 0)) \
> > > +		__m_cnt_hi = __x.hi = (__x.hi ^ 0x80000000) + (__x.hi >> 31); \
> > > +	__x.val; \
> > > +})
> > > +
> > > +#endif
> > 
> > That code is way too smart :-)
> > 
> > Better make sure that none of its users are SMP capable though.
> 
> Why would that matter?

It has a local static variable and no synchronization. Also, if the
counters differ per cpu the value of __m_cnt_hi ought to be different
per cpu.





* Re: [PATCH 1/2] MN10300: Move asm-arm/cnt32_to_63.h to include/linux/
  2008-09-26 12:03   ` Nicolas Pitre
  2008-09-26 12:08     ` Peter Zijlstra
@ 2008-09-26 12:20     ` Peter Zijlstra
  2008-09-29  1:42       ` Nicolas Pitre
  2008-10-01 14:34       ` David Howells
  1 sibling, 2 replies; 16+ messages in thread
From: Peter Zijlstra @ 2008-09-26 12:20 UTC (permalink / raw)
  To: Nicolas Pitre
  Cc: David Howells, torvalds, akpm, linux-am33-list, linux-kernel

On Fri, 2008-09-26 at 08:03 -0400, Nicolas Pitre wrote:
> On Fri, 26 Sep 2008, Peter Zijlstra wrote:
> 
> > On Wed, 2008-09-24 at 17:48 +0100, David Howells wrote:
> > > Move asm-arm/cnt32_to_63.h to include/linux/ so that MN10300 can make use of it
> > > too.
> > > 
> > > Signed-off-by: David Howells <dhowells@redhat.com>
> > > ---
> > > 
> > >  arch/arm/mach-pxa/time.c       |    2 +
> > >  arch/arm/mach-sa1100/generic.c |    2 +
> > >  arch/arm/mach-versatile/core.c |    2 +
> > >  include/linux/cnt32_to_63.h    |   80 ++++++++++++++++++++++++++++++++++++++++
> > >  4 files changed, 83 insertions(+), 3 deletions(-)
> > >  create mode 100644 include/linux/cnt32_to_63.h
> > 
> > Didn't you forget to remove the old one?
> > 
> > 
> > > +#define cnt32_to_63(cnt_lo) \
> > > +({ \
> > > +	static volatile u32 __m_cnt_hi; \
> > > +	union cnt32_to_63 __x; \
> > > +	__x.hi = __m_cnt_hi; \
> > > +	__x.lo = (cnt_lo); \
> > > +	if (unlikely((s32)(__x.hi ^ __x.lo) < 0)) \
> > > +		__m_cnt_hi = __x.hi = (__x.hi ^ 0x80000000) + (__x.hi >> 31); \
> > > +	__x.val; \
> > > +})
> > > +
> > > +#endif
> > 
> > That code is way too smart :-)
> > 
> > Better make sure that none of its users are SMP capable though.
> 
> Why would that matter?

#define cnt32_to_63(cnt_lo) \
({ \
	static DEFINE_PER_CPU(u32, __m_cnt_hi); \
	union cnt32_to_63 __x; \
	u32 *__m_cnt_hi_ptr = &get_cpu_var(__m_cnt_hi); \
	\
	__x.hi = *__m_cnt_hi_ptr; \
	__x.lo = (cnt_lo); \
	\
	if (unlikely((s32)(__x.hi ^ __x.lo) < 0)) \
		*__m_cnt_hi_ptr = __x.hi = (__x.hi ^ 0x80000000) + (__x.hi >> 31); \
	\
	put_cpu_var(__m_cnt_hi); \
	\
	__x.val; \
})

And I'm not quite sure why you had volatile in there.. but that should
be replaced by a barrier().

Also, we could probably use __get_cpu_var() and put BUG_ON(!in_atomic())
in there since if this stuff is per cpu, it'd better be atomic anyway -
you don't want to read a different cpu's counter than the one you're
using to upgrade.



* Re: [PATCH 1/2] MN10300: Move asm-arm/cnt32_to_63.h to include/linux/
  2008-09-24 16:48 [PATCH 1/2] MN10300: Move asm-arm/cnt32_to_63.h to include/linux/ David Howells
  2008-09-24 16:48 ` [PATCH 2/2] MN10300: Make sched_clock() report time since boot David Howells
  2008-09-26 10:56 ` [PATCH 1/2] MN10300: Move asm-arm/cnt32_to_63.h to include/linux/ Peter Zijlstra
@ 2008-09-26 13:27 ` David Howells
  2 siblings, 0 replies; 16+ messages in thread
From: David Howells @ 2008-09-26 13:27 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: dhowells, torvalds, akpm, nico, linux-am33-list, linux-kernel

Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:

> Didn't you forget to remove the old one?

Ah...  I did remove the old one... it's just that someone also moved the old
one to arch/arm/include and GIT's merge tool didn't notice.

David


* Re: [PATCH 1/2] MN10300: Move asm-arm/cnt32_to_63.h to include/linux/
  2008-09-26 12:08     ` Peter Zijlstra
@ 2008-09-29  1:21       ` Nicolas Pitre
  0 siblings, 0 replies; 16+ messages in thread
From: Nicolas Pitre @ 2008-09-29  1:21 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: David Howells, torvalds, akpm, linux-am33-list, linux-kernel

On Fri, 26 Sep 2008, Peter Zijlstra wrote:

> On Fri, 2008-09-26 at 08:03 -0400, Nicolas Pitre wrote:
> > On Fri, 26 Sep 2008, Peter Zijlstra wrote:
> > 
> > > On Wed, 2008-09-24 at 17:48 +0100, David Howells wrote:
> > > > Move asm-arm/cnt32_to_63.h to include/linux/ so that MN10300 can make use of it
> > > > too.
> > > > 
> > > > Signed-off-by: David Howells <dhowells@redhat.com>
> > > > ---
> > > > 
> > > >  arch/arm/mach-pxa/time.c       |    2 +
> > > >  arch/arm/mach-sa1100/generic.c |    2 +
> > > >  arch/arm/mach-versatile/core.c |    2 +
> > > >  include/linux/cnt32_to_63.h    |   80 ++++++++++++++++++++++++++++++++++++++++
> > > >  4 files changed, 83 insertions(+), 3 deletions(-)
> > > >  create mode 100644 include/linux/cnt32_to_63.h
> > > 
> > > Didn't you forget to remove the old one?
> > > 
> > > 
> > > > +#define cnt32_to_63(cnt_lo) \
> > > > +({ \
> > > > +	static volatile u32 __m_cnt_hi; \
> > > > +	union cnt32_to_63 __x; \
> > > > +	__x.hi = __m_cnt_hi; \
> > > > +	__x.lo = (cnt_lo); \
> > > > +	if (unlikely((s32)(__x.hi ^ __x.lo) < 0)) \
> > > > +		__m_cnt_hi = __x.hi = (__x.hi ^ 0x80000000) + (__x.hi >> 31); \
> > > > +	__x.val; \
> > > > +})
> > > > +
> > > > +#endif
> > > 
> > > That code is way too smart :-)
> > > 
> > > Better make sure that none of its users are SMP capable though.
> > 
> > Why would that matter?
> 
> It has a local static variable and no synchronization.

So?  The whole point of this algorithm is to avoid all kinds of locking 
and to require absolutely no synchronization, as it relies solely on the 
implicit synchronization from the top bit of both values.  You would get 
cache bouncing but the result will still be correct.

> Also, if the counters differ per cpu the value of __m_cnt_hi ought to 
> be different per cpu.

True.


Nicolas


* Re: [PATCH 1/2] MN10300: Move asm-arm/cnt32_to_63.h to include/linux/
  2008-09-26 12:20     ` Peter Zijlstra
@ 2008-09-29  1:42       ` Nicolas Pitre
  2008-10-01 14:34       ` David Howells
  1 sibling, 0 replies; 16+ messages in thread
From: Nicolas Pitre @ 2008-09-29  1:42 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: David Howells, torvalds, akpm, linux-am33-list, linux-kernel

On Fri, 26 Sep 2008, Peter Zijlstra wrote:

> On Fri, 2008-09-26 at 08:03 -0400, Nicolas Pitre wrote:
> > On Fri, 26 Sep 2008, Peter Zijlstra wrote:
> > 
> > > On Wed, 2008-09-24 at 17:48 +0100, David Howells wrote:
> > > > Move asm-arm/cnt32_to_63.h to include/linux/ so that MN10300 can make use of it
> > > > too.
> > > > 
> > > > Signed-off-by: David Howells <dhowells@redhat.com>
> > > > ---
> > > > 
> > > >  arch/arm/mach-pxa/time.c       |    2 +
> > > >  arch/arm/mach-sa1100/generic.c |    2 +
> > > >  arch/arm/mach-versatile/core.c |    2 +
> > > >  include/linux/cnt32_to_63.h    |   80 ++++++++++++++++++++++++++++++++++++++++
> > > >  4 files changed, 83 insertions(+), 3 deletions(-)
> > > >  create mode 100644 include/linux/cnt32_to_63.h
> > > 
> > > Didn't you forget to remove the old one?
> > > 
> > > 
> > > > +#define cnt32_to_63(cnt_lo) \
> > > > +({ \
> > > > +	static volatile u32 __m_cnt_hi; \
> > > > +	union cnt32_to_63 __x; \
> > > > +	__x.hi = __m_cnt_hi; \
> > > > +	__x.lo = (cnt_lo); \
> > > > +	if (unlikely((s32)(__x.hi ^ __x.lo) < 0)) \
> > > > +		__m_cnt_hi = __x.hi = (__x.hi ^ 0x80000000) + (__x.hi >> 31); \
> > > > +	__x.val; \
> > > > +})
> > > > +
> > > > +#endif
> > > 
> > > That code is way too smart :-)
> > > 
> > > Better make sure that none of its users are SMP capable though.
> > 
> > Why would that matter?
> 
> #define cnt32_to_63(cnt_lo)
> ({
> 	static DEFINE_PER_CPU(u32, __m_cnt_hi);
> 
>        union cnt32_to_63 __x;
> 	u32 *__m_cnt_hi_ptr = &get_cpu_var(__m_cnt_hi);

Nope.  That disables preemption which is against the purpose of this 
code.  Disabling preemption is unneeded.

> 	__x.hi = *__m_cnt_hi_ptr;
> 	__x.lo = (cnt_lo);
> 
> 	if (unlikely((s32)(__x.hi ^ __x.lo) < 0))
> 		*__m_cnt_hi_ptr = __x.hi = (__x.hi ^ 0x80000000) + (__x.hi >> 31);
> 
> 	put_cpu_var(__m_cnt_hi);
> 
> 	__x.val;
> })
> 
> And I'm not quite sure why you had volatile in there.. but that should
> be replaced by a barrier().

That's to ensure that __x.hi is always (re)read before __x.lo, and not 
factorized out of a possible outer loop if gcc felt like doing such an 
optimization.  However I didn't want other unrelated variables to be 
affected by an undiscriminating barrier().  But that's a weak argument 
as I suspect in most cases inserting a barrier() between the two reads will 
generate the same code.

> Also, we could probably use __get_cpu_var() and put BUG_ON(!in_atomic())
> in there since if this stuff is per cpu, it'd better be atomic anyway -
> you don't want to read a different cpu's counter than the one you're
> using to upgrade.

But again, the original code had no such restriction.  It works with 
mixed contexts.

If the base counter is per CPU, then the static variable has to be per 
CPU of course.  But if the base counter is global then the static 
variable should be shared by all CPUs unless you can guarantee that all 
CPUs will always call this at least once per half period of the base 
counter.


Nicolas


* Re: [PATCH 1/2] MN10300: Move asm-arm/cnt32_to_63.h to include/linux/
  2008-09-26 12:20     ` Peter Zijlstra
  2008-09-29  1:42       ` Nicolas Pitre
@ 2008-10-01 14:34       ` David Howells
  2008-10-01 15:36         ` Nicolas Pitre
  2008-10-02 22:23         ` David Howells
  1 sibling, 2 replies; 16+ messages in thread
From: David Howells @ 2008-10-01 14:34 UTC (permalink / raw)
  To: Nicolas Pitre
  Cc: dhowells, Peter Zijlstra, torvalds, akpm, linux-am33-list, linux-kernel

Nicolas Pitre <nico@cam.org> wrote:

> Disabling preemption is unneeded.

I think you may be wrong on that.  MEI came up with the following point:

	I think either disabling preemption or disabling interrupts is really
	necessary for the cnt32_to_63 macro, because there seems to be an
	assumption that the series of code (1)-(4) must be executed within a
	half period of the 32-bit counter.

	-------------------------------------------------
	#define cnt32_to_63(cnt_lo) 
	({ 
	       static volatile u32 __m_cnt_hi = 0; 
	       cnt32_to_63_t __x; 
	(1)    __x.hi = __m_cnt_hi; 
	(2)    __x.lo = (cnt_lo); 
	(3)    if (unlikely((s32)(__x.hi ^ __x.lo) < 0))
	(4)            __m_cnt_hi = __x.hi = (__x.hi ^ 0x80000000) + (__x.hi >> 31); 
	       __x.val; 
	})
	-------------------------------------------------

	If a task is preempted while executing the series of code and is
	scheduled again after a half period of the 32-bit counter, the task
	may destroy __m_cnt_hi.

Their suggested remedy is:

	So I think it's better to disable interrupts in cnt32_to_63 and to
	ensure that the series of code is executed within a short period.

I think this is excessive...  If we're sat there with interrupts disabled for
more than a half period (65s) then we've got other troubles.  I think
disabling preemption for the duration ought to be enough.  What do you think?

Now, I'm happy to put these in sched_clock() rather then cnt32_to_63() for my
purposes (see attached patch).

David
---
MN10300: Prevent cnt32_to_63() from being preempted in sched_clock()

From: David Howells <dhowells@redhat.com>

Prevent cnt32_to_63() from being preempted in sched_clock() because it may
read its internal counter, get preempted, get delayed for more than the half
period of the 'TSC' and then write the internal counter, thus corrupting it.

Whilst some callers of sched_clock() have interrupts disabled or hold
spinlocks, not all do, and so preemption must be disabled here.

Note that sched_clock() is called from lockdep, but that shouldn't be a problem
because although preempt_disable() calls into lockdep, lockdep has a recursion
counter to deal with this.

Signed-off-by: David Howells <dhowells@redhat.com>
---

 arch/mn10300/kernel/time.c |    5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)


diff --git a/arch/mn10300/kernel/time.c b/arch/mn10300/kernel/time.c
index e460658..38f88bb 100644
--- a/arch/mn10300/kernel/time.c
+++ b/arch/mn10300/kernel/time.c
@@ -55,6 +55,9 @@ unsigned long long sched_clock(void)
 	unsigned long tsc, tmp;
 	unsigned product[3]; /* 96-bit intermediate value */
 
+	/* cnt32_to_63() is not safe with preemption */
+	preempt_disable();
+
 	/* read the TSC value
 	 */
 	tsc = 0 - get_cycles(); /* get_cycles() counts down */
@@ -65,6 +68,8 @@ unsigned long long sched_clock(void)
 	 */
 	tsc64.ll = cnt32_to_63(tsc) & 0x7fffffffffffffffULL;
 
+	preempt_enable();
+
 	/* scale the 64-bit TSC value to a nanosecond value via a 96-bit
 	 * intermediate
 	 */


* Re: [PATCH 1/2] MN10300: Move asm-arm/cnt32_to_63.h to include/linux/
  2008-10-01 14:34       ` David Howells
@ 2008-10-01 15:36         ` Nicolas Pitre
  2008-10-02 22:23         ` David Howells
  1 sibling, 0 replies; 16+ messages in thread
From: Nicolas Pitre @ 2008-10-01 15:36 UTC (permalink / raw)
  To: David Howells; +Cc: Peter Zijlstra, torvalds, akpm, linux-am33-list, lkml

On Wed, 1 Oct 2008, David Howells wrote:

> Nicolas Pitre <nico@cam.org> wrote:
> 
> > Disabling preemption is unneeded.
> 
> I think you may be wrong on that.  MEI came up with the following point:
> 
> 	I think either disable preemption or disable interrupt is really
> 	necessary for cnt32_to_63 macro, because there seems to be assumption
> 	that the series of the code (1)-(4) must be executed within a half
> 	period of the 32-bit counter.

This is still wrong.  There is no need for any kind of locking
whatsoever, period.  That's the _point_ of the whole algorithm.  The only
constraint is that the code has to be executed at least once per half 
period of the 32 bit counter, which is typically once every couple 
minutes or so.

> 	-------------------------------------------------
> 	#define cnt32_to_63(cnt_lo) 
> 	({ 
> 	       static volatile u32 __m_cnt_hi = 0; 
> 	       cnt32_to_63_t __x; 
> 	(1)    __x.hi = __m_cnt_hi; 
> 	(2)    __x.lo = (cnt_lo); 
> 	(3)    if (unlikely((s32)(__x.hi ^ __x.lo) < 0))
> 	(4)            __m_cnt_hi = __x.hi = (__x.hi ^ 0x80000000) + (__x.hi >> 31); 
> 	       __x.val; 
> 	})
> 	-------------------------------------------------
> 
> 	If a task is preempted while executing the series of the code and
> 	scheduled again after the half period of the 32-bit counter, the task
> 	may destroy __m_cnt_hi.

Oh, that's a possibility.  In that case __m_cnt_hi will be reverted to a 
previous state, just as if cnt32_to_63() had not been called yet in the 
new half period.  And a subsequent call will fix that again.

> Their suggested remedy is:
> 
> 	So I think it's better to disable interrupts around cnt32_to_63 and
> 	to ensure that the series of the code is executed within a short
> 	period.
> 
> I think this is excessive...  If we're sat there with interrupts disabled for
> more than a half period (65s) then we've got other troubles.  I think
> disabling preemption for the duration ought to be enough.  What do you think?

I think this is excessive too.  If you're preempted away for that long 
you still have a problem.  And if that's a real concern because you run 
RT and have tasks frequently blocked for that long, then don't use this 
interface for critical counters for which absolute correctness is 
essential, which is not the case for sched_clock() anyway.  A 
sched_clock() implementation is meant to be as lightweight as possible, 
even if it lacks absolute precision.  The worst that could happen if you 
actually manage to cause a glitch in the value returned by sched_clock() 
is possibly the scheduling of the wrong task in a non-RT scheduler, and 
we determined that an RT scheduler is needed to cause this glitch in the 
first place.

> Now, I'm happy to put these in sched_clock() rather then cnt32_to_63() for my
> purposes (see attached patch).

If you still feel paranoid about this then I can't stop you from adding 
extra locking in your own code.  On the machines I've created 
cnt32_to_63() for, the critical half-period delay can be something like 
9 minutes, and worrying about races that last that long means ignoring a 
much worse problem.


Nicolas

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/2] MN10300: Move asm-arm/cnt32_to_63.h to include/linux/
  2008-10-01 14:34       ` David Howells
  2008-10-01 15:36         ` Nicolas Pitre
@ 2008-10-02 22:23         ` David Howells
  2008-10-03 19:15           ` Nicolas Pitre
  2008-10-06 10:44           ` David Howells
  1 sibling, 2 replies; 16+ messages in thread
From: David Howells @ 2008-10-02 22:23 UTC (permalink / raw)
  To: Nicolas Pitre
  Cc: dhowells, Peter Zijlstra, torvalds, akpm, linux-am33-list, lkml

Nicolas Pitre <nico@cam.org> wrote:

> Oh, that's a possibility.  In that case __m_cnt_hi will be reverted to a 
> previous state just like if cnt32_to_63() has not been called yet since 
> the new half period.  And a subsequent call will fix that again.

Surely that's not good enough.  There's a 50% chance that reversion to a
previous state will result in an extra increment of __m_cnt_hi on the
next call.

David

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/2] MN10300: Move asm-arm/cnt32_to_63.h to include/linux/
  2008-10-02 22:23         ` David Howells
@ 2008-10-03 19:15           ` Nicolas Pitre
  2008-10-06 10:44           ` David Howells
  1 sibling, 0 replies; 16+ messages in thread
From: Nicolas Pitre @ 2008-10-03 19:15 UTC (permalink / raw)
  To: David Howells; +Cc: Peter Zijlstra, torvalds, akpm, linux-am33-list, lkml

On Thu, 2 Oct 2008, David Howells wrote:

> Nicolas Pitre <nico@cam.org> wrote:
> 
> > Oh, that's a possibility.  In that case __m_cnt_hi will be reverted to a 
> > previous state just like if cnt32_to_63() has not been called yet since 
> > the new half period.  And a subsequent call will fix that again.
> 
> Surely that's not good enough.  There's a 50% chance that reversion to a
> previous state will result in an extra increment of the __m_cnt_hi on the next
> call.

If so then you're using this interface in an inappropriate way.

The _absolute_ minimum frequency with which this should be fully 
executed is once per half period of the base counter.  I hope that in 
practice it happens far more often than that.


Nicolas

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/2] MN10300: Move asm-arm/cnt32_to_63.h to include/linux/
  2008-10-02 22:23         ` David Howells
  2008-10-03 19:15           ` Nicolas Pitre
@ 2008-10-06 10:44           ` David Howells
  2008-10-06 15:06             ` Nicolas Pitre
  1 sibling, 1 reply; 16+ messages in thread
From: David Howells @ 2008-10-06 10:44 UTC (permalink / raw)
  To: Nicolas Pitre
  Cc: dhowells, Peter Zijlstra, torvalds, akpm, linux-am33-list, lkml

Nicolas Pitre <nico@cam.org> wrote:

> If so then you're using this interface in an inappropriate way.
> 
> The _absolute_ minimum frequency with which this should be fully 
> executed is once per half period of the base counter.  I hope that in 
> practice it happens far more often than that.

I think you're misunderstanding my contention.

If preemption is enabled, cnt32_to_63() can be called with greater than
minimum frequency and yet reversions can still happen.

The problem is that a process that's half way through executing cnt32_to_63()
can be preempted for a period of time sufficient that when it is rescheduled
and writes __m_cnt_hi, it corrupts it with an out of date value.

David


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/2] MN10300: Move asm-arm/cnt32_to_63.h to include/linux/
  2008-10-06 10:44           ` David Howells
@ 2008-10-06 15:06             ` Nicolas Pitre
  0 siblings, 0 replies; 16+ messages in thread
From: Nicolas Pitre @ 2008-10-06 15:06 UTC (permalink / raw)
  To: David Howells; +Cc: Peter Zijlstra, torvalds, akpm, linux-am33-list, lkml

On Mon, 6 Oct 2008, David Howells wrote:

> Nicolas Pitre <nico@cam.org> wrote:
> 
> > If so then you're using this interface in an inappropriate way.
> > 
> > The _absolute_ minimum frequency with which this should be fully 
> > executed is once per half period of the base counter.  I hope that in 
> > practice it happens far more often than that.
> 
> I think you're misunderstanding my contention.
> 
> If preemption is enabled, cnt32_to_63() can be called with greater than
> minimum frequency and yet reversions can still happen.
> 
> The problem is that a process that's half way through executing cnt32_to_63()
> can be preempted for a period of time sufficient that when it is rescheduled
> and writes __m_cnt_hi, it corrupts it with an out of date value.

I think you're misunderstanding my objection.

If a process that's half way through executing cnt32_to_63() is 
preempted for a period of time in the same ballpark as the half 
period of the base counter, then my point is that you have 
a much bigger problem.

For __m_cnt_hi to be written with an out-of-date value, you'd need 
this sequence of events:

(1)	__x.hi = __m_cnt_hi;
(2)	__x.lo = (cnt_lo);
(3)	if (unlikely((s32)(__x.hi ^ __x.lo) < 0))
(4)		__m_cnt_hi = __x.hi = (__x.hi ^ 0x80000000) + (__x.hi >> 31);

cnt_lo		cnt_hi		instance A	instance B
---------------------------------------------------------------------
0x7fffffff	0x00000000	...
0x80000000	0x00000000	...
[the code isn't called for a _long_ time during which __m_cnt_hi
 is not in synch with the low part]
0xff000000	0x00000000	(1) __x.hi = 0
0xff000001	0x00000000	(2) __x.lo = 0xff000001
0xff000002	0x00000000	(3) condition is true
0xff000003	0x00000000	*** preemption ***
0xff000004	0x00000000			...
[a little while later]
0xffff0000	0x00000000			(1) __x.hi = 0
0xffff0001	0x00000000			(2) __x.lo = 0xffff0001
0xffff0002	0x00000000			(3) condition is true
0xffff0003	0x00000000			(4) update __m_cnt_hi
0xffff0004	0x80000000			...
[a little while later]
0xffffffff	0x80000000			...
0x00000000	0x80000000			...
0x00000001	0x80000000			(1) __x.hi = 0x80000000
0x00000002	0x80000000			(2) __x.lo = 0x00000002
0x00000003	0x80000000			(3) condition is true
0x00000004	0x80000000			(4) update __m_cnt_hi
0x00000005	0x00000001			...
0x00000006	0x00000001			*** preemption ***
0x00000007	0x00000001	(4) update __m_cnt_hi
0x00000008	0x80000000	...

At this point, it is true that __m_cnt_hi is not up to date while it was 
a moment before.  But that has no consequence, because it was merely 
reverted to the state it had before the update at 0x00000004, meaning 
that the update will simply be redone as soon as the code is executed 
again.

Sure, this could be defeated if 1) the execution of instance B didn't 
come soon enough to notice the initial lack of synchronization before 
the second half period passed, or 2) the preemption of instance A lasted 
so long that __m_cnt_hi could not be corrected back within the current 
half period.  But those are extreme cases this algorithm is _not_ made 
for, because in practice they should _not_ happen.

If the complete code sequence is always executed _at_ _least_ once per 
half period then you're OK; that's the theoretical requirement.  In 
practice you should expect it to be called many, many times per half 
period, keeping __m_cnt_hi warm.  In my case it is called a couple of 
hundred times per half period, so I'm not worried.

So, if you have a system where 1) this code might not get called for a 
long time _and_ 2) when called it can be preempted away for a long time, 
then you certainly risk having __m_cnt_hi miss a transition.  But in 
that case fixing those issues is far more important than worrying about 
a glitch in the __m_cnt_hi value.


Nicolas

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2008-10-06 15:07 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2008-09-24 16:48 [PATCH 1/2] MN10300: Move asm-arm/cnt32_to_63.h to include/linux/ David Howells
2008-09-24 16:48 ` [PATCH 2/2] MN10300: Make sched_clock() report time since boot David Howells
2008-09-26 10:58   ` Peter Zijlstra
2008-09-26 10:56 ` [PATCH 1/2] MN10300: Move asm-arm/cnt32_to_63.h to include/linux/ Peter Zijlstra
2008-09-26 12:03   ` Nicolas Pitre
2008-09-26 12:08     ` Peter Zijlstra
2008-09-29  1:21       ` Nicolas Pitre
2008-09-26 12:20     ` Peter Zijlstra
2008-09-29  1:42       ` Nicolas Pitre
2008-10-01 14:34       ` David Howells
2008-10-01 15:36         ` Nicolas Pitre
2008-10-02 22:23         ` David Howells
2008-10-03 19:15           ` Nicolas Pitre
2008-10-06 10:44           ` David Howells
2008-10-06 15:06             ` Nicolas Pitre
2008-09-26 13:27 ` David Howells
