linux-kernel.vger.kernel.org archive mirror
* [PATCH V5] irq: Track the interrupt timings
@ 2016-06-14 16:33 Daniel Lezcano
  2016-06-14 17:46 ` Nicolas Pitre
  0 siblings, 1 reply; 12+ messages in thread
From: Daniel Lezcano @ 2016-06-14 16:33 UTC (permalink / raw)
  To: daniel.lezcano, tglx
  Cc: nicolas.pitre, shreyas, linux-kernel, peterz, rafael, vincent.guittot

The interrupt framework gives a lot of information about each interrupt.

It does not keep track of when those interrupts occur though.

This patch provides a means to record the elapsed time between successive
interrupt occurrences in a per-IRQ per-CPU circular buffer, to help with
the prediction of the next occurrence using a statistical model.

A new function is added to browse the different interrupts and retrieve
the timing information stored for them.

A static key is introduced so that, when irq prediction is switched off
at runtime, the overhead is reduced to near zero. The irq timings are
expected to be used by several sub-systems, so the static key acts as a
reference counter: when the last user releases the irq timings, the irq
measurement is effectively deactivated.

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
---
V5:
  - Changed comment about 'deterministic' as the comment is confusing
  - Added license comment in the header
  - Replaced irq_timings_get/put by irq_timings_enable/disable
  - Moved IRQS_TIMINGS check in the handle_timings inline function
  - Dropped 'if !prev' as it is pointless
  - Stored time interval in nsec basis with u64 instead of u32
  - Removed redundant store
  - Removed the math
V4:
  - Added a static key
  - Added more comments for irq_timings_get_next()
  - Unified some function names to be prefixed by 'irq_timings_...'
  - Fixed a rebase error
V3:
  - Replaced ktime_get() by local_clock()
  - Shared irq are not handled
  - Simplified code by adding the timing in the irqdesc struct
  - Added a function to browse the irq timings
V2:
  - Fixed kerneldoc comment
  - Removed data field from the struct irq timing
  - Changed the lock section comment
  - Removed semi-colon style with empty stub
  - Replaced macro by static inline
  - Fixed static functions declaration
RFC:
  - initial posting
---
 include/linux/interrupt.h |  17 ++++++++
 include/linux/irqdesc.h   |   4 ++
 kernel/irq/Kconfig        |   3 ++
 kernel/irq/Makefile       |   1 +
 kernel/irq/handle.c       |   2 +
 kernel/irq/internals.h    |  61 +++++++++++++++++++++++++++
 kernel/irq/irqdesc.c      |   6 +++
 kernel/irq/manage.c       |   3 ++
 kernel/irq/timings.c      | 104 ++++++++++++++++++++++++++++++++++++++++++++++
 9 files changed, 201 insertions(+)
 create mode 100644 kernel/irq/timings.c

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 9fcabeb..9b0983a 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -675,6 +675,23 @@ static inline void init_irq_proc(void)
 }
 #endif
 
+#ifdef CONFIG_IRQ_TIMINGS
+
+#define IRQ_TIMINGS_SHIFT	3
+#define IRQ_TIMINGS_SIZE	(1 << IRQ_TIMINGS_SHIFT)
+#define IRQ_TIMINGS_MASK	(IRQ_TIMINGS_SIZE - 1)
+
+struct irq_timings {
+	u64 values[IRQ_TIMINGS_SIZE];	/* our circular buffer */
+	u64 timestamp;			/* latest timestamp */
+	unsigned int w_index;		/* current buffer index */
+};
+
+struct irq_timings *irq_timings_get_next(int *irq);
+void irq_timings_enable(void);
+void irq_timings_disable(void);
+#endif
+
 struct seq_file;
 int show_interrupts(struct seq_file *p, void *v);
 int arch_show_interrupts(struct seq_file *p, int prec);
diff --git a/include/linux/irqdesc.h b/include/linux/irqdesc.h
index b51beeb..a21ddbbe 100644
--- a/include/linux/irqdesc.h
+++ b/include/linux/irqdesc.h
@@ -12,6 +12,7 @@ struct proc_dir_entry;
 struct module;
 struct irq_desc;
 struct irq_domain;
+struct irq_timings;
 struct pt_regs;
 
 /**
@@ -51,6 +52,9 @@ struct irq_desc {
 	struct irq_data		irq_data;
 	unsigned int __percpu	*kstat_irqs;
 	irq_flow_handler_t	handle_irq;
+#ifdef CONFIG_IRQ_TIMINGS
+	struct irq_timings __percpu *timings;
+#endif
 #ifdef CONFIG_IRQ_PREFLOW_FASTEOI
 	irq_preflow_handler_t	preflow_handler;
 #endif
diff --git a/kernel/irq/Kconfig b/kernel/irq/Kconfig
index 3bbfd6a..38e551d 100644
--- a/kernel/irq/Kconfig
+++ b/kernel/irq/Kconfig
@@ -81,6 +81,9 @@ config GENERIC_MSI_IRQ_DOMAIN
 config HANDLE_DOMAIN_IRQ
 	bool
 
+config IRQ_TIMINGS
+	bool
+
 config IRQ_DOMAIN_DEBUG
 	bool "Expose hardware/virtual IRQ mapping via debugfs"
 	depends on IRQ_DOMAIN && DEBUG_FS
diff --git a/kernel/irq/Makefile b/kernel/irq/Makefile
index 2ee42e9..e1debaa9 100644
--- a/kernel/irq/Makefile
+++ b/kernel/irq/Makefile
@@ -9,3 +9,4 @@ obj-$(CONFIG_GENERIC_IRQ_MIGRATION) += cpuhotplug.o
 obj-$(CONFIG_PM_SLEEP) += pm.o
 obj-$(CONFIG_GENERIC_MSI_IRQ) += msi.o
 obj-$(CONFIG_GENERIC_IRQ_IPI) += ipi.o
+obj-$(CONFIG_IRQ_TIMINGS) += timings.o
diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c
index a15b548..cd37536 100644
--- a/kernel/irq/handle.c
+++ b/kernel/irq/handle.c
@@ -138,6 +138,8 @@ irqreturn_t handle_irq_event_percpu(struct irq_desc *desc)
 	unsigned int flags = 0, irq = desc->irq_data.irq;
 	struct irqaction *action;
 
+	handle_timings(desc);
+
 	for_each_action_of_desc(desc, action) {
 		irqreturn_t res;
 
diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index d5edcdc..cdd5413 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -57,6 +57,7 @@ enum {
 	IRQS_WAITING		= 0x00000080,
 	IRQS_PENDING		= 0x00000200,
 	IRQS_SUSPENDED		= 0x00000800,
+	IRQS_TIMINGS		= 0x00001000,
 };
 
 #include "debug.h"
@@ -223,3 +224,63 @@ irq_pm_install_action(struct irq_desc *desc, struct irqaction *action) { }
 static inline void
 irq_pm_remove_action(struct irq_desc *desc, struct irqaction *action) { }
 #endif
+
+#ifdef CONFIG_IRQ_TIMINGS
+static inline int alloc_timings(struct irq_desc *desc)
+{
+	desc->timings = alloc_percpu(struct irq_timings);
+	if (!desc->timings)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static inline void free_timings(struct irq_desc *desc)
+{
+	free_percpu(desc->timings);
+}
+
+static inline void remove_timings(struct irq_desc *desc)
+{
+	desc->istate &= ~IRQS_TIMINGS;
+}
+
+static inline void setup_timings(struct irq_desc *desc, struct irqaction *act)
+{
+	/*
+	 * We don't need the measurement because the idle code already
+	 * knows the next expiry event.
+	 */
+	if (act->flags & __IRQF_TIMER)
+		return;
+
+	desc->istate |= IRQS_TIMINGS;
+}
+
+extern struct static_key_false irq_timing_enabled;
+
+extern void __handle_timings(struct irq_desc *desc);
+
+/*
+ * The function handle_timings is only called in one place in the
+ * interrupt handler. We want this function always inlined so the
+ * code inside is embedded in the caller and the static key branching
+ * code can act at the higher level. Without the explicit __always_inline
+ * we can end up with a call to the 'handle_timings' function with a
+ * small overhead in the hotpath for nothing.
+ */
+static __always_inline void handle_timings(struct irq_desc *desc)
+{
+	if (static_key_enabled(&irq_timing_enabled)) {
+		if (desc->istate & IRQS_TIMINGS)
+			__handle_timings(desc);
+	}
+}
+#else
+static inline int alloc_timings(struct irq_desc *desc) { return 0; }
+static inline void free_timings(struct irq_desc *desc) {}
+static inline void handle_timings(struct irq_desc *desc) {}
+static inline void remove_timings(struct irq_desc *desc) {}
+static inline void setup_timings(struct irq_desc *desc,
+				 struct irqaction *act) {};
+#endif
diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
index 8731e1c..2c3ce74 100644
--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
@@ -174,6 +174,9 @@ static struct irq_desc *alloc_desc(int irq, int node, struct module *owner)
 	if (alloc_masks(desc, gfp, node))
 		goto err_kstat;
 
+	if (alloc_timings(desc))
+		goto err_mask;
+
 	raw_spin_lock_init(&desc->lock);
 	lockdep_set_class(&desc->lock, &irq_desc_lock_class);
 	init_rcu_head(&desc->rcu);
@@ -182,6 +185,8 @@ static struct irq_desc *alloc_desc(int irq, int node, struct module *owner)
 
 	return desc;
 
+err_mask:
+	free_masks(desc);
 err_kstat:
 	free_percpu(desc->kstat_irqs);
 err_desc:
@@ -193,6 +198,7 @@ static void delayed_free_desc(struct rcu_head *rhp)
 {
 	struct irq_desc *desc = container_of(rhp, struct irq_desc, rcu);
 
+	free_timings(desc);
 	free_masks(desc);
 	free_percpu(desc->kstat_irqs);
 	kfree(desc);
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 00cfc85..7a9f460 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -1350,6 +1350,8 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
 		__enable_irq(desc);
 	}
 
+	setup_timings(desc, new);
+
 	raw_spin_unlock_irqrestore(&desc->lock, flags);
 
 	/*
@@ -1480,6 +1482,7 @@ static struct irqaction *__free_irq(unsigned int irq, void *dev_id)
 		irq_settings_clr_disable_unlazy(desc);
 		irq_shutdown(desc);
 		irq_release_resources(desc);
+		remove_timings(desc);
 	}
 
 #ifdef CONFIG_SMP
diff --git a/kernel/irq/timings.c b/kernel/irq/timings.c
new file mode 100644
index 0000000..27297de
--- /dev/null
+++ b/kernel/irq/timings.c
@@ -0,0 +1,104 @@
+/*
+ * linux/kernel/irq/timings.c
+ *
+ * Copyright (C) 2016, Linaro Ltd - Daniel Lezcano <daniel.lezcano@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/irqdesc.h>
+#include <linux/percpu.h>
+#include <linux/static_key.h>
+
+#include "internals.h"
+
+DEFINE_STATIC_KEY_FALSE(irq_timing_enabled);
+
+void irq_timings_enable(void)
+{
+	static_branch_inc(&irq_timing_enabled);
+}
+
+void irq_timings_disable(void)
+{
+	static_branch_dec(&irq_timing_enabled);
+}
+
+/**
+ * __handle_timings - stores an irq timing when an interrupt occurs
+ *
+ * @desc: the irq descriptor
+ *
+ * For all interrupts with their IRQS_TIMINGS flag set, the function
+ * computes the time interval between two interrupt events and stores it
+ * in a circular buffer.
+ */
+void __handle_timings(struct irq_desc *desc)
+{
+	struct irq_timings *timings;
+	u64 prev, now, diff;
+
+	timings = this_cpu_ptr(desc->timings);
+	now = local_clock();
+	prev = timings->timestamp;
+	timings->timestamp = now;
+
+	/*
+	 * In case it is the first time this function is called, the
+	 * 'prev' variable will be zero, which reflects the time origin
+	 * when the system booted.
+	 */
+	diff = now - prev;
+
+	/* The oldest value corresponds to the next index. */
+	timings->w_index = (timings->w_index + 1) & IRQ_TIMINGS_MASK;
+	timings->values[timings->w_index] = diff;
+}
+
+/**
+ * irq_timings_get_next - return the next irq timing
+ *
+ * @irq: a pointer to an integer representing the interrupt number
+ *
+ * This function allows the interrupt descriptors to be browsed safely
+ * in order to retrieve the interrupt timings. The parameter gives the
+ * interrupt number to begin with and the function returns the timings
+ * for the next allocated irq. This approach makes it possible to go
+ * through the different interrupts without having to handle sparse irqs.
+ *
+ * The function changes @irq to the next allocated irq + 1; it should be
+ * passed back again and again until NULL is returned. Usually this function
+ * is called the first time with @irq = 0.
+ *
+ * Returns a struct irq_timings pointer, or NULL when the end of the
+ * interrupt list is reached.
+ */
+struct irq_timings *irq_timings_get_next(int *irq)
+{
+	struct irq_desc *desc;
+	int next;
+
+again:
+	/* Do a racy lookup of the next allocated irq */
+	next = irq_get_next_irq(*irq);
+	if (next >= nr_irqs)
+		return NULL;
+
+	*irq = next + 1;
+
+	/*
+	 * Now lookup the descriptor. It's RCU protected. This
+	 * descriptor might belong to an uninteresting interrupt or
+	 * one that is not measured. Look for the next interrupt in
+	 * that case.
+	 */
+	desc = irq_to_desc(next);
+	if (!desc || !(desc->istate & IRQS_TIMINGS))
+		goto again;
+
+	return this_cpu_ptr(desc->timings);
+}
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH V5] irq: Track the interrupt timings
  2016-06-14 16:33 [PATCH V5] irq: Track the interrupt timings Daniel Lezcano
@ 2016-06-14 17:46 ` Nicolas Pitre
  2016-06-14 18:11   ` Thomas Gleixner
  0 siblings, 1 reply; 12+ messages in thread
From: Nicolas Pitre @ 2016-06-14 17:46 UTC (permalink / raw)
  To: Daniel Lezcano
  Cc: tglx, shreyas, linux-kernel, peterz, rafael, vincent.guittot

On Tue, 14 Jun 2016, Daniel Lezcano wrote:

[...]

> +void __handle_timings(struct irq_desc *desc)
> +{
> +	struct irq_timings *timings;
> +	u64 prev, now, diff;
> +
> +	timings = this_cpu_ptr(desc->timings);
> +	now = local_clock();
> +	prev = timings->timestamp;
> +	timings->timestamp = now;
> +
> +	/*
> +	 * In case it is the first time this function is called, the
> +	 * 'prev' variable will be zero which reflects the time origin
> +	 * when the system booted.
> +	 */
> +	diff = now - prev;
> +
> +	/* The oldest value corresponds to the next index. */
> +	timings->w_index = (timings->w_index + 1) & IRQ_TIMINGS_MASK;
> +	timings->values[timings->w_index] = diff;
> +}

What about simply this:

void __handle_timings(struct irq_desc *desc)
{
	struct irq_timings *timings = this_cpu_ptr(desc->timings);
	timings->w_index = (timings->w_index + 1) & IRQ_TIMINGS_MASK;
	timings->values[timings->w_index] = local_clock();
}

?

Then you could s/__handle_timings/__record_irq_time/ to better represent 
what it does.  And both the difference and the summing of squares could 
be done upon entering idle instead.


Nicolas

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH V5] irq: Track the interrupt timings
  2016-06-14 17:46 ` Nicolas Pitre
@ 2016-06-14 18:11   ` Thomas Gleixner
  2016-06-14 19:52     ` Daniel Lezcano
  2016-06-14 20:10     ` [PATCH V6] " Daniel Lezcano
  0 siblings, 2 replies; 12+ messages in thread
From: Thomas Gleixner @ 2016-06-14 18:11 UTC (permalink / raw)
  To: Nicolas Pitre
  Cc: Daniel Lezcano, shreyas, linux-kernel, peterz, rafael, vincent.guittot

On Tue, 14 Jun 2016, Nicolas Pitre wrote:
> What about simply this:
> 
> void __handle_timings(struct irq_desc *desc)
> {
> 	struct irq_timings *timings = this_cpu_ptr(desc->timings);
> 	timings->w_index = (timings->w_index + 1) & IRQ_TIMINGS_MASK;
> 	timings->values[timings->w_index] = local_clock();
> }
> 
> ?
> 
> Then you could s/__handle_timings/__record_irq_time/ to better represent 
> what it does.  And both the difference and the summing of squares could 
> be done upon entering idle instead.

And make it part of the handle_timings() inline to avoid the function call.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH V5] irq: Track the interrupt timings
  2016-06-14 18:11   ` Thomas Gleixner
@ 2016-06-14 19:52     ` Daniel Lezcano
  2016-06-14 20:10     ` [PATCH V6] " Daniel Lezcano
  1 sibling, 0 replies; 12+ messages in thread
From: Daniel Lezcano @ 2016-06-14 19:52 UTC (permalink / raw)
  To: Thomas Gleixner, Nicolas Pitre
  Cc: shreyas, linux-kernel, peterz, rafael, vincent.guittot

On 06/14/2016 08:11 PM, Thomas Gleixner wrote:
> On Tue, 14 Jun 2016, Nicolas Pitre wrote:
>> What about simply this:
>>
>> void __handle_timings(struct irq_desc *desc)
>> {
>> 	struct irq_timings *timings = this_cpu_ptr(desc->timings);
>> 	timings->w_index = (timings->w_index + 1) & IRQ_TIMINGS_MASK;
>> 	timings->values[timings->w_index] = local_clock();
>> }
>>
>> ?
>>
>> Then you could s/__handle_timings/__record_irq_time/ to better represent
>> what it does.  And both the difference and the summing of squares could
>> be done upon entering idle instead.
>
> And make it part of the handle_timings() inline to avoid the function call.

Ah yes, nice !


-- 
  <http://www.linaro.org/> Linaro.org │ Open source software for ARM SoCs

Follow Linaro:  <http://www.facebook.com/pages/Linaro> Facebook |
<http://twitter.com/#!/linaroorg> Twitter |
<http://www.linaro.org/linaro-blog/> Blog

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH V6] irq: Track the interrupt timings
  2016-06-14 18:11   ` Thomas Gleixner
  2016-06-14 19:52     ` Daniel Lezcano
@ 2016-06-14 20:10     ` Daniel Lezcano
  2016-06-14 20:38       ` Nicolas Pitre
  2016-06-17 13:46       ` Thomas Gleixner
  1 sibling, 2 replies; 12+ messages in thread
From: Daniel Lezcano @ 2016-06-14 20:10 UTC (permalink / raw)
  To: daniel.lezcano, tglx
  Cc: nicolas.pitre, shreyas, linux-kernel, peterz, rafael, vincent.guittot

The interrupt framework gives a lot of information about each interrupt.

It does not keep track of when those interrupts occur though.

This patch provides a means to record the elapsed time between successive
interrupt occurrences in a per-IRQ per-CPU circular buffer, to help with
the prediction of the next occurrence using a statistical model.

A new function is added to browse the different interrupts and retrieve
the timing information stored for them.

A static key is introduced so that, when irq prediction is switched off
at runtime, the overhead is reduced to near zero. The irq timings are
expected to be used by several sub-systems, so the static key acts as a
reference counter: when the last user releases the irq timings, the irq
measurement is effectively deactivated.

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
---
V6:
  - Renamed handle_irq_timings to record_irq_time
  - Stored the event time instead of the interval time
  - Removed the 'timestamp' field from the timings structure
  - Moved _handle_irq_timings content inside record_irq_time
V5:
  - Changed comment about 'deterministic' as the comment is confusing
  - Added license comment in the header
  - Replaced irq_timings_get/put by irq_timings_enable/disable
  - Moved IRQS_TIMINGS check in the handle_timings inline function
  - Dropped 'if !prev' as it is pointless
  - Stored time interval in nsec basis with u64 instead of u32
  - Removed redundant store
  - Removed the math
V4:
  - Added a static key
  - Added more comments for irq_timings_get_next()
  - Unified some function names to be prefixed by 'irq_timings_...'
  - Fixed a rebase error
V3:
  - Replaced ktime_get() by local_clock()
  - Shared irq are not handled
  - Simplified code by adding the timing in the irqdesc struct
  - Added a function to browse the irq timings
V2:
  - Fixed kerneldoc comment
  - Removed data field from the struct irq timing
  - Changed the lock section comment
  - Removed semi-colon style with empty stub
  - Replaced macro by static inline
  - Fixed static functions declaration
RFC:
  - initial posting
---
 include/linux/interrupt.h | 16 +++++++++++
 include/linux/irqdesc.h   |  4 +++
 kernel/irq/Kconfig        |  3 ++
 kernel/irq/Makefile       |  1 +
 kernel/irq/handle.c       |  2 ++
 kernel/irq/internals.h    | 62 ++++++++++++++++++++++++++++++++++++++++
 kernel/irq/irqdesc.c      |  6 ++++
 kernel/irq/manage.c       |  3 ++
 kernel/irq/timings.c      | 73 +++++++++++++++++++++++++++++++++++++++++++++++
 9 files changed, 170 insertions(+)
 create mode 100644 kernel/irq/timings.c

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 9fcabeb..5ff1d3a 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -675,6 +675,22 @@ static inline void init_irq_proc(void)
 }
 #endif
 
+#ifdef CONFIG_IRQ_TIMINGS
+
+#define IRQ_TIMINGS_SHIFT	3
+#define IRQ_TIMINGS_SIZE	(1 << IRQ_TIMINGS_SHIFT)
+#define IRQ_TIMINGS_MASK	(IRQ_TIMINGS_SIZE - 1)
+
+struct irq_timings {
+	u64 values[IRQ_TIMINGS_SIZE];	/* our circular buffer */
+	unsigned int w_index;		/* current buffer index */
+};
+
+struct irq_timings *irq_timings_get_next(int *irq);
+void irq_timings_enable(void);
+void irq_timings_disable(void);
+#endif
+
 struct seq_file;
 int show_interrupts(struct seq_file *p, void *v);
 int arch_show_interrupts(struct seq_file *p, int prec);
diff --git a/include/linux/irqdesc.h b/include/linux/irqdesc.h
index b51beeb..a21ddbbe 100644
--- a/include/linux/irqdesc.h
+++ b/include/linux/irqdesc.h
@@ -12,6 +12,7 @@ struct proc_dir_entry;
 struct module;
 struct irq_desc;
 struct irq_domain;
+struct irq_timings;
 struct pt_regs;
 
 /**
@@ -51,6 +52,9 @@ struct irq_desc {
 	struct irq_data		irq_data;
 	unsigned int __percpu	*kstat_irqs;
 	irq_flow_handler_t	handle_irq;
+#ifdef CONFIG_IRQ_TIMINGS
+	struct irq_timings __percpu *timings;
+#endif
 #ifdef CONFIG_IRQ_PREFLOW_FASTEOI
 	irq_preflow_handler_t	preflow_handler;
 #endif
diff --git a/kernel/irq/Kconfig b/kernel/irq/Kconfig
index 3bbfd6a..38e551d 100644
--- a/kernel/irq/Kconfig
+++ b/kernel/irq/Kconfig
@@ -81,6 +81,9 @@ config GENERIC_MSI_IRQ_DOMAIN
 config HANDLE_DOMAIN_IRQ
 	bool
 
+config IRQ_TIMINGS
+	bool
+
 config IRQ_DOMAIN_DEBUG
 	bool "Expose hardware/virtual IRQ mapping via debugfs"
 	depends on IRQ_DOMAIN && DEBUG_FS
diff --git a/kernel/irq/Makefile b/kernel/irq/Makefile
index 2ee42e9..e1debaa9 100644
--- a/kernel/irq/Makefile
+++ b/kernel/irq/Makefile
@@ -9,3 +9,4 @@ obj-$(CONFIG_GENERIC_IRQ_MIGRATION) += cpuhotplug.o
 obj-$(CONFIG_PM_SLEEP) += pm.o
 obj-$(CONFIG_GENERIC_MSI_IRQ) += msi.o
 obj-$(CONFIG_GENERIC_IRQ_IPI) += ipi.o
+obj-$(CONFIG_IRQ_TIMINGS) += timings.o
diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c
index a15b548..335847e 100644
--- a/kernel/irq/handle.c
+++ b/kernel/irq/handle.c
@@ -138,6 +138,8 @@ irqreturn_t handle_irq_event_percpu(struct irq_desc *desc)
 	unsigned int flags = 0, irq = desc->irq_data.irq;
 	struct irqaction *action;
 
+	record_irq_time(desc);
+
 	for_each_action_of_desc(desc, action) {
 		irqreturn_t res;
 
diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index d5edcdc..734fe38 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -57,6 +57,7 @@ enum {
 	IRQS_WAITING		= 0x00000080,
 	IRQS_PENDING		= 0x00000200,
 	IRQS_SUSPENDED		= 0x00000800,
+	IRQS_TIMINGS		= 0x00001000,
 };
 
 #include "debug.h"
@@ -223,3 +224,64 @@ irq_pm_install_action(struct irq_desc *desc, struct irqaction *action) { }
 static inline void
 irq_pm_remove_action(struct irq_desc *desc, struct irqaction *action) { }
 #endif
+
+#ifdef CONFIG_IRQ_TIMINGS
+static inline int alloc_timings(struct irq_desc *desc)
+{
+	desc->timings = alloc_percpu(struct irq_timings);
+	if (!desc->timings)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static inline void free_timings(struct irq_desc *desc)
+{
+	free_percpu(desc->timings);
+}
+
+static inline void remove_timings(struct irq_desc *desc)
+{
+	desc->istate &= ~IRQS_TIMINGS;
+}
+
+static inline void setup_timings(struct irq_desc *desc, struct irqaction *act)
+{
+	/*
+	 * We don't need the measurement because the idle code already
+	 * knows the next expiry event.
+	 */
+	if (act->flags & __IRQF_TIMER)
+		return;
+
+	desc->istate |= IRQS_TIMINGS;
+}
+
+extern struct static_key_false irq_timing_enabled;
+
+/*
+ * The function record_irq_time is only called in one place in the
+ * interrupt handler. We want this function always inlined so the code
+ * inside is embedded in the caller and the static key branching
+ * code can act at the higher level. Without the explicit
+ * __always_inline we can end up with a function call and a small
+ * overhead in the hotpath for nothing.
+ */
+static __always_inline void record_irq_time(struct irq_desc *desc)
+{
+	if (static_key_enabled(&irq_timing_enabled)) {
+		if (desc->istate & IRQS_TIMINGS) {
+			struct irq_timings *timings = this_cpu_ptr(desc->timings);
+			timings->w_index = (timings->w_index + 1) & IRQ_TIMINGS_MASK;
+			timings->values[timings->w_index] = local_clock();
+		}
+	}
+}
+#else
+static inline int alloc_timings(struct irq_desc *desc) { return 0; }
+static inline void free_timings(struct irq_desc *desc) {}
+static inline void remove_timings(struct irq_desc *desc) {}
+static inline void setup_timings(struct irq_desc *desc,
+				 struct irqaction *act) {};
+static inline void record_irq_time(struct irq_desc *desc) {}
+#endif
diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
index 8731e1c..2c3ce74 100644
--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
@@ -174,6 +174,9 @@ static struct irq_desc *alloc_desc(int irq, int node, struct module *owner)
 	if (alloc_masks(desc, gfp, node))
 		goto err_kstat;
 
+	if (alloc_timings(desc))
+		goto err_mask;
+
 	raw_spin_lock_init(&desc->lock);
 	lockdep_set_class(&desc->lock, &irq_desc_lock_class);
 	init_rcu_head(&desc->rcu);
@@ -182,6 +185,8 @@ static struct irq_desc *alloc_desc(int irq, int node, struct module *owner)
 
 	return desc;
 
+err_mask:
+	free_masks(desc);
 err_kstat:
 	free_percpu(desc->kstat_irqs);
 err_desc:
@@ -193,6 +198,7 @@ static void delayed_free_desc(struct rcu_head *rhp)
 {
 	struct irq_desc *desc = container_of(rhp, struct irq_desc, rcu);
 
+	free_timings(desc);
 	free_masks(desc);
 	free_percpu(desc->kstat_irqs);
 	kfree(desc);
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 00cfc85..7a9f460 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -1350,6 +1350,8 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
 		__enable_irq(desc);
 	}
 
+	setup_timings(desc, new);
+
 	raw_spin_unlock_irqrestore(&desc->lock, flags);
 
 	/*
@@ -1480,6 +1482,7 @@ static struct irqaction *__free_irq(unsigned int irq, void *dev_id)
 		irq_settings_clr_disable_unlazy(desc);
 		irq_shutdown(desc);
 		irq_release_resources(desc);
+		remove_timings(desc);
 	}
 
 #ifdef CONFIG_SMP
diff --git a/kernel/irq/timings.c b/kernel/irq/timings.c
new file mode 100644
index 0000000..68da7f3
--- /dev/null
+++ b/kernel/irq/timings.c
@@ -0,0 +1,73 @@
+/*
+ * linux/kernel/irq/timings.c
+ *
+ * Copyright (C) 2016, Linaro Ltd - Daniel Lezcano <daniel.lezcano@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/irqdesc.h>
+#include <linux/percpu.h>
+#include <linux/static_key.h>
+
+#include "internals.h"
+
+DEFINE_STATIC_KEY_FALSE(irq_timing_enabled);
+
+void irq_timings_enable(void)
+{
+	static_branch_inc(&irq_timing_enabled);
+}
+
+void irq_timings_disable(void)
+{
+	static_branch_dec(&irq_timing_enabled);
+}
+
+/**
+ * irq_timings_get_next - return the next irq timing
+ *
+ * @irq: a pointer to an integer representing the interrupt number
+ *
+ * This function allows the interrupt descriptors to be browsed safely
+ * in order to retrieve the interrupt timings. The parameter gives the
+ * interrupt number to begin with and the function returns the timings
+ * for the next allocated irq. This approach makes it possible to go
+ * through the different interrupts without having to handle sparse irqs.
+ *
+ * The function changes @irq to the next allocated irq + 1; it should be
+ * passed back again and again until NULL is returned. Usually this function
+ * is called the first time with @irq = 0.
+ *
+ * Returns a struct irq_timings pointer, or NULL when the end of the
+ * interrupt list is reached.
+ */
+struct irq_timings *irq_timings_get_next(int *irq)
+{
+	struct irq_desc *desc;
+	int next;
+
+again:
+	/* Do a racy lookup of the next allocated irq */
+	next = irq_get_next_irq(*irq);
+	if (next >= nr_irqs)
+		return NULL;
+
+	*irq = next + 1;
+
+	/*
+	 * Now lookup the descriptor. It's RCU protected. This
+	 * descriptor might belong to an uninteresting interrupt or
+	 * one that is not measured. Look for the next interrupt in
+	 * that case.
+	 */
+	desc = irq_to_desc(next);
+	if (!desc || !(desc->istate & IRQS_TIMINGS))
+		goto again;
+
+	return this_cpu_ptr(desc->timings);
+}
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH V6] irq: Track the interrupt timings
  2016-06-14 20:10     ` [PATCH V6] " Daniel Lezcano
@ 2016-06-14 20:38       ` Nicolas Pitre
  2016-06-17 13:46       ` Thomas Gleixner
  1 sibling, 0 replies; 12+ messages in thread
From: Nicolas Pitre @ 2016-06-14 20:38 UTC (permalink / raw)
  To: Daniel Lezcano
  Cc: tglx, shreyas, linux-kernel, peterz, rafael, vincent.guittot

On Tue, 14 Jun 2016, Daniel Lezcano wrote:

> The interrupt framework gives a lot of information about each interrupt.
> 
> It does not keep track of when those interrupts occur though.
> 
> This patch provides a means to record the elapsed time between successive
> interrupt occurrences in a per-IRQ per-CPU circular buffer, to help with
> the prediction of the next occurrence using a statistical model.
> 
> A new function is added to browse the different interrupts and retrieve
> the timing information stored for them.
> 
> A static key is introduced so that, when irq prediction is switched off
> at runtime, the overhead is reduced to near zero. The irq timings are
> expected to be used by several sub-systems, so the static key acts as a
> reference counter: when the last user releases the irq timings, the irq
> measurement is effectively deactivated.
> 
> Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
> Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
> ---
> V6:
>   - Renamed handle_irq_timings to record_irq_time
>   - Stored the event time instead of the interval time
>   - Removed the 'timestamp' field from the timings structure
>   - Moved _handle_irq_timings content inside record_irq_time

Looks fine to me.


Nicolas

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH V6] irq: Track the interrupt timings
  2016-06-14 20:10     ` [PATCH V6] " Daniel Lezcano
  2016-06-14 20:38       ` Nicolas Pitre
@ 2016-06-17 13:46       ` Thomas Gleixner
  2016-06-17 17:16         ` [PATCH V7] " Daniel Lezcano
  1 sibling, 1 reply; 12+ messages in thread
From: Thomas Gleixner @ 2016-06-17 13:46 UTC (permalink / raw)
  To: Daniel Lezcano
  Cc: nicolas.pitre, shreyas, linux-kernel, peterz, rafael, vincent.guittot

On Tue, 14 Jun 2016, Daniel Lezcano wrote:
> +/**
> + * irq_timings_get_next - return the next irq timing
> + *
> + * @irq: a pointer to an integer representing the interrupt number
> + *
> + * This function allows the interrupt descriptors to be browsed safely
> + * in order to retrieve the interrupt timings. The parameter gives the
> + * interrupt number to begin with and the function returns the timings
> + * for the next allocated irq. This approach makes it possible to go
> + * through the different interrupts without having to handle sparse irqs.
> + *
> + * The function changes @irq to the next allocated irq + 1; it should be
> + * passed back again and again until NULL is returned. Usually this function
> + * is called the first time with @irq = 0.
> + *
> + * Returns a struct irq_timings pointer, or NULL when the end of the
> + * interrupt list is reached.
> + */
> +struct irq_timings *irq_timings_get_next(int *irq)
> +{
> +	struct irq_desc *desc;
> +	int next;
> +
> +again:
> +	/* Do a racy lookup of the next allocated irq */
> +	next = irq_get_next_irq(*irq);
> +	if (next >= nr_irqs)
> +		return NULL;
> +
> +	*irq = next + 1;
> +
> +	/*
> +	 * Now lookup the descriptor. It's RCU protected. This

Please mention in the function description above that this function
must be called inside of a rcu_read() locked section.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH V7] irq: Track the interrupt timings
  2016-06-17 13:46       ` Thomas Gleixner
@ 2016-06-17 17:16         ` Daniel Lezcano
  2016-06-23  8:41           ` Thomas Gleixner
  0 siblings, 1 reply; 12+ messages in thread
From: Daniel Lezcano @ 2016-06-17 17:16 UTC (permalink / raw)
  To: tglx
  Cc: daniel.lezcano, nicolas.pitre, shreyas, linux-kernel, peterz,
	rafael, vincent.guittot

The interrupt framework gives a lot of information about each interrupt.

It does not keep track of when those interrupts occur though.

This patch provides a means to record the elapsed time between successive
interrupt occurrences in a per-IRQ per-CPU circular buffer, to help with the
prediction of the next occurrence using a statistical model.

A new function is added to browse the different interrupts and retrieve the
timing information stored in them.

A static key is introduced so that when irq prediction is switched off at
runtime, the overhead is reduced to near zero. The irq timings are expected
to be used by several sub-systems, so the static key is a reference counter:
when the last user releases the irq timings, the irq measurement is
effectively deactivated.

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
---
V7:
  - Mentioned in the irq_timings_get_next() function description that
    the function must be called inside an RCU read-side locked section
V6:
  - Renamed handle_irq_timings to record_irq_time
  - Stored the event time instead of the interval time
  - Removed the 'timestamp' field from the timings structure
  - Moved _handle_irq_timings content inside record_irq_time
V5:
  - Changed comment about 'deterministic' as the comment is confusing
  - Added license comment in the header
  - Replaced irq_timings_get/put by irq_timings_enable/disable
  - Moved IRQS_TIMINGS check in the handle_timings inline function
  - Dropped 'if !prev' as it is pointless
  - Stored time interval in nsec basis with u64 instead of u32
  - Removed redundant store
  - Removed the math
V4:
  - Added a static key
  - Added more comments for irq_timings_get_next()
  - Unified some function names to be prefixed by 'irq_timings_...'
  - Fixed a rebase error
V3:
  - Replaced ktime_get() by local_clock()
  - Shared irqs are not handled
  - Simplified code by adding the timing in the irqdesc struct
  - Added a function to browse the irq timings
V2:
  - Fixed kerneldoc comment
  - Removed data field from the struct irq timing
  - Changed the lock section comment
  - Removed semi-colon style with empty stub
  - Replaced macro by static inline
  - Fixed static functions declaration
RFC:
  - initial posting
---
 include/linux/interrupt.h | 16 ++++++++++
 include/linux/irqdesc.h   |  4 +++
 kernel/irq/Kconfig        |  3 ++
 kernel/irq/Makefile       |  1 +
 kernel/irq/handle.c       |  2 ++
 kernel/irq/internals.h    | 62 +++++++++++++++++++++++++++++++++++++++
 kernel/irq/irqdesc.c      |  6 ++++
 kernel/irq/manage.c       |  3 ++
 kernel/irq/timings.c      | 75 +++++++++++++++++++++++++++++++++++++++++++++++
 9 files changed, 172 insertions(+)
 create mode 100644 kernel/irq/timings.c

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 9fcabeb..5ff1d3a 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -675,6 +675,22 @@ static inline void init_irq_proc(void)
 }
 #endif
 
+#ifdef CONFIG_IRQ_TIMINGS
+
+#define IRQ_TIMINGS_SHIFT	3
+#define IRQ_TIMINGS_SIZE	(1 << IRQ_TIMINGS_SHIFT)
+#define IRQ_TIMINGS_MASK	(IRQ_TIMINGS_SIZE - 1)
+
+struct irq_timings {
+	u64 values[IRQ_TIMINGS_SIZE];	/* our circular buffer */
+	unsigned int w_index;		/* current buffer index */
+};
+
+struct irq_timings *irq_timings_get_next(int *irq);
+void irq_timings_enable(void);
+void irq_timings_disable(void);
+#endif
+
 struct seq_file;
 int show_interrupts(struct seq_file *p, void *v);
 int arch_show_interrupts(struct seq_file *p, int prec);
diff --git a/include/linux/irqdesc.h b/include/linux/irqdesc.h
index b51beeb..a21ddbbe 100644
--- a/include/linux/irqdesc.h
+++ b/include/linux/irqdesc.h
@@ -12,6 +12,7 @@ struct proc_dir_entry;
 struct module;
 struct irq_desc;
 struct irq_domain;
+struct irq_timings;
 struct pt_regs;
 
 /**
@@ -51,6 +52,9 @@ struct irq_desc {
 	struct irq_data		irq_data;
 	unsigned int __percpu	*kstat_irqs;
 	irq_flow_handler_t	handle_irq;
+#ifdef CONFIG_IRQ_TIMINGS
+	struct irq_timings __percpu *timings;
+#endif
 #ifdef CONFIG_IRQ_PREFLOW_FASTEOI
 	irq_preflow_handler_t	preflow_handler;
 #endif
diff --git a/kernel/irq/Kconfig b/kernel/irq/Kconfig
index 3bbfd6a..38e551d 100644
--- a/kernel/irq/Kconfig
+++ b/kernel/irq/Kconfig
@@ -81,6 +81,9 @@ config GENERIC_MSI_IRQ_DOMAIN
 config HANDLE_DOMAIN_IRQ
 	bool
 
+config IRQ_TIMINGS
+	bool
+
 config IRQ_DOMAIN_DEBUG
 	bool "Expose hardware/virtual IRQ mapping via debugfs"
 	depends on IRQ_DOMAIN && DEBUG_FS
diff --git a/kernel/irq/Makefile b/kernel/irq/Makefile
index 2ee42e9..e1debaa9 100644
--- a/kernel/irq/Makefile
+++ b/kernel/irq/Makefile
@@ -9,3 +9,4 @@ obj-$(CONFIG_GENERIC_IRQ_MIGRATION) += cpuhotplug.o
 obj-$(CONFIG_PM_SLEEP) += pm.o
 obj-$(CONFIG_GENERIC_MSI_IRQ) += msi.o
 obj-$(CONFIG_GENERIC_IRQ_IPI) += ipi.o
+obj-$(CONFIG_IRQ_TIMINGS) += timings.o
diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c
index a15b548..335847e 100644
--- a/kernel/irq/handle.c
+++ b/kernel/irq/handle.c
@@ -138,6 +138,8 @@ irqreturn_t handle_irq_event_percpu(struct irq_desc *desc)
 	unsigned int flags = 0, irq = desc->irq_data.irq;
 	struct irqaction *action;
 
+	record_irq_time(desc);
+
 	for_each_action_of_desc(desc, action) {
 		irqreturn_t res;
 
diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index d5edcdc..734fe38 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -57,6 +57,7 @@ enum {
 	IRQS_WAITING		= 0x00000080,
 	IRQS_PENDING		= 0x00000200,
 	IRQS_SUSPENDED		= 0x00000800,
+	IRQS_TIMINGS		= 0x00001000,
 };
 
 #include "debug.h"
@@ -223,3 +224,64 @@ irq_pm_install_action(struct irq_desc *desc, struct irqaction *action) { }
 static inline void
 irq_pm_remove_action(struct irq_desc *desc, struct irqaction *action) { }
 #endif
+
+#ifdef CONFIG_IRQ_TIMINGS
+static inline int alloc_timings(struct irq_desc *desc)
+{
+	desc->timings = alloc_percpu(struct irq_timings);
+	if (!desc->timings)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static inline void free_timings(struct irq_desc *desc)
+{
+	free_percpu(desc->timings);
+}
+
+static inline void remove_timings(struct irq_desc *desc)
+{
+	desc->istate &= ~IRQS_TIMINGS;
+}
+
+static inline void setup_timings(struct irq_desc *desc, struct irqaction *act)
+{
+	/*
+	 * We don't need the measurement because the idle code already
+	 * knows the next expiry event.
+	 */
+	if (act->flags & __IRQF_TIMER)
+		return;
+
+	desc->istate |= IRQS_TIMINGS;
+}
+
+extern struct static_key_false irq_timing_enabled;
+
+/*
+ * The function record_irq_time is only called in one place in the
+ * interrupts handler. We want this function always inline so the code
+ * inside is embedded in the function and the static key branching
+ * code can act at the higher level. Without the explicit
+ * __always_inline we can end up with a function call and a small
+ * overhead in the hotpath for nothing.
+ */
+static __always_inline void record_irq_time(struct irq_desc *desc)
+{
+	if (static_key_enabled(&irq_timing_enabled)) {
+		if (desc->istate & IRQS_TIMINGS) {
+			struct irq_timings *timings = this_cpu_ptr(desc->timings);
+			timings->w_index = (timings->w_index + 1) & IRQ_TIMINGS_MASK;
+			timings->values[timings->w_index] = local_clock();
+		}
+	}
+}
+#else
+static inline int alloc_timings(struct irq_desc *desc) { return 0; }
+static inline void free_timings(struct irq_desc *desc) {}
+static inline void remove_timings(struct irq_desc *desc) {}
+static inline void setup_timings(struct irq_desc *desc,
+				 struct irqaction *act) { }
+static inline void record_irq_time(struct irq_desc *desc) {}
+#endif
diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
index 8731e1c..2c3ce74 100644
--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
@@ -174,6 +174,9 @@ static struct irq_desc *alloc_desc(int irq, int node, struct module *owner)
 	if (alloc_masks(desc, gfp, node))
 		goto err_kstat;
 
+	if (alloc_timings(desc))
+		goto err_mask;
+
 	raw_spin_lock_init(&desc->lock);
 	lockdep_set_class(&desc->lock, &irq_desc_lock_class);
 	init_rcu_head(&desc->rcu);
@@ -182,6 +185,8 @@ static struct irq_desc *alloc_desc(int irq, int node, struct module *owner)
 
 	return desc;
 
+err_mask:
+	free_masks(desc);
 err_kstat:
 	free_percpu(desc->kstat_irqs);
 err_desc:
@@ -193,6 +198,7 @@ static void delayed_free_desc(struct rcu_head *rhp)
 {
 	struct irq_desc *desc = container_of(rhp, struct irq_desc, rcu);
 
+	free_timings(desc);
 	free_masks(desc);
 	free_percpu(desc->kstat_irqs);
 	kfree(desc);
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 00cfc85..7a9f460 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -1350,6 +1350,8 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
 		__enable_irq(desc);
 	}
 
+	setup_timings(desc, new);
+
 	raw_spin_unlock_irqrestore(&desc->lock, flags);
 
 	/*
@@ -1480,6 +1482,7 @@ static struct irqaction *__free_irq(unsigned int irq, void *dev_id)
 		irq_settings_clr_disable_unlazy(desc);
 		irq_shutdown(desc);
 		irq_release_resources(desc);
+		remove_timings(desc);
 	}
 
 #ifdef CONFIG_SMP
diff --git a/kernel/irq/timings.c b/kernel/irq/timings.c
new file mode 100644
index 0000000..e6f1d61
--- /dev/null
+++ b/kernel/irq/timings.c
@@ -0,0 +1,75 @@
+/*
+ * linux/kernel/irq/timings.c
+ *
+ * Copyright (C) 2016, Linaro Ltd - Daniel Lezcano <daniel.lezcano@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/irqdesc.h>
+#include <linux/percpu.h>
+#include <linux/static_key.h>
+
+#include "internals.h"
+
+DEFINE_STATIC_KEY_FALSE(irq_timing_enabled);
+
+void irq_timings_enable(void)
+{
+	static_branch_inc(&irq_timing_enabled);
+}
+
+void irq_timings_disable(void)
+{
+	static_branch_dec(&irq_timing_enabled);
+}
+
+/**
+ * irq_timings_get_next - return the next irq timing
+ *
+ * @irq: a pointer to an integer representing the interrupt number
+ *
+ * Must be called under rcu_read_lock().
+ *
+ * This function allows safely browsing the interrupt descriptors in order
+ * to retrieve the interrupt timings. The parameter gives the interrupt
+ * number to begin with, and the function returns the interrupt timings for
+ * the next allocated irq. This approach makes it possible to go through
+ * the different interrupts without having to handle the sparse irq space.
+ *
+ * The function changes @irq to the next allocated irq + 1; it should be
+ * passed back again and again until NULL is returned. Usually this function
+ * is called the first time with @irq = 0.
+ *
+ * Returns a struct irq_timings pointer, or NULL if we reach the end of the
+ * interrupt list.
+ */
+struct irq_timings *irq_timings_get_next(int *irq)
+{
+	struct irq_desc *desc;
+	int next;
+
+again:
+	/* Do a racy lookup of the next allocated irq */
+	next = irq_get_next_irq(*irq);
+	if (next >= nr_irqs)
+		return NULL;
+
+	*irq = next + 1;
+
+	/*
+	 * Now lookup the descriptor. It's RCU protected. This
+	 * descriptor might belong to an uninteresting interrupt or
+	 * one that is not measured. Look for the next interrupt in
+	 * that case.
+	 */
+	desc = irq_to_desc(next);
+	if (!desc || !(desc->istate & IRQS_TIMINGS))
+		goto again;
+
+	return this_cpu_ptr(desc->timings);
+}
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH V7] irq: Track the interrupt timings
  2016-06-17 17:16         ` [PATCH V7] " Daniel Lezcano
@ 2016-06-23  8:41           ` Thomas Gleixner
  2016-06-23  9:39             ` Daniel Lezcano
  0 siblings, 1 reply; 12+ messages in thread
From: Thomas Gleixner @ 2016-06-23  8:41 UTC (permalink / raw)
  To: Daniel Lezcano
  Cc: nicolas.pitre, shreyas, linux-kernel, peterz, rafael, vincent.guittot

On Fri, 17 Jun 2016, Daniel Lezcano wrote:
> The interrupt framework gives a lot of information about each interrupt.
> 
> It does not keep track of when those interrupts occur though.
> 
> This patch provides a means to record the elapsed time between successive
> interrupt occurrences in a per-IRQ per-CPU circular buffer, to help with the
> prediction of the next occurrence using a statistical model.
> 
> A new function is added to browse the different interrupts and retrieve the
> timing information stored in them.
> 
> A static key is introduced so that when irq prediction is switched off at
> runtime, the overhead is reduced to near zero. The irq timings are expected
> to be used by several sub-systems, so the static key is a reference counter:
> when the last user releases the irq timings, the irq measurement is
> effectively deactivated.

Before merging this I really have to ask a few more questions. I'm a bit
worried about the usage site of this. It's going to iterate over all
interrupts in the system to do a next interrupt prediction. On larger machines
that's going to be quite some work and you touch a gazillion of cache lines
and many of them just to figure out that nothing happened.

Is it really required to do this per interrupt rather than providing per cpu
statistics of interrupts which arrived in the last X seconds or whatever
timeframe is relevant for this.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH V7] irq: Track the interrupt timings
  2016-06-23  8:41           ` Thomas Gleixner
@ 2016-06-23  9:39             ` Daniel Lezcano
  2016-06-23 10:12               ` Thomas Gleixner
  0 siblings, 1 reply; 12+ messages in thread
From: Daniel Lezcano @ 2016-06-23  9:39 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: nicolas.pitre, shreyas, linux-kernel, peterz, rafael, vincent.guittot

On 06/23/2016 10:41 AM, Thomas Gleixner wrote:
> On Fri, 17 Jun 2016, Daniel Lezcano wrote:
>> The interrupt framework gives a lot of information about each interrupt.
>>
>> It does not keep track of when those interrupts occur though.
>>
>> This patch provides a means to record the elapsed time between successive
>> interrupt occurrences in a per-IRQ per-CPU circular buffer, to help with the
>> prediction of the next occurrence using a statistical model.
>>
>> A new function is added to browse the different interrupts and retrieve the
>> timing information stored in them.
>>
>> A static key is introduced so that when irq prediction is switched off at
>> runtime, the overhead is reduced to near zero. The irq timings are expected
>> to be used by several sub-systems, so the static key is a reference counter:
>> when the last user releases the irq timings, the irq measurement is
>> effectively deactivated.
>
> Before merging this I really have to ask a few more questions. I'm a bit
> worried about the usage site of this. It's going to iterate over all
> interrupts in the system to do a next interrupt prediction. On larger machines
> that's going to be quite some work and you touch a gazillion of cache lines
> and many of them just to figure out that nothing happened.
>
> Is it really required to do this per interrupt rather than providing per cpu
> statistics of interrupts which arrived in the last X seconds or whatever
> timeframe is relevant for this.

Perhaps I am misunderstanding, but if the statistics are done per cpu 
without tracking per-irq timings, it is not possible to extract a 
repeating pattern for each irq and have an accurate prediction.

Today, the code stores per cpu and per irq timings and the usage is to 
compute the next irq event by taking the earliest next irq event on the 
current cpu.

@@ -51,6 +52,9 @@ struct irq_desc {
         struct irq_data         irq_data;
         unsigned int __percpu   *kstat_irqs;
         irq_flow_handler_t      handle_irq;
+#ifdef CONFIG_IRQ_TIMINGS
+       struct irq_timings __percpu *timings;
+#endif
  #ifdef CONFIG_IRQ_PREFLOW_FASTEOI
         irq_preflow_handler_t   preflow_handler;
  #endif

If we step back and look at the potential users of this framework, we have:

  - mobile: by nature the number of interrupt lines is small and the 
devices are "slow"

  - desktop and laptop: only a few interrupts really interest us, 
ethernet and ssd (the other ones are rare, or ignored like timers or IPIs)

  - server: the number of interrupt lines is larger, but not by much.

  - other big systems: I don't know

Usually, servers and super-sized systems want full performance and low 
latency. For this reason the kernel is configured with a periodic tick, 
which makes the next-event prediction algorithm superfluous, especially 
when the latency is set to 0. So I don't think the irq timings + next 
irq event code path will ever be used in this case.

As you mentioned, there are some parts we can evolve and optimize, like 
avoiding lookups on a cpu with no irq events.





-- 
  <http://www.linaro.org/> Linaro.org │ Open source software for ARM SoCs

Follow Linaro:  <http://www.facebook.com/pages/Linaro> Facebook |
<http://twitter.com/#!/linaroorg> Twitter |
<http://www.linaro.org/linaro-blog/> Blog

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH V7] irq: Track the interrupt timings
  2016-06-23  9:39             ` Daniel Lezcano
@ 2016-06-23 10:12               ` Thomas Gleixner
  2016-06-23 13:12                 ` Daniel Lezcano
  0 siblings, 1 reply; 12+ messages in thread
From: Thomas Gleixner @ 2016-06-23 10:12 UTC (permalink / raw)
  To: Daniel Lezcano
  Cc: nicolas.pitre, shreyas, linux-kernel, peterz, rafael, vincent.guittot

On Thu, 23 Jun 2016, Daniel Lezcano wrote:
> On 06/23/2016 10:41 AM, Thomas Gleixner wrote:
> > Is it really required to do this per interrupt rather than providing per cpu
> > statistics of interrupts which arrived in the last X seconds or whatever
> > timeframe is relevant for this.
> 
> Perhaps I am misunderstanding but if the statistics are done per cpu without
> tracking per irq timings, it is not possible to extract a repeating pattern
> for each irq and have an accurate prediction.

I don't see why you need a repeating pattern for each irq. All you want to
know is whether there are repeating patterns of interrupts on a particular
cpu.

struct per_cpu_stat {
       u32	    irq;
       u64	    ts;
};

storing 32 entries of the above should give you enough information about
patterns etc. If you have a high rate of interrupts on that cpu it does not
matter at all whether thats from one or several devices. If you have only a
few then this storage is sufficient to get the desired information.

> If we step back and look at the potential users of this framework, we have:
> 
>  - mobile: by nature the number of interrupt lines is small and the devices
> are "slow"
> 
>  - desktop and laptop: only a few interrupts really interest us, ethernet
> and ssd (the other ones are rare, or ignored like timers or IPIs)

You still walk ALL interrupts. On my laptop that's 22 of them. And you touch
every single per cpu storage of each interrupt.
 
>  - server: the number of interrupt lines is larger, but not by much.

Not so much? 158 interrupts on one of my larger machines.

> Usually, servers and super-sized systems want full performance and low
> latency. For this reason the kernel is configured with a periodic tick,
> which makes the next-event prediction algorithm superfluous, especially
> when the latency is set to 0. So I don't think the irq timings + next irq
> event code path will ever be used in this case.

Well, if such a machine runs with NOHZ=n then fine. But there are enough
machines where NOHZ is enabled so you get better power savings during times
where the machine is not loaded, but you want to have performance and low
latency if there is work to do.
 
> As you mentioned, there are some parts we can evolve and optimize, like
> avoiding lookups on a cpu with no irq events.

You better think about this now.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH V7] irq: Track the interrupt timings
  2016-06-23 10:12               ` Thomas Gleixner
@ 2016-06-23 13:12                 ` Daniel Lezcano
  0 siblings, 0 replies; 12+ messages in thread
From: Daniel Lezcano @ 2016-06-23 13:12 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: nicolas.pitre, shreyas, linux-kernel, peterz, rafael, vincent.guittot

On 06/23/2016 12:12 PM, Thomas Gleixner wrote:
> On Thu, 23 Jun 2016, Daniel Lezcano wrote:
>> On 06/23/2016 10:41 AM, Thomas Gleixner wrote:
>>> Is it really required to do this per interrupt rather than providing per cpu
>>> statistics of interrupts which arrived in the last X seconds or whatever
>>> timeframe is relevant for this.
>>
>> Perhaps I am misunderstanding but if the statistics are done per cpu without
>> tracking per irq timings, it is not possible to extract a repeating pattern
>> for each irq and have an accurate prediction.
>
> I don't see why you need a repeating pattern for each irq. All you want to
> know is whether there are repeating patterns of interrupts on a particular
> cpu.
>
> struct per_cpu_stat {
>         u32	    irq;
>         u64	    ts;
> };
>
> storing 32 entries of the above should give you enough information about
> patterns etc. If you have a high rate of interrupts on that cpu it does not
> matter at all whether thats from one or several devices. If you have only a
> few then this storage is sufficient to get the desired information.

Mmmh, yes. I will investigate this patchset by replacing the percpu 
irqdesc's timings field with a per-cpu irq event timings array.

Thanks !

   -- Daniel


-- 
  <http://www.linaro.org/> Linaro.org │ Open source software for ARM SoCs

Follow Linaro:  <http://www.facebook.com/pages/Linaro> Facebook |
<http://twitter.com/#!/linaroorg> Twitter |
<http://www.linaro.org/linaro-blog/> Blog

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2016-06-23 13:12 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-06-14 16:33 [PATCH V5] irq: Track the interrupt timings Daniel Lezcano
2016-06-14 17:46 ` Nicolas Pitre
2016-06-14 18:11   ` Thomas Gleixner
2016-06-14 19:52     ` Daniel Lezcano
2016-06-14 20:10     ` [PATCH V6] " Daniel Lezcano
2016-06-14 20:38       ` Nicolas Pitre
2016-06-17 13:46       ` Thomas Gleixner
2016-06-17 17:16         ` [PATCH V7] " Daniel Lezcano
2016-06-23  8:41           ` Thomas Gleixner
2016-06-23  9:39             ` Daniel Lezcano
2016-06-23 10:12               ` Thomas Gleixner
2016-06-23 13:12                 ` Daniel Lezcano

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).