linux-kernel.vger.kernel.org archive mirror
* [PATCH RFC -tip 0/6] IRQ-bound performance events
@ 2012-12-17 11:51 Alexander Gordeev
  2012-12-17 11:51 ` [PATCH RFC -tip 1/6] perf/core: " Alexander Gordeev
                   ` (5 more replies)
  0 siblings, 6 replies; 8+ messages in thread
From: Alexander Gordeev @ 2012-12-17 11:51 UTC (permalink / raw)
  To: linux-kernel; +Cc: Thomas Gleixner, Ingo Molnar, Arnaldo Carvalho de Melo

Hello,

This patchset is against the perf/core branch.

This is an attempt to introduce IRQ-bound performance events -
ones that count only in the context of a hardware interrupt handler.
The aim is to measure events that cannot be measured using the
existing task-bound or CPU-bound counters (e.g. the L1 cache misses
incurred by a particular hardware interrupt handler, or that
handler's duration).

The implementation is pretty straightforward: an IRQ-bound event
is registered with the IRQ descriptor and gets enabled/disabled
using new PMU callbacks: pmu_enable_irq() and pmu_disable_irq().
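
From user space the only visible change is a new perf_event_open()
flag that makes the pid argument carry an IRQ number; the event is
still opened per CPU (cpu == -1 is rejected in this mode). A minimal
sketch of opening an IRQ-bound counter, assuming this series is
applied (error handling omitted):

	#include <linux/perf_event.h>
	#include <string.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	#ifndef PERF_FLAG_PID_IRQ
	#define PERF_FLAG_PID_IRQ (1U << 3)	/* added by patch 1/6 */
	#endif

	/* count kernel-side L1D load misses while @irq runs on @cpu */
	static int open_irq_counter(int irq, int cpu)
	{
		struct perf_event_attr attr;

		memset(&attr, 0, sizeof(attr));
		attr.size = sizeof(attr);
		attr.type = PERF_TYPE_HW_CACHE;
		attr.config = PERF_COUNT_HW_CACHE_L1D |
			      (PERF_COUNT_HW_CACHE_OP_READ << 8) |
			      (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);
		attr.exclude_user = 1;		/* the :k modifier */

		/* in this mode the pid argument is the IRQ number */
		return syscall(__NR_perf_event_open, &attr, irq, cpu,
			       -1, PERF_FLAG_PID_IRQ);
	}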

The series has not been tested thoroughly and is a proof of concept
rather than a decent implementation: group events cannot be loaded,
inappropriate (i.e. software) events are not rejected, only the Intel
and AMD PMUs were tried with 'perf stat', and only the Intel PMU
works with precise events. The perf tool changes are just a hack.

Still, I first want to make sure that the approach taken is sound
and that I did not miss anything vital.

Below is a sample session on a machine with the x2apic in cluster
mode. The IRQ number is passed using the new -I <irq> option (please
disregard the "...process id '8'..." in the output):

# cat /proc/irq/8/smp_affinity_list
0,4,8,12,16,20,24,28,32,36,40,44
# ./tools/perf/perf stat -a -e L1-dcache-load-misses:k sleep 1

 Performance counter stats for 'sleep 1':

           124,078 L1-dcache-load-misses                                       

       1.001464219 seconds time elapsed

# ./tools/perf/perf stat -I 8 -a -e L1-dcache-load-misses:k sleep 1

 Performance counter stats for process id '8':

                 0 L1-dcache-load-misses                                       

       1.001466384 seconds time elapsed

# ./tools/perf/perf stat -I 8 -a -e L1-dcache-load-misses:k hwclock --test
Mon 17 Dec 2012 03:24:05 AM EST  -0.500690 seconds

 Performance counter stats for process id '8':

               317 L1-dcache-load-misses                                       

       0.502153382 seconds time elapsed

# ./tools/perf/perf stat -I 8 -C 0 -e L1-dcache-load-misses:k hwclock --test
Mon 17 Dec 2012 03:30:36 AM EST  -0.078717 seconds

 Performance counter stats for process id '8':

                72 L1-dcache-load-misses                                       

       0.079948468 seconds time elapsed

Alexander Gordeev (6):
  perf/core: IRQ-bound performance events
  perf/x86: IRQ-bound performance events
  perf/x86/AMD PMU: IRQ-bound performance events
  perf/x86/Core PMU: IRQ-bound performance events
  perf/x86/Intel PMU: IRQ-bound performance events
  perf/tool: Hack 'pid' as 'irq' for sys_perf_event_open()

 arch/x86/kernel/cpu/perf_event.c          |   71 ++++++++++++++++++---
 arch/x86/kernel/cpu/perf_event.h          |   19 ++++++
 arch/x86/kernel/cpu/perf_event_amd.c      |    2 +
 arch/x86/kernel/cpu/perf_event_intel.c    |   93 +++++++++++++++++++++++++--
 arch/x86/kernel/cpu/perf_event_intel_ds.c |    5 +-
 arch/x86/kernel/cpu/perf_event_knc.c      |    2 +
 arch/x86/kernel/cpu/perf_event_p4.c       |    2 +
 arch/x86/kernel/cpu/perf_event_p6.c       |    2 +
 include/linux/irq.h                       |    8 ++
 include/linux/irqdesc.h                   |    3 +
 include/linux/perf_event.h                |   16 +++++
 include/uapi/linux/perf_event.h           |    1 +
 kernel/events/core.c                      |   69 +++++++++++++++----
 kernel/irq/Makefile                       |    1 +
 kernel/irq/handle.c                       |    4 +
 kernel/irq/irqdesc.c                      |   14 ++++
 kernel/irq/perf_event.c                   |  100 +++++++++++++++++++++++++++++
 tools/perf/builtin-record.c               |    9 +++
 tools/perf/builtin-stat.c                 |   11 +++
 tools/perf/util/evlist.c                  |    4 +-
 tools/perf/util/evsel.c                   |    3 +
 tools/perf/util/evsel.h                   |    1 +
 tools/perf/util/target.c                  |    4 +
 tools/perf/util/thread_map.c              |   16 +++++
 24 files changed, 426 insertions(+), 34 deletions(-)
 create mode 100644 kernel/irq/perf_event.c

-- 
1.7.7.6


-- 
Regards,
Alexander Gordeev
agordeev@redhat.com


* [PATCH RFC -tip 1/6] perf/core: IRQ-bound performance events
  2012-12-17 11:51 [PATCH RFC -tip 0/6] IRQ-bound performance events Alexander Gordeev
@ 2012-12-17 11:51 ` Alexander Gordeev
  2012-12-17 11:52 ` [PATCH RFC -tip 2/6] perf/x86: " Alexander Gordeev
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: Alexander Gordeev @ 2012-12-17 11:51 UTC (permalink / raw)
  To: linux-kernel; +Cc: Thomas Gleixner, Ingo Molnar, Arnaldo Carvalho de Melo

Make it possible to count performance events while a particular
hardware interrupt handler is running. The generic IRQ flow now
brackets each action handler with perf hooks, and IRQ-bound events
are kept on a new per-CPU list hanging off the IRQ descriptor.
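
A PMU takes part by implementing two new optional struct pmu
callbacks; for PMUs that do not, perf_pmu_register() installs nop
stubs. A hypothetical driver, just to illustrate the contract:

	static void foo_pmu_enable_irq(struct pmu *pmu, int irq)
	{
		/* (re)arm this CPU's counters that are bound to @irq */
	}

	static void foo_pmu_disable_irq(struct pmu *pmu, int irq)
	{
		/* stop this CPU's counters that are bound to @irq */
	}

	static struct pmu foo_pmu = {
		/* ... the usual state and callbacks ... */
		.pmu_enable_irq		= foo_pmu_enable_irq,
		.pmu_disable_irq	= foo_pmu_disable_irq,
	};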

Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
---
 include/linux/irq.h             |    8 +++
 include/linux/irqdesc.h         |    3 +
 include/linux/perf_event.h      |   16 ++++++
 include/uapi/linux/perf_event.h |    1 +
 kernel/events/core.c            |   69 +++++++++++++++++++++------
 kernel/irq/Makefile             |    1 +
 kernel/irq/handle.c             |    4 ++
 kernel/irq/irqdesc.c            |   14 +++++
 kernel/irq/perf_event.c         |  100 +++++++++++++++++++++++++++++++++++++++
 9 files changed, 201 insertions(+), 15 deletions(-)
 create mode 100644 kernel/irq/perf_event.c

diff --git a/include/linux/irq.h b/include/linux/irq.h
index 216b0ba..ef0a703 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -604,6 +604,14 @@ static inline int irq_reserve_irq(unsigned int irq)
 # define irq_reg_readl(addr)		readl(addr)
 #endif
 
+#ifdef CONFIG_PERF_EVENTS
+extern void perf_enable_irq_events(struct irq_desc *desc);
+extern void perf_disable_irq_events(struct irq_desc *desc);
+#else
+static inline void perf_enable_irq_events(struct irq_desc *desc)	{ }
+static inline void perf_disable_irq_events(struct irq_desc *desc)	{ }
+#endif
+
 /**
  * struct irq_chip_regs - register offsets for struct irq_gci
  * @enable:	Enable register offset to reg_base
diff --git a/include/linux/irqdesc.h b/include/linux/irqdesc.h
index 0ba014c..503479e 100644
--- a/include/linux/irqdesc.h
+++ b/include/linux/irqdesc.h
@@ -65,6 +65,9 @@ struct irq_desc {
 #ifdef CONFIG_PROC_FS
 	struct proc_dir_entry	*dir;
 #endif
+#ifdef CONFIG_PERF_EVENTS
+	struct list_head * __percpu event_list;
+#endif
 	struct module		*owner;
 	const char		*name;
 } ____cacheline_internodealigned_in_smp;
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 6bfb2faa..ef8a79b 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -197,6 +197,9 @@ struct pmu {
 	void (*pmu_enable)		(struct pmu *pmu); /* optional */
 	void (*pmu_disable)		(struct pmu *pmu); /* optional */
 
+	void (*pmu_enable_irq)		(struct pmu *pmu, int irq); /* opt. */
+	void (*pmu_disable_irq)		(struct pmu *pmu, int irq); /* opt. */
+
 	/*
 	 * Try and initialize the event for this PMU.
 	 * Should return -ENOENT when the @event doesn't match this PMU.
@@ -320,6 +323,7 @@ struct perf_event {
 	struct list_head		group_entry;
 	struct list_head		event_entry;
 	struct list_head		sibling_list;
+	struct list_head		irq_desc_list;
 	struct hlist_node		hlist_entry;
 	int				nr_siblings;
 	int				group_flags;
@@ -392,6 +396,7 @@ struct perf_event {
 
 	int				oncpu;
 	int				cpu;
+	int				irq;
 
 	struct list_head		owner_entry;
 	struct task_struct		*owner;
@@ -544,6 +549,8 @@ extern void perf_event_delayed_put(struct task_struct *task);
 extern void perf_event_print_debug(void);
 extern void perf_pmu_disable(struct pmu *pmu);
 extern void perf_pmu_enable(struct pmu *pmu);
+extern void perf_pmu_disable_irq(struct pmu *pmu, int irq);
+extern void perf_pmu_enable_irq(struct pmu *pmu, int irq);
 extern int perf_event_task_disable(void);
 extern int perf_event_task_enable(void);
 extern int perf_event_refresh(struct perf_event *event, int refresh);
@@ -624,6 +631,11 @@ static inline int is_software_event(struct perf_event *event)
 	return event->pmu->task_ctx_nr == perf_sw_context;
 }
 
+static inline bool is_interrupt_event(struct perf_event *event)
+{
+	return event->irq >= 0;
+}
+
 extern struct static_key perf_swevent_enabled[PERF_COUNT_SW_MAX];
 
 extern void __perf_sw_event(u32, u64, struct pt_regs *, u64);
@@ -753,6 +765,8 @@ extern void perf_event_enable(struct perf_event *event);
 extern void perf_event_disable(struct perf_event *event);
 extern int __perf_event_disable(void *info);
 extern void perf_event_task_tick(void);
+extern int perf_event_irq_add(struct perf_event *event);
+extern int perf_event_irq_del(struct perf_event *event);
 #else
 static inline void
 perf_event_task_sched_in(struct task_struct *prev,
@@ -792,6 +806,8 @@ static inline void perf_event_enable(struct perf_event *event)		{ }
 static inline void perf_event_disable(struct perf_event *event)		{ }
 static inline int __perf_event_disable(void *info)			{ return -1; }
 static inline void perf_event_task_tick(void)				{ }
+static inline int perf_event_irq_add(struct perf_event *event)	{ return -EINVAL; }
+static inline int perf_event_irq_del(struct perf_event *event)	{ return -EINVAL; }
 #endif
 
 #define perf_output_put(handle, x) perf_output_copy((handle), &(x), sizeof(x))
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index 4f63c05..d4cfacd 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -611,5 +611,6 @@ enum perf_callchain_context {
 #define PERF_FLAG_FD_NO_GROUP		(1U << 0)
 #define PERF_FLAG_FD_OUTPUT		(1U << 1)
 #define PERF_FLAG_PID_CGROUP		(1U << 2) /* pid=cgroup id, per-cpu mode only */
+#define PERF_FLAG_PID_IRQ		(1U << 3) /* pid=irq number */
 
 #endif /* _UAPI_LINUX_PERF_EVENT_H */
diff --git a/kernel/events/core.c b/kernel/events/core.c
index dbccf83..ca8f489 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -116,8 +116,9 @@ static int cpu_function_call(int cpu, int (*func) (void *info), void *info)
 }
 
 #define PERF_FLAG_ALL (PERF_FLAG_FD_NO_GROUP |\
-		       PERF_FLAG_FD_OUTPUT  |\
-		       PERF_FLAG_PID_CGROUP)
+		       PERF_FLAG_FD_OUTPUT |\
+		       PERF_FLAG_PID_CGROUP |\
+		       PERF_FLAG_PID_IRQ)
 
 /*
  * branch priv levels that need permission checks
@@ -641,6 +642,20 @@ void perf_pmu_enable(struct pmu *pmu)
 		pmu->pmu_enable(pmu);
 }
 
+void perf_pmu_disable_irq(struct pmu *pmu, int irq)
+{
+	int *count = this_cpu_ptr(pmu->pmu_disable_count);
+	if (!(*count)++)
+		pmu->pmu_disable_irq(pmu, irq);
+}
+
+void perf_pmu_enable_irq(struct pmu *pmu, int irq)
+{
+	int *count = this_cpu_ptr(pmu->pmu_disable_count);
+	if (!--(*count))
+		pmu->pmu_enable_irq(pmu, irq);
+}
+
 static DEFINE_PER_CPU(struct list_head, rotation_list);
 
 /*
@@ -5804,6 +5819,10 @@ static void perf_pmu_nop_void(struct pmu *pmu)
 {
 }
 
+static void perf_pmu_int_nop_void(struct pmu *pmu, int irq)
+{
+}
+
 static int perf_pmu_nop_int(struct pmu *pmu)
 {
 	return 0;
@@ -6020,6 +6039,11 @@ got_cpu_context:
 		pmu->pmu_disable = perf_pmu_nop_void;
 	}
 
+	if (!pmu->pmu_enable_irq) {
+		pmu->pmu_enable_irq  = perf_pmu_int_nop_void;
+		pmu->pmu_disable_irq = perf_pmu_int_nop_void;
+	}
+
 	if (!pmu->event_idx)
 		pmu->event_idx = perf_event_idx_default;
 
@@ -6105,7 +6129,7 @@ unlock:
  * Allocate and initialize a event structure
  */
 static struct perf_event *
-perf_event_alloc(struct perf_event_attr *attr, int cpu,
+perf_event_alloc(struct perf_event_attr *attr, int cpu, int irq,
 		 struct task_struct *task,
 		 struct perf_event *group_leader,
 		 struct perf_event *parent_event,
@@ -6118,7 +6142,7 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
 	long err;
 
 	if ((unsigned)cpu >= nr_cpu_ids) {
-		if (!task || cpu != -1)
+		if (!task || cpu != -1 || irq < 0)
 			return ERR_PTR(-EINVAL);
 	}
 
@@ -6148,6 +6172,7 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
 
 	atomic_long_set(&event->refcount, 1);
 	event->cpu		= cpu;
+	event->irq		= irq;
 	event->attr		= *attr;
 	event->group_leader	= group_leader;
 	event->pmu		= NULL;
@@ -6442,6 +6467,7 @@ SYSCALL_DEFINE5(perf_event_open,
 	struct fd group = {NULL, 0};
 	struct task_struct *task = NULL;
 	struct pmu *pmu;
+	int irq = -1;
 	int event_fd;
 	int move_group = 0;
 	int err;
@@ -6450,6 +6476,27 @@ SYSCALL_DEFINE5(perf_event_open,
 	if (flags & ~PERF_FLAG_ALL)
 		return -EINVAL;
 
+	if ((flags & (PERF_FLAG_PID_CGROUP | PERF_FLAG_PID_IRQ)) ==
+	    (PERF_FLAG_PID_CGROUP | PERF_FLAG_PID_IRQ))
+		return -EINVAL;
+
+	/*
+	 * In irq mode, the pid argument is used to pass irq number.
+	 */
+	if (flags & PERF_FLAG_PID_IRQ) {
+		irq = pid;
+		pid = -1;
+	}
+
+	/*
+	 * In cgroup mode, the pid argument is used to pass the fd
+	 * opened to the cgroup directory in cgroupfs. The cpu argument
+	 * designates the cpu on which to monitor threads from that
+	 * cgroup.
+	 */
+	if ((flags & PERF_FLAG_PID_CGROUP) && (pid == -1 || cpu == -1))
+		return -EINVAL;
+
 	err = perf_copy_attr(attr_uptr, &attr);
 	if (err)
 		return err;
@@ -6464,15 +6511,6 @@ SYSCALL_DEFINE5(perf_event_open,
 			return -EINVAL;
 	}
 
-	/*
-	 * In cgroup mode, the pid argument is used to pass the fd
-	 * opened to the cgroup directory in cgroupfs. The cpu argument
-	 * designates the cpu on which to monitor threads from that
-	 * cgroup.
-	 */
-	if ((flags & PERF_FLAG_PID_CGROUP) && (pid == -1 || cpu == -1))
-		return -EINVAL;
-
 	event_fd = get_unused_fd();
 	if (event_fd < 0)
 		return event_fd;
@@ -6498,7 +6536,7 @@ SYSCALL_DEFINE5(perf_event_open,
 
 	get_online_cpus();
 
-	event = perf_event_alloc(&attr, cpu, task, group_leader, NULL,
+	event = perf_event_alloc(&attr, cpu, irq, task, group_leader, NULL,
 				 NULL, NULL);
 	if (IS_ERR(event)) {
 		err = PTR_ERR(event);
@@ -6698,7 +6736,7 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
 	 * Get the target context (task or percpu):
 	 */
 
-	event = perf_event_alloc(attr, cpu, task, NULL, NULL,
+	event = perf_event_alloc(attr, cpu, -1, task, NULL, NULL,
 				 overflow_handler, context);
 	if (IS_ERR(event)) {
 		err = PTR_ERR(event);
@@ -7012,6 +7050,7 @@ inherit_event(struct perf_event *parent_event,
 
 	child_event = perf_event_alloc(&parent_event->attr,
 					   parent_event->cpu,
+					   parent_event->irq,
 					   child,
 					   group_leader, parent_event,
 				           NULL, NULL);
diff --git a/kernel/irq/Makefile b/kernel/irq/Makefile
index fff1738..12c81e8 100644
--- a/kernel/irq/Makefile
+++ b/kernel/irq/Makefile
@@ -6,3 +6,4 @@ obj-$(CONFIG_IRQ_DOMAIN) += irqdomain.o
 obj-$(CONFIG_PROC_FS) += proc.o
 obj-$(CONFIG_GENERIC_PENDING_IRQ) += migration.o
 obj-$(CONFIG_PM_SLEEP) += pm.o
+obj-$(CONFIG_PERF_EVENTS) += perf_event.o
diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c
index 131ca17..7542012 100644
--- a/kernel/irq/handle.c
+++ b/kernel/irq/handle.c
@@ -139,7 +139,11 @@ handle_irq_event_percpu(struct irq_desc *desc, struct irqaction *action)
 		irqreturn_t res;
 
 		trace_irq_handler_entry(irq, action);
+		perf_enable_irq_events(desc);
+
 		res = action->handler(irq, action->dev_id);
+
+		perf_disable_irq_events(desc);
 		trace_irq_handler_exit(irq, action, res);
 
 		if (WARN_ONCE(!irqs_disabled(),"irq %u handler %pF enabled interrupts\n",
diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
index 192a302..2a10214 100644
--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
@@ -131,6 +131,14 @@ static void free_masks(struct irq_desc *desc)
 static inline void free_masks(struct irq_desc *desc) { }
 #endif
 
+#ifdef CONFIG_PERF_EVENTS
+extern int alloc_perf_events(struct irq_desc *desc);
+extern void free_perf_events(struct irq_desc *desc);
+#else
+static inline int alloc_perf_events(struct irq_desc *desc) { return 0; }
+static inline void free_perf_events(struct irq_desc *desc) { }
+#endif
+
 static struct irq_desc *alloc_desc(int irq, int node, struct module *owner)
 {
 	struct irq_desc *desc;
@@ -147,6 +155,9 @@ static struct irq_desc *alloc_desc(int irq, int node, struct module *owner)
 	if (alloc_masks(desc, gfp, node))
 		goto err_kstat;
 
+	if (alloc_perf_events(desc))
+		goto err_masks;
+
 	raw_spin_lock_init(&desc->lock);
 	lockdep_set_class(&desc->lock, &irq_desc_lock_class);
 
@@ -154,6 +165,8 @@ static struct irq_desc *alloc_desc(int irq, int node, struct module *owner)
 
 	return desc;
 
+err_masks:
+	free_masks(desc);
 err_kstat:
 	free_percpu(desc->kstat_irqs);
 err_desc:
@@ -171,6 +184,7 @@ static void free_desc(unsigned int irq)
 	delete_irq_desc(irq);
 	mutex_unlock(&sparse_irq_lock);
 
+	free_perf_events(desc);
 	free_masks(desc);
 	free_percpu(desc->kstat_irqs);
 	kfree(desc);
diff --git a/kernel/irq/perf_event.c b/kernel/irq/perf_event.c
new file mode 100644
index 0000000..007a5bb
--- /dev/null
+++ b/kernel/irq/perf_event.c
@@ -0,0 +1,100 @@
+/*
+ * linux/kernel/irq/perf_event.c
+ *
+ * Copyright (C) 2012 Alexander Gordeev
+ *
+ * This file contains the code for per-IRQ performance counters
+ */
+
+#include <linux/irq.h>
+#include <linux/cpumask.h>
+#include <linux/perf_event.h>
+
+int alloc_perf_events(struct irq_desc *desc)
+{
+	struct list_head __percpu *head;
+	int cpu;
+
+	desc->event_list = alloc_percpu(struct list_head);
+	if (!desc->event_list)
+		return -ENOMEM;
+
+	for_each_possible_cpu(cpu) {
+		head = per_cpu_ptr(desc->event_list, cpu);
+		INIT_LIST_HEAD(head);
+	}
+
+	return 0;
+}
+
+void free_perf_events(struct irq_desc *desc)
+{
+	struct list_head __percpu *head;
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		head = per_cpu_ptr(desc->event_list, cpu);
+		while (!list_empty(head))
+			list_del(head->next);
+	}
+
+	free_percpu(desc->event_list);
+}
+
+int perf_event_irq_add(struct perf_event *event)
+{
+	struct irq_desc *desc = irq_to_desc(event->irq);
+	struct list_head __percpu *head;
+
+	WARN_ON(event->cpu != smp_processor_id());
+
+	if (!desc)
+		return -ENOENT;
+
+	head = per_cpu_ptr(desc->event_list, event->cpu);
+
+	raw_spin_lock(&desc->lock);
+	list_add(&event->irq_desc_list, head);
+	raw_spin_unlock(&desc->lock);
+
+	return 0;
+}
+
+int perf_event_irq_del(struct perf_event *event)
+{
+	struct irq_desc *desc = irq_to_desc(event->irq);
+
+	if (!desc)
+		return -ENOENT;
+
+	WARN_ON(event->cpu != smp_processor_id());
+
+	raw_spin_lock(&desc->lock);
+	list_del(&event->irq_desc_list);
+	raw_spin_unlock(&desc->lock);
+
+	return 0;
+}
+
+static void __enable_irq_events(struct irq_desc *desc, bool enable)
+{
+	struct perf_event *event;
+	struct list_head __percpu *head = this_cpu_ptr(desc->event_list);
+
+	list_for_each_entry(event, head, irq_desc_list) {
+		struct pmu *pmu = event->pmu;
+		void (*func)(struct pmu *, int) =
+			enable ? pmu->pmu_enable_irq : pmu->pmu_disable_irq;
+		func(pmu, desc->irq_data.irq);
+	}
+}
+
+void perf_enable_irq_events(struct irq_desc *desc)
+{
+	__enable_irq_events(desc, true);
+}
+
+void perf_disable_irq_events(struct irq_desc *desc)
+{
+	__enable_irq_events(desc, false);
+}
-- 
1.7.7.6


-- 
Regards,
Alexander Gordeev
agordeev@redhat.com


* [PATCH RFC -tip 2/6] perf/x86: IRQ-bound performance events
  2012-12-17 11:51 [PATCH RFC -tip 0/6] IRQ-bound performance events Alexander Gordeev
  2012-12-17 11:51 ` [PATCH RFC -tip 1/6] perf/core: " Alexander Gordeev
@ 2012-12-17 11:52 ` Alexander Gordeev
  2012-12-17 11:52 ` [PATCH RFC -tip 3/6] perf/x86/AMD PMU: " Alexander Gordeev
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: Alexander Gordeev @ 2012-12-17 11:52 UTC (permalink / raw)
  To: linux-kernel; +Cc: Thomas Gleixner, Ingo Molnar, Arnaldo Carvalho de Melo

Generic changes to the x86 performance framework that enable the
per-vendor implementations of IRQ-bound performance events in the
follow-up patches: struct x86_pmu grows disable_irq/enable_irq
operations, and cpu_hw_events gains an actirq_mask tracking the
counters that back IRQ-bound events.
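
The gist of the scheduling change, paraphrased from the patch below:
when an IRQ-bound event is started it is recorded in actirq_mask
rather than active_mask, so the existing enable_all()/disable_all()
paths leave it untouched and only the new per-IRQ operations arm it:

	/* in x86_pmu_start() */
	if (is_interrupt_event(event)) {	/* event->irq >= 0 */
		__set_bit(idx, cpuc->actirq_mask);
		perf_event_irq_add(event);	/* hook onto the irq_desc */
	} else {
		__set_bit(idx, cpuc->active_mask);
	}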

Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
---
 arch/x86/kernel/cpu/perf_event.c       |   33 +++++++++++++++++++++++++++++--
 arch/x86/kernel/cpu/perf_event.h       |    5 ++++
 arch/x86/kernel/cpu/perf_event_amd.c   |    2 +
 arch/x86/kernel/cpu/perf_event_intel.c |    4 +++
 arch/x86/kernel/cpu/perf_event_knc.c   |    2 +
 arch/x86/kernel/cpu/perf_event_p4.c    |    2 +
 arch/x86/kernel/cpu/perf_event_p6.c    |    2 +
 7 files changed, 47 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
index 4428fd1..8ab32d2 100644
--- a/arch/x86/kernel/cpu/perf_event.c
+++ b/arch/x86/kernel/cpu/perf_event.c
@@ -525,6 +525,11 @@ static void x86_pmu_disable(struct pmu *pmu)
 	x86_pmu.disable_all();
 }
 
+static void x86_pmu__disable_irq(struct pmu *pmu, int irq)
+{
+	x86_pmu.disable_irq(irq);
+}
+
 void x86_pmu_enable_all(int added)
 {
 	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
@@ -540,6 +545,10 @@ void x86_pmu_enable_all(int added)
 	}
 }
 
+void x86_pmu_enable_irq_nop_int(int irq)
+{
+}
+
 static struct pmu pmu;
 
 static inline int is_x86_event(struct perf_event *event)
@@ -920,6 +929,11 @@ static void x86_pmu_enable(struct pmu *pmu)
 	x86_pmu.enable_all(added);
 }
 
+static void x86_pmu__enable_irq(struct pmu *pmu, int irq)
+{
+	x86_pmu.enable_irq(irq);
+}
+
 static DEFINE_PER_CPU(u64 [X86_PMC_IDX_MAX], pmc_prev_left);
 
 /*
@@ -1065,7 +1079,12 @@ static void x86_pmu_start(struct perf_event *event, int flags)
 	event->hw.state = 0;
 
 	cpuc->events[idx] = event;
-	__set_bit(idx, cpuc->active_mask);
+	if (is_interrupt_event(event)) {
+		__set_bit(idx, cpuc->actirq_mask);
+		perf_event_irq_add(event);
+	} else {
+		__set_bit(idx, cpuc->active_mask);
+	}
 	__set_bit(idx, cpuc->running);
 	x86_pmu.enable(event);
 	perf_event_update_userpage(event);
@@ -1102,6 +1121,7 @@ void perf_event_print_debug(void)
 		pr_info("CPU#%d: pebs:       %016llx\n", cpu, pebs);
 	}
 	pr_info("CPU#%d: active:     %016llx\n", cpu, *(u64 *)cpuc->active_mask);
+	pr_info("CPU#%d: actirq:     %016llx\n", cpu, *(u64 *)cpuc->actirq_mask);
 
 	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
 		rdmsrl(x86_pmu_config_addr(idx), pmc_ctrl);
@@ -1130,8 +1150,11 @@ void x86_pmu_stop(struct perf_event *event, int flags)
 	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
 	struct hw_perf_event *hwc = &event->hw;
 
-	if (__test_and_clear_bit(hwc->idx, cpuc->active_mask)) {
+	if (__test_and_clear_bit(hwc->idx, cpuc->active_mask) ||
+	    __test_and_clear_bit(hwc->idx, cpuc->actirq_mask)) {
 		x86_pmu.disable(event);
+		if (unlikely(is_interrupt_event(event)))
+			perf_event_irq_del(event);
 		cpuc->events[hwc->idx] = NULL;
 		WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
 		hwc->state |= PERF_HES_STOPPED;
@@ -1199,7 +1222,8 @@ int x86_pmu_handle_irq(struct pt_regs *regs)
 	apic_write(APIC_LVTPC, APIC_DM_NMI);
 
 	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
-		if (!test_bit(idx, cpuc->active_mask)) {
+		if (!test_bit(idx, cpuc->active_mask) &&
+		    !test_bit(idx, cpuc->actirq_mask)) {
 			/*
 			 * Though we deactivated the counter some cpus
 			 * might still deliver spurious interrupts still
@@ -1792,6 +1816,9 @@ static struct pmu pmu = {
 	.pmu_enable		= x86_pmu_enable,
 	.pmu_disable		= x86_pmu_disable,
 
+	.pmu_enable_irq		= x86_pmu__enable_irq,
+	.pmu_disable_irq	= x86_pmu__disable_irq,
+
 	.attr_groups		= x86_pmu_attr_groups,
 
 	.event_init		= x86_pmu_event_init,
diff --git a/arch/x86/kernel/cpu/perf_event.h b/arch/x86/kernel/cpu/perf_event.h
index 115c1ea..ab56c05 100644
--- a/arch/x86/kernel/cpu/perf_event.h
+++ b/arch/x86/kernel/cpu/perf_event.h
@@ -118,6 +118,7 @@ struct cpu_hw_events {
 	 */
 	struct perf_event	*events[X86_PMC_IDX_MAX]; /* in counter order */
 	unsigned long		active_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
+	unsigned long		actirq_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
 	unsigned long		running[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
 	int			enabled;
 
@@ -319,6 +320,8 @@ struct x86_pmu {
 	int		(*handle_irq)(struct pt_regs *);
 	void		(*disable_all)(void);
 	void		(*enable_all)(int added);
+	void		(*disable_irq)(int irq);
+	void		(*enable_irq)(int irq);
 	void		(*enable)(struct perf_event *);
 	void		(*disable)(struct perf_event *);
 	int		(*hw_config)(struct perf_event *event);
@@ -488,6 +491,8 @@ static inline void __x86_pmu_enable_event(struct hw_perf_event *hwc,
 
 void x86_pmu_enable_all(int added);
 
+void x86_pmu_enable_irq_nop_int(int irq);
+
 int perf_assign_events(struct event_constraint **constraints, int n,
 			int wmin, int wmax, int *assign);
 int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign);
diff --git a/arch/x86/kernel/cpu/perf_event_amd.c b/arch/x86/kernel/cpu/perf_event_amd.c
index c93bc4e..d42845f 100644
--- a/arch/x86/kernel/cpu/perf_event_amd.c
+++ b/arch/x86/kernel/cpu/perf_event_amd.c
@@ -581,6 +581,8 @@ static __initconst const struct x86_pmu amd_pmu = {
 	.handle_irq		= x86_pmu_handle_irq,
 	.disable_all		= x86_pmu_disable_all,
 	.enable_all		= x86_pmu_enable_all,
+	.disable_irq		= x86_pmu_enable_irq_nop_int,
+	.enable_irq		= x86_pmu_enable_irq_nop_int,
 	.enable			= x86_pmu_enable_event,
 	.disable		= x86_pmu_disable_event,
 	.hw_config		= amd_pmu_hw_config,
diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index 93b9e11..61e6db4 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -1615,6 +1615,8 @@ static __initconst const struct x86_pmu core_pmu = {
 	.handle_irq		= x86_pmu_handle_irq,
 	.disable_all		= x86_pmu_disable_all,
 	.enable_all		= core_pmu_enable_all,
+	.disable_irq		= x86_pmu_enable_irq_nop_int,
+	.enable_irq		= x86_pmu_enable_irq_nop_int,
 	.enable			= core_pmu_enable_event,
 	.disable		= x86_pmu_disable_event,
 	.hw_config		= x86_pmu_hw_config,
@@ -1754,6 +1756,8 @@ static __initconst const struct x86_pmu intel_pmu = {
 	.handle_irq		= intel_pmu_handle_irq,
 	.disable_all		= intel_pmu_disable_all,
 	.enable_all		= intel_pmu_enable_all,
+	.disable_irq		= x86_pmu_enable_irq_nop_int,
+	.enable_irq		= x86_pmu_enable_irq_nop_int,
 	.enable			= intel_pmu_enable_event,
 	.disable		= intel_pmu_disable_event,
 	.hw_config		= intel_pmu_hw_config,
diff --git a/arch/x86/kernel/cpu/perf_event_knc.c b/arch/x86/kernel/cpu/perf_event_knc.c
index 4b7731b..fce8f30 100644
--- a/arch/x86/kernel/cpu/perf_event_knc.c
+++ b/arch/x86/kernel/cpu/perf_event_knc.c
@@ -289,6 +289,8 @@ static __initconst struct x86_pmu knc_pmu = {
 	.handle_irq		= knc_pmu_handle_irq,
 	.disable_all		= knc_pmu_disable_all,
 	.enable_all		= knc_pmu_enable_all,
+	.disable_irq		= x86_pmu_enable_irq_nop_int,
+	.enable_irq		= x86_pmu_enable_irq_nop_int,
 	.enable			= knc_pmu_enable_event,
 	.disable		= knc_pmu_disable_event,
 	.hw_config		= x86_pmu_hw_config,
diff --git a/arch/x86/kernel/cpu/perf_event_p4.c b/arch/x86/kernel/cpu/perf_event_p4.c
index 92c7e39..e8afd84 100644
--- a/arch/x86/kernel/cpu/perf_event_p4.c
+++ b/arch/x86/kernel/cpu/perf_event_p4.c
@@ -1287,6 +1287,8 @@ static __initconst const struct x86_pmu p4_pmu = {
 	.handle_irq		= p4_pmu_handle_irq,
 	.disable_all		= p4_pmu_disable_all,
 	.enable_all		= p4_pmu_enable_all,
+	.disable_irq		= x86_pmu_enable_irq_nop_int,
+	.enable_irq		= x86_pmu_enable_irq_nop_int,
 	.enable			= p4_pmu_enable_event,
 	.disable		= p4_pmu_disable_event,
 	.eventsel		= MSR_P4_BPU_CCCR0,
diff --git a/arch/x86/kernel/cpu/perf_event_p6.c b/arch/x86/kernel/cpu/perf_event_p6.c
index f2af39f..07fbb2b 100644
--- a/arch/x86/kernel/cpu/perf_event_p6.c
+++ b/arch/x86/kernel/cpu/perf_event_p6.c
@@ -202,6 +202,8 @@ static __initconst const struct x86_pmu p6_pmu = {
 	.handle_irq		= x86_pmu_handle_irq,
 	.disable_all		= p6_pmu_disable_all,
 	.enable_all		= p6_pmu_enable_all,
+	.disable_irq		= x86_pmu_enable_irq_nop_int,
+	.enable_irq		= x86_pmu_enable_irq_nop_int,
 	.enable			= p6_pmu_enable_event,
 	.disable		= p6_pmu_disable_event,
 	.hw_config		= x86_pmu_hw_config,
-- 
1.7.7.6


-- 
Regards,
Alexander Gordeev
agordeev@redhat.com


* [PATCH RFC -tip 3/6] perf/x86/AMD PMU: IRQ-bound performance events
  2012-12-17 11:51 [PATCH RFC -tip 0/6] IRQ-bound performance events Alexander Gordeev
  2012-12-17 11:51 ` [PATCH RFC -tip 1/6] perf/core: " Alexander Gordeev
  2012-12-17 11:52 ` [PATCH RFC -tip 2/6] perf/x86: " Alexander Gordeev
@ 2012-12-17 11:52 ` Alexander Gordeev
  2012-12-17 11:53 ` [PATCH RFC -tip 4/6] perf/x86/Core " Alexander Gordeev
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: Alexander Gordeev @ 2012-12-17 11:52 UTC (permalink / raw)
  To: linux-kernel; +Cc: Thomas Gleixner, Ingo Molnar, Arnaldo Carvalho de Melo

Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
---
 arch/x86/kernel/cpu/perf_event.c     |   38 ++++++++++++++++++++++++++++-----
 arch/x86/kernel/cpu/perf_event.h     |   14 ++++++++++++
 arch/x86/kernel/cpu/perf_event_amd.c |    4 +-
 3 files changed, 48 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
index 8ab32d2..aa69997 100644
--- a/arch/x86/kernel/cpu/perf_event.c
+++ b/arch/x86/kernel/cpu/perf_event.c
@@ -496,15 +496,23 @@ void x86_pmu_disable_all(void)
 	int idx;
 
 	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
-		u64 val;
-
 		if (!test_bit(idx, cpuc->active_mask))
 			continue;
-		rdmsrl(x86_pmu_config_addr(idx), val);
-		if (!(val & ARCH_PERFMON_EVENTSEL_ENABLE))
+		__x86_pmu_disable_event(idx, ARCH_PERFMON_EVENTSEL_ENABLE);
+	}
+}
+
+void x86_pmu_disable_irq(int irq)
+{
+	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
+	int idx;
+
+	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
+		if (!test_bit(idx, cpuc->actirq_mask))
+			continue;
+		if (cpuc->events[idx]->irq != irq)
 			continue;
-		val &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
-		wrmsrl(x86_pmu_config_addr(idx), val);
+		__x86_pmu_disable_event(idx, ARCH_PERFMON_EVENTSEL_ENABLE);
 	}
 }
 
@@ -549,6 +557,24 @@ void x86_pmu_enable_irq_nop_int(int irq)
 {
 }
 
+void x86_pmu_enable_irq(int irq)
+{
+	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
+	int idx;
+
+	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
+		struct perf_event *event = cpuc->events[idx];
+
+		if (!test_bit(idx, cpuc->actirq_mask))
+			continue;
+		if (event->irq != irq)
+			continue;
+
+		__x86_pmu_enable_event(&event->hw,
+				       ARCH_PERFMON_EVENTSEL_ENABLE);
+	}
+}
+
 static struct pmu pmu;
 
 static inline int is_x86_event(struct perf_event *event)
diff --git a/arch/x86/kernel/cpu/perf_event.h b/arch/x86/kernel/cpu/perf_event.h
index ab56c05..e7d47a0 100644
--- a/arch/x86/kernel/cpu/perf_event.h
+++ b/arch/x86/kernel/cpu/perf_event.h
@@ -479,6 +479,19 @@ int x86_pmu_hw_config(struct perf_event *event);
 
 void x86_pmu_disable_all(void);
 
+void x86_pmu_disable_irq(int irq);
+
+static inline void __x86_pmu_disable_event(int idx, u64 enable_mask)
+{
+	u64 val;
+
+	rdmsrl(x86_pmu_config_addr(idx), val);
+	if (val & enable_mask) {
+		val &= ~enable_mask;
+		wrmsrl(x86_pmu_config_addr(idx), val);
+	}
+}
+
 static inline void __x86_pmu_enable_event(struct hw_perf_event *hwc,
 					  u64 enable_mask)
 {
@@ -491,6 +504,7 @@ static inline void __x86_pmu_enable_event(struct hw_perf_event *hwc,
 
 void x86_pmu_enable_all(int added);
 
+void x86_pmu_enable_irq(int irq);
 void x86_pmu_enable_irq_nop_int(int irq);
 
 int perf_assign_events(struct event_constraint **constraints, int n,
diff --git a/arch/x86/kernel/cpu/perf_event_amd.c b/arch/x86/kernel/cpu/perf_event_amd.c
index d42845f..2754880 100644
--- a/arch/x86/kernel/cpu/perf_event_amd.c
+++ b/arch/x86/kernel/cpu/perf_event_amd.c
@@ -581,8 +581,8 @@ static __initconst const struct x86_pmu amd_pmu = {
 	.handle_irq		= x86_pmu_handle_irq,
 	.disable_all		= x86_pmu_disable_all,
 	.enable_all		= x86_pmu_enable_all,
-	.disable_irq		= x86_pmu_enable_irq_nop_int,
-	.enable_irq		= x86_pmu_enable_irq_nop_int,
+	.disable_irq		= x86_pmu_disable_irq,
+	.enable_irq		= x86_pmu_enable_irq,
 	.enable			= x86_pmu_enable_event,
 	.disable		= x86_pmu_disable_event,
 	.hw_config		= amd_pmu_hw_config,
-- 
1.7.7.6


-- 
Regards,
Alexander Gordeev
agordeev@redhat.com


* [PATCH RFC -tip 4/6] perf/x86/Core PMU: IRQ-bound performance events
  2012-12-17 11:51 [PATCH RFC -tip 0/6] IRQ-bound performance events Alexander Gordeev
                   ` (2 preceding siblings ...)
  2012-12-17 11:52 ` [PATCH RFC -tip 3/6] perf/x86/AMD PMU: " Alexander Gordeev
@ 2012-12-17 11:53 ` Alexander Gordeev
  2012-12-17 11:53 ` [PATCH RFC -tip 5/6] perf/x86/Intel " Alexander Gordeev
  2012-12-17 11:54 ` [PATCH RFC -tip 6/6] perf/tool: Hack 'pid' as 'irq' for sys_perf_event_open() Alexander Gordeev
  5 siblings, 0 replies; 8+ messages in thread
From: Alexander Gordeev @ 2012-12-17 11:53 UTC (permalink / raw)
  To: linux-kernel; +Cc: Thomas Gleixner, Ingo Molnar, Arnaldo Carvalho de Melo

Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
---
 arch/x86/kernel/cpu/perf_event_intel.c |   23 +++++++++++++++++++++--
 1 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index 61e6db4..71086c4 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -1585,6 +1585,25 @@ static void core_pmu_enable_all(int added)
 	}
 }
 
+void core_pmu_enable_irq(int irq)
+{
+	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
+	int idx;
+
+	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
+		struct perf_event *event = cpuc->events[idx];
+
+		if (!test_bit(idx, cpuc->actirq_mask) ||
+				cpuc->events[idx]->attr.exclude_host)
+			continue;
+		if (event->irq != irq)
+			continue;
+
+		__x86_pmu_enable_event(&event->hw,
+				       ARCH_PERFMON_EVENTSEL_ENABLE);
+	}
+}
+
 PMU_FORMAT_ATTR(event,	"config:0-7"	);
 PMU_FORMAT_ATTR(umask,	"config:8-15"	);
 PMU_FORMAT_ATTR(edge,	"config:18"	);
@@ -1615,8 +1634,8 @@ static __initconst const struct x86_pmu core_pmu = {
 	.handle_irq		= x86_pmu_handle_irq,
 	.disable_all		= x86_pmu_disable_all,
 	.enable_all		= core_pmu_enable_all,
-	.disable_irq		= x86_pmu_enable_irq_nop_int,
-	.enable_irq		= x86_pmu_enable_irq_nop_int,
+	.disable_irq		= x86_pmu_disable_irq,
+	.enable_irq		= core_pmu_enable_irq,
 	.enable			= core_pmu_enable_event,
 	.disable		= x86_pmu_disable_event,
 	.hw_config		= x86_pmu_hw_config,
-- 
1.7.7.6


-- 
Regards,
Alexander Gordeev
agordeev@redhat.com


* [PATCH RFC -tip 5/6] perf/x86/Intel PMU: IRQ-bound performance events
  2012-12-17 11:51 [PATCH RFC -tip 0/6] IRQ-bound performance events Alexander Gordeev
                   ` (3 preceding siblings ...)
  2012-12-17 11:53 ` [PATCH RFC -tip 4/6] perf/x86/Core " Alexander Gordeev
@ 2012-12-17 11:53 ` Alexander Gordeev
  2012-12-17 11:54 ` [PATCH RFC -tip 6/6] perf/tool: Hack 'pid' as 'irq' for sys_perf_event_open() Alexander Gordeev
  5 siblings, 0 replies; 8+ messages in thread
From: Alexander Gordeev @ 2012-12-17 11:53 UTC (permalink / raw)
  To: linux-kernel; +Cc: Thomas Gleixner, Ingo Molnar, Arnaldo Carvalho de Melo

Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
---
 arch/x86/kernel/cpu/perf_event_intel.c    |   74 +++++++++++++++++++++++++----
 arch/x86/kernel/cpu/perf_event_intel_ds.c |    5 +-
 2 files changed, 68 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index 71086c4..460682a 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -821,6 +821,24 @@ static inline bool intel_pmu_needs_lbr_smpl(struct perf_event *event)
 	return false;
 }
 
+u64 __get_intel_ctrl_irq_mask(struct cpu_hw_events *cpuc, int irq)
+{
+	int idx;
+	u64 ret = 0;
+
+	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
+		struct perf_event *event = cpuc->events[idx];
+
+		if (!test_bit(idx, cpuc->actirq_mask))
+			continue;
+
+		if ((event->irq == irq) || (irq < 0))
+			ret |= (1ull << event->hw.idx);
+	}
+
+	return ret;
+}
+
 static void intel_pmu_disable_all(void)
 {
 	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
@@ -834,14 +852,14 @@ static void intel_pmu_disable_all(void)
 	intel_pmu_lbr_disable_all();
 }
 
-static void intel_pmu_enable_all(int added)
+static void __intel_pmu_enable(u64 control)
 {
 	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
 
 	intel_pmu_pebs_enable_all();
 	intel_pmu_lbr_enable_all();
-	wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL,
-			x86_pmu.intel_ctrl & ~cpuc->intel_ctrl_guest_mask);
+
+	wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, control);
 
 	if (test_bit(INTEL_PMC_IDX_FIXED_BTS, cpuc->active_mask)) {
 		struct perf_event *event =
@@ -854,6 +872,33 @@ static void intel_pmu_enable_all(int added)
 	}
 }
 
+static void intel_pmu_enable_all(int added)
+{
+	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
+	u64 irq_mask = __get_intel_ctrl_irq_mask(cpuc, -1);
+
+	__intel_pmu_enable(x86_pmu.intel_ctrl &
+			 ~(cpuc->intel_ctrl_guest_mask | irq_mask));
+}
+
+static void intel_pmu_disable_irq(int irq)
+{
+	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
+	u64 irq_mask = __get_intel_ctrl_irq_mask(cpuc, irq);
+
+	wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL,
+		x86_pmu.intel_ctrl & ~(cpuc->intel_ctrl_guest_mask | irq_mask));
+}
+
+static void intel_pmu_enable_irq(int irq)
+{
+	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
+	u64 irq_mask = __get_intel_ctrl_irq_mask(cpuc, irq);
+
+	wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL,
+		(x86_pmu.intel_ctrl & ~cpuc->intel_ctrl_guest_mask) | irq_mask);
+}
+
 /*
  * Workaround for:
  *   Intel Errata AAK100 (model 26)
@@ -935,6 +980,15 @@ static void intel_pmu_nhm_enable_all(int added)
 	intel_pmu_enable_all(added);
 }
 
+static inline u64 intel_pmu_get_control(void)
+{
+	u64 control;
+
+	rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, control);
+
+	return control;
+}
+
 static inline u64 intel_pmu_get_status(void)
 {
 	u64 status;
@@ -1104,7 +1158,7 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)
 	struct perf_sample_data data;
 	struct cpu_hw_events *cpuc;
 	int bit, loops;
-	u64 status;
+	u64 control, status;
 	int handled;
 
 	cpuc = &__get_cpu_var(cpu_hw_events);
@@ -1119,11 +1173,12 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)
 	 */
 	apic_write(APIC_LVTPC, APIC_DM_NMI);
 
+	control = intel_pmu_get_control();
 	intel_pmu_disable_all();
 	handled = intel_pmu_drain_bts_buffer();
 	status = intel_pmu_get_status();
 	if (!status) {
-		intel_pmu_enable_all(0);
+		__intel_pmu_enable(control);
 		return handled;
 	}
 
@@ -1154,7 +1209,8 @@ again:
 
 		handled++;
 
-		if (!test_bit(bit, cpuc->active_mask))
+		if (!test_bit(bit, cpuc->active_mask) &&
+		    !test_bit(bit, cpuc->actirq_mask))
 			continue;
 
 		if (!intel_pmu_save_and_restart(event))
@@ -1177,7 +1233,7 @@ again:
 		goto again;
 
 done:
-	intel_pmu_enable_all(0);
+	__intel_pmu_enable(control);
 	return handled;
 }
 
@@ -1775,8 +1831,8 @@ static __initconst const struct x86_pmu intel_pmu = {
 	.handle_irq		= intel_pmu_handle_irq,
 	.disable_all		= intel_pmu_disable_all,
 	.enable_all		= intel_pmu_enable_all,
-	.disable_irq		= x86_pmu_enable_irq_nop_int,
-	.enable_irq		= x86_pmu_enable_irq_nop_int,
+	.disable_irq		= intel_pmu_disable_irq,
+	.enable_irq		= intel_pmu_enable_irq,
 	.enable			= intel_pmu_enable_event,
 	.disable		= intel_pmu_disable_event,
 	.hw_config		= intel_pmu_hw_config,
diff --git a/arch/x86/kernel/cpu/perf_event_intel_ds.c b/arch/x86/kernel/cpu/perf_event_intel_ds.c
index 826054a..9c48a00 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_ds.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_ds.c
@@ -619,7 +619,7 @@ static void intel_pmu_drain_pebs_core(struct pt_regs *iregs)
 	 */
 	ds->pebs_index = ds->pebs_buffer_base;
 
-	if (!test_bit(0, cpuc->active_mask))
+	if (!test_bit(0, cpuc->active_mask) && !test_bit(0, cpuc->actirq_mask))
 		return;
 
 	WARN_ON_ONCE(!event);
@@ -671,7 +671,8 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs)
 	for ( ; at < top; at++) {
 		for_each_set_bit(bit, (unsigned long *)&at->status, x86_pmu.max_pebs_events) {
 			event = cpuc->events[bit];
-			if (!test_bit(bit, cpuc->active_mask))
+			if (!test_bit(bit, cpuc->active_mask) &&
+			    !test_bit(bit, cpuc->actirq_mask))
 				continue;
 
 			WARN_ON_ONCE(!event);
-- 
1.7.7.6


-- 
Regards,
Alexander Gordeev
agordeev@redhat.com


* [PATCH RFC -tip 6/6] perf/tool: Hack 'pid' as 'irq' for sys_perf_event_open()
  2012-12-17 11:51 [PATCH RFC -tip 0/6] IRQ-bound performance events Alexander Gordeev
                   ` (4 preceding siblings ...)
  2012-12-17 11:53 ` [PATCH RFC -tip 5/6] perf/x86/Intel " Alexander Gordeev
@ 2012-12-17 11:54 ` Alexander Gordeev
  5 siblings, 0 replies; 8+ messages in thread
From: Alexander Gordeev @ 2012-12-17 11:54 UTC (permalink / raw)
  To: linux-kernel; +Cc: Thomas Gleixner, Ingo Molnar, Arnaldo Carvalho de Melo

This is not a proper change, just a quick hack to make it
possible to test IRQ-bound performance events.

Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
---
 tools/perf/builtin-record.c  |    9 +++++++++
 tools/perf/builtin-stat.c    |   11 +++++++++++
 tools/perf/util/evlist.c     |    4 +++-
 tools/perf/util/evsel.c      |    3 +++
 tools/perf/util/evsel.h      |    1 +
 tools/perf/util/target.c     |    4 ++++
 tools/perf/util/thread_map.c |   16 ++++++++++++++++
 7 files changed, 47 insertions(+), 1 deletions(-)

diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index f3151d3..494eec2 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -228,6 +228,7 @@ static int perf_record__open(struct perf_record *rec)
 	struct perf_evlist *evlist = rec->evlist;
 	struct perf_session *session = rec->session;
 	struct perf_record_opts *opts = &rec->opts;
+	bool irq = false;
 	int rc = 0;
 
 	/*
@@ -239,6 +240,10 @@ static int perf_record__open(struct perf_record *rec)
 
 	perf_evlist__config_attrs(evlist, opts);
 
+	if (perf_target__has_cpu(&opts->target) &&
+	    perf_target__has_task(&opts->target))
+		irq = true;
+
 	list_for_each_entry(pos, &evlist->entries, node) {
 		struct perf_event_attr *attr = &pos->attr;
 		/*
@@ -255,6 +260,8 @@ static int perf_record__open(struct perf_record *rec)
 		 */
 		bool time_needed = attr->sample_type & PERF_SAMPLE_TIME;
 
+		pos->irq = irq;
+
 fallback_missing_features:
 		if (opts->exclude_guest_missing)
 			attr->exclude_guest = attr->exclude_host = 0;
@@ -1000,6 +1007,8 @@ const struct option record_options[] = {
 		     parse_events_option),
 	OPT_CALLBACK(0, "filter", &record.evlist, "filter",
 		     "event filter", parse_filter),
+	OPT_STRING('I', "irq", &record.opts.target.pid, "irq",
+		    "record events on existing irq handler"),
 	OPT_STRING('p', "pid", &record.opts.target.pid, "pid",
 		    "record events on existing process id"),
 	OPT_STRING('t', "tid", &record.opts.target.tid, "tid",
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index c247fac..f56f195 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -145,6 +145,15 @@ retry:
 	if (exclude_guest_missing)
 		evsel->attr.exclude_guest = evsel->attr.exclude_host = 0;
 
+	if (perf_target__has_cpu(&target) && perf_target__has_task(&target)) {
+		evsel->irq = true;
+		ret = perf_evsel__open(evsel, perf_evsel__cpus(evsel),
+				       evsel_list->threads);
+		if (ret)
+			goto check_ret;
+		return 0;
+	}
+
 	if (perf_target__has_cpu(&target)) {
 		ret = perf_evsel__open_per_cpu(evsel, perf_evsel__cpus(evsel));
 		if (ret)
@@ -1108,6 +1117,8 @@ int cmd_stat(int argc, const char **argv, const char *prefix __maybe_unused)
 		     "event filter", parse_filter),
 	OPT_BOOLEAN('i', "no-inherit", &no_inherit,
 		    "child tasks do not inherit counters"),
+	OPT_STRING('I', "irq", &target.pid, "irq",
+		   "stat events on existing irq handler"),
 	OPT_STRING('p', "pid", &target.pid, "pid",
 		   "stat events on existing process id"),
 	OPT_STRING('t', "tid", &target.tid, "tid",
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index 7052934..7809f415 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -572,7 +572,9 @@ int perf_evlist__create_maps(struct perf_evlist *evlist,
 	if (evlist->threads == NULL)
 		return -1;
 
-	if (perf_target__has_task(target))
+	if (perf_target__has_task(target) && perf_target__has_cpu(target))
+		evlist->cpus = cpu_map__new(target->cpu_list);
+	else if (perf_target__has_task(target))
 		evlist->cpus = cpu_map__dummy_new();
 	else if (!perf_target__has_cpu(target) && !target->uses_mmap)
 		evlist->cpus = cpu_map__dummy_new();
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index 1b16dd1..748df69 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -738,6 +738,9 @@ static int __perf_evsel__open(struct perf_evsel *evsel, struct cpu_map *cpus,
 		pid = evsel->cgrp->fd;
 	}
 
+	if (evsel->irq)
+		flags = PERF_FLAG_PID_IRQ;
+
 	for (cpu = 0; cpu < cpus->nr; cpu++) {
 
 		for (thread = 0; thread < threads->nr; thread++) {
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index 3d2b801..a1782a4 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -71,6 +71,7 @@ struct perf_evsel {
 	unsigned int		sample_size;
 	bool 			supported;
 	bool 			needs_swap;
+	bool			irq;
 	/* parse modifier helper */
 	int			exclude_GH;
 	struct perf_evsel	*leader;
diff --git a/tools/perf/util/target.c b/tools/perf/util/target.c
index 065528b..a4469db 100644
--- a/tools/perf/util/target.c
+++ b/tools/perf/util/target.c
@@ -20,12 +20,14 @@ enum perf_target_errno perf_target__validate(struct perf_target *target)
 	if (target->pid)
 		target->tid = target->pid;
 
+#if (0)
 	/* CPU and PID are mutually exclusive */
 	if (target->tid && target->cpu_list) {
 		target->cpu_list = NULL;
 		if (ret == PERF_ERRNO_TARGET__SUCCESS)
 			ret = PERF_ERRNO_TARGET__PID_OVERRIDE_CPU;
 	}
+#endif
 
 	/* UID and PID are mutually exclusive */
 	if (target->tid && target->uid_str) {
@@ -41,12 +43,14 @@ enum perf_target_errno perf_target__validate(struct perf_target *target)
 			ret = PERF_ERRNO_TARGET__UID_OVERRIDE_CPU;
 	}
 
+#if (0)
 	/* PID and SYSTEM are mutually exclusive */
 	if (target->tid && target->system_wide) {
 		target->system_wide = false;
 		if (ret == PERF_ERRNO_TARGET__SUCCESS)
 			ret = PERF_ERRNO_TARGET__PID_OVERRIDE_SYSTEM;
 	}
+#endif
 
 	/* UID and SYSTEM are mutually exclusive */
 	if (target->uid_str && target->system_wide) {
diff --git a/tools/perf/util/thread_map.c b/tools/perf/util/thread_map.c
index 9b5f856..48cc8ec 100644
--- a/tools/perf/util/thread_map.c
+++ b/tools/perf/util/thread_map.c
@@ -159,8 +159,12 @@ static struct thread_map *thread_map__new_by_pid_str(const char *pid_str)
 	struct thread_map *threads = NULL, *nt;
 	char name[256];
 	int items, total_tasks = 0;
+#if (0)
 	struct dirent **namelist = NULL;
 	int i, j = 0;
+#else
+	int j = 0;
+#endif
 	pid_t pid, prev_pid = INT_MAX;
 	char *end_ptr;
 	struct str_node *pos;
@@ -180,7 +184,11 @@ static struct thread_map *thread_map__new_by_pid_str(const char *pid_str)
 			continue;
 
 		sprintf(name, "/proc/%d/task", pid);
+#if (0)
 		items = scandir(name, &namelist, filter, NULL);
+#else
+		items = 1;
+#endif
 		if (items <= 0)
 			goto out_free_threads;
 
@@ -192,12 +200,18 @@ static struct thread_map *thread_map__new_by_pid_str(const char *pid_str)
 
 		threads = nt;
 
+#if (0)
 		for (i = 0; i < items; i++) {
 			threads->map[j++] = atoi(namelist[i]->d_name);
 			free(namelist[i]);
 		}
+#else
+		threads->map[j++] = pid;
+#endif
 		threads->nr = total_tasks;
+#if (0)
 		free(namelist);
+#endif
 	}
 
 out:
@@ -205,9 +219,11 @@ out:
 	return threads;
 
 out_free_namelist:
+#if (0)
 	for (i = 0; i < items; i++)
 		free(namelist[i]);
 	free(namelist);
+#endif
 
 out_free_threads:
 	free(threads);
-- 
1.7.7.6


-- 
Regards,
Alexander Gordeev
agordeev@redhat.com


* [PATCH RFC -tip 5/6] perf/x86/Intel PMU: IRQ-bound performance events
       [not found] <cover.1370251263.git.agordeev@redhat.com>
@ 2013-06-03  9:42 ` Alexander Gordeev
  0 siblings, 0 replies; 8+ messages in thread
From: Alexander Gordeev @ 2013-06-03  9:42 UTC (permalink / raw)
  To: linux-kernel
  Cc: x86, Thomas Gleixner, Ingo Molnar, Peter Zijlstra,
	Arnaldo Carvalho de Melo, Jiri Olsa, Frederic Weisbecker

Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
---
 arch/x86/kernel/cpu/perf_event_intel.c    |   74 +++++++++++++++++++++++++----
 arch/x86/kernel/cpu/perf_event_intel_ds.c |    5 +-
 2 files changed, 68 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index 0e8f183..d215408 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -878,6 +878,24 @@ static inline bool intel_pmu_needs_lbr_smpl(struct perf_event *event)
 	return false;
 }
 
+u64 __get_intel_ctrl_irq_mask(struct cpu_hw_events *cpuc, int irq)
+{
+	int idx;
+	u64 ret = 0;
+
+	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
+		struct perf_event *event = cpuc->events[idx];
+
+		if (!test_bit(idx, cpuc->actirq_mask))
+			continue;
+
+		if ((event->irq == irq) || (irq < 0))
+			ret |= (1ull << event->hw.idx);
+	}
+
+	return ret;
+}
+
 static void intel_pmu_disable_all(void)
 {
 	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
@@ -891,14 +909,14 @@ static void intel_pmu_disable_all(void)
 	intel_pmu_lbr_disable_all();
 }
 
-static void intel_pmu_enable_all(int added)
+static void __intel_pmu_enable(u64 control)
 {
 	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
 
 	intel_pmu_pebs_enable_all();
 	intel_pmu_lbr_enable_all();
-	wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL,
-			x86_pmu.intel_ctrl & ~cpuc->intel_ctrl_guest_mask);
+
+	wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, control);
 
 	if (test_bit(INTEL_PMC_IDX_FIXED_BTS, cpuc->active_mask)) {
 		struct perf_event *event =
@@ -911,6 +929,33 @@ static void intel_pmu_enable_all(int added)
 	}
 }
 
+static void intel_pmu_enable_all(int added)
+{
+	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
+	u64 irq_mask = __get_intel_ctrl_irq_mask(cpuc, -1);
+
+	__intel_pmu_enable(x86_pmu.intel_ctrl &
+			 ~(cpuc->intel_ctrl_guest_mask | irq_mask));
+}
+
+static void intel_pmu_disable_irq(int irq)
+{
+	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
+	u64 irq_mask = __get_intel_ctrl_irq_mask(cpuc, irq);
+
+	wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL,
+		x86_pmu.intel_ctrl & ~(cpuc->intel_ctrl_guest_mask | irq_mask));
+}
+
+static void intel_pmu_enable_irq(int irq)
+{
+	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
+	u64 irq_mask = __get_intel_ctrl_irq_mask(cpuc, irq);
+
+	wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL,
+		(x86_pmu.intel_ctrl & ~cpuc->intel_ctrl_guest_mask) | irq_mask);
+}
+
 /*
  * Workaround for:
  *   Intel Errata AAK100 (model 26)
@@ -992,6 +1037,15 @@ static void intel_pmu_nhm_enable_all(int added)
 	intel_pmu_enable_all(added);
 }
 
+static inline u64 intel_pmu_get_control(void)
+{
+	u64 control;
+
+	rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, control);
+
+	return control;
+}
+
 static inline u64 intel_pmu_get_status(void)
 {
 	u64 status;
@@ -1161,7 +1215,7 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)
 	struct perf_sample_data data;
 	struct cpu_hw_events *cpuc;
 	int bit, loops;
-	u64 status;
+	u64 control, status;
 	int handled;
 
 	cpuc = &__get_cpu_var(cpu_hw_events);
@@ -1176,11 +1230,12 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)
 	 */
 	apic_write(APIC_LVTPC, APIC_DM_NMI);
 
+	control = intel_pmu_get_control();
 	intel_pmu_disable_all();
 	handled = intel_pmu_drain_bts_buffer();
 	status = intel_pmu_get_status();
 	if (!status) {
-		intel_pmu_enable_all(0);
+		__intel_pmu_enable(control);
 		return handled;
 	}
 
@@ -1211,7 +1266,8 @@ again:
 
 		handled++;
 
-		if (!test_bit(bit, cpuc->active_mask))
+		if (!test_bit(bit, cpuc->active_mask) &&
+		    !test_bit(bit, cpuc->actirq_mask))
 			continue;
 
 		if (!intel_pmu_save_and_restart(event))
@@ -1234,7 +1290,7 @@ again:
 		goto again;
 
 done:
-	intel_pmu_enable_all(0);
+	__intel_pmu_enable(control);
 	return handled;
 }
 
@@ -1839,8 +1895,8 @@ static __initconst const struct x86_pmu intel_pmu = {
 	.handle_irq		= intel_pmu_handle_irq,
 	.disable_all		= intel_pmu_disable_all,
 	.enable_all		= intel_pmu_enable_all,
-	.disable_irq		= x86_pmu_enable_irq_nop_int,
-	.enable_irq		= x86_pmu_enable_irq_nop_int,
+	.disable_irq		= intel_pmu_disable_irq,
+	.enable_irq		= intel_pmu_enable_irq,
 	.enable			= intel_pmu_enable_event,
 	.disable		= intel_pmu_disable_event,
 	.hw_config		= intel_pmu_hw_config,
diff --git a/arch/x86/kernel/cpu/perf_event_intel_ds.c b/arch/x86/kernel/cpu/perf_event_intel_ds.c
index 60250f6..e72769a 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_ds.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_ds.c
@@ -784,7 +784,7 @@ static void intel_pmu_drain_pebs_core(struct pt_regs *iregs)
 	 */
 	ds->pebs_index = ds->pebs_buffer_base;
 
-	if (!test_bit(0, cpuc->active_mask))
+	if (!test_bit(0, cpuc->active_mask) && !test_bit(0, cpuc->actirq_mask))
 		return;
 
 	WARN_ON_ONCE(!event);
@@ -836,7 +836,8 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs)
 	for ( ; at < top; at++) {
 		for_each_set_bit(bit, (unsigned long *)&at->status, x86_pmu.max_pebs_events) {
 			event = cpuc->events[bit];
-			if (!test_bit(bit, cpuc->active_mask))
+			if (!test_bit(bit, cpuc->active_mask) &&
+			    !test_bit(bit, cpuc->actirq_mask))
 				continue;
 
 			WARN_ON_ONCE(!event);
-- 
1.7.7.6


-- 
Regards,
Alexander Gordeev
agordeev@redhat.com

