From: Ingo Molnar <mingo@elte.hu>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org,
	Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Arnaldo Carvalho de Melo <acme@redhat.com>,
	Paul Mackerras <paulus@samba.org>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: [GIT PULL] perf fixes
Date: Thu, 11 Mar 2010 20:12:04 +0100
Message-ID: <20100311191204.GA368@elte.hu>

Linus,

Please pull the latest perf-fixes-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip.git perf-fixes-for-linus

 Thanks,

	Ingo

------------------>
Arnaldo Carvalho de Melo (1):
      perf archive: Don't try to collect files without a build-id

Frederic Weisbecker (5):
      perf lock: Drop the buffers multiplexing dependency
      perf: Remove pointless breakpoint union
      x86/hw-breakpoints: Remove the name field
      hw-breakpoints: Remove stub unthrottle callback
      x86/stacktrace: Don't dereference bad frame pointers

Hitoshi Mitake (1):
      perf lock: Fix and add misc documentally things

Ingo Molnar (1):
      MAINTAINERS: Add Arnaldo as tools/perf/ co-maintainer

Peter Zijlstra (3):
      perf, x86: Restrict the ANY flag
      perf_events, x86: Fixup fixed counter constraints
      perf: Provide generic perf_sample_data initialization

Robert Richter (3):
      perf, x86: make IBS macros available in perf_event.h
      perf, x86: add some IBS macros to perf_event.h
      perf, x86: rename macro in ARCH_PERFMON_EVENTSEL_ENABLE

Tejun Heo (1):
      percpu: Add __percpu sparse annotations to hw_breakpoint

Tom Zanussi (2):
      perf trace/scripting: Remove extraneous header read
      perf trace: Don't use pager if scripting

Will Deacon (1):
      perf, ARM: Modify kuser rmb() call to compile for Thumb-2
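
For reference: the perf_sample_data_init() helper added by the
"perf: Provide generic perf_sample_data initialization" commit replaces
the open-coded setup of struct perf_sample_data at each call site. A
minimal before/after sketch of the pattern, distilled from the hunks
below (illustration only, not part of the patch):

	struct perf_sample_data data;

	/* before: every caller initialized the fields by hand */
	data.addr = 0;
	data.raw  = NULL;

	/* after: one helper, so ->raw can never be left uninitialized */
	perf_sample_data_init(&data, 0);
	data.period = event->hw.last_period;	/* extra fields as needed */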


 MAINTAINERS                             |    1 +
 arch/arm/kernel/perf_event.c            |    4 +-
 arch/powerpc/kernel/perf_event.c        |    8 +-
 arch/sparc/kernel/perf_event.c          |    2 +-
 arch/x86/include/asm/hw_breakpoint.h    |    1 -
 arch/x86/include/asm/perf_event.h       |   16 +++-
 arch/x86/kernel/cpu/perf_event.c        |   39 ++++++---
 arch/x86/kernel/cpu/perf_event_intel.c  |   37 +++++---
 arch/x86/kernel/cpu/perf_event_p6.c     |    8 +-
 arch/x86/kernel/cpu/perfctr-watchdog.c  |    2 +-
 arch/x86/kernel/dumpstack_64.c          |   10 ++-
 arch/x86/kernel/hw_breakpoint.c         |   12 ---
 arch/x86/oprofile/op_model_amd.c        |   23 ++----
 arch/x86/oprofile/op_model_ppro.c       |    6 +-
 include/linux/hw_breakpoint.h           |    8 +-
 include/linux/perf_event.h              |   27 +++++-
 kernel/hw_breakpoint.c                  |   11 +--
 kernel/perf_event.c                     |   36 ++------
 lib/Kconfig.debug                       |    8 ++
 samples/hw_breakpoint/data_breakpoint.c |    6 +-
 tools/perf/Documentation/perf-lock.txt  |   29 ++++++
 tools/perf/builtin-lock.c               |  148 ++++++++++++++++++++++++++++++-
 tools/perf/builtin-trace.c              |    4 +-
 tools/perf/command-list.txt             |    1 +
 tools/perf/perf-archive.sh              |    3 +-
 tools/perf/perf.h                       |    4 +-
 tools/perf/util/probe-event.c           |    2 +-
 27 files changed, 327 insertions(+), 129 deletions(-)
 create mode 100644 tools/perf/Documentation/perf-lock.txt

diff --git a/MAINTAINERS b/MAINTAINERS
index c8a8b1f..14d5c4c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4293,6 +4293,7 @@ PERFORMANCE EVENTS SUBSYSTEM
 M:	Peter Zijlstra <a.p.zijlstra@chello.nl>
 M:	Paul Mackerras <paulus@samba.org>
 M:	Ingo Molnar <mingo@elte.hu>
+M:	Arnaldo Carvalho de Melo <acme@redhat.com>
 S:	Supported
 F:	kernel/perf_event.c
 F:	include/linux/perf_event.h
diff --git a/arch/arm/kernel/perf_event.c b/arch/arm/kernel/perf_event.c
index c54ceb3..3875d99 100644
--- a/arch/arm/kernel/perf_event.c
+++ b/arch/arm/kernel/perf_event.c
@@ -965,7 +965,7 @@ armv6pmu_handle_irq(int irq_num,
 	 */
 	armv6_pmcr_write(pmcr);
 
-	data.addr = 0;
+	perf_sample_data_init(&data, 0);
 
 	cpuc = &__get_cpu_var(cpu_hw_events);
 	for (idx = 0; idx <= armpmu->num_events; ++idx) {
@@ -1945,7 +1945,7 @@ static irqreturn_t armv7pmu_handle_irq(int irq_num, void *dev)
 	 */
 	regs = get_irq_regs();
 
-	data.addr = 0;
+	perf_sample_data_init(&data, 0);
 
 	cpuc = &__get_cpu_var(cpu_hw_events);
 	for (idx = 0; idx <= armpmu->num_events; ++idx) {
diff --git a/arch/powerpc/kernel/perf_event.c b/arch/powerpc/kernel/perf_event.c
index b6cf8f1..5120bd4 100644
--- a/arch/powerpc/kernel/perf_event.c
+++ b/arch/powerpc/kernel/perf_event.c
@@ -1164,10 +1164,10 @@ static void record_and_restart(struct perf_event *event, unsigned long val,
 	 * Finally record data if requested.
 	 */
 	if (record) {
-		struct perf_sample_data data = {
-			.addr	= ~0ULL,
-			.period	= event->hw.last_period,
-		};
+		struct perf_sample_data data;
+
+		perf_sample_data_init(&data, ~0ULL);
+		data.period = event->hw.last_period;
 
 		if (event->attr.sample_type & PERF_SAMPLE_ADDR)
 			perf_get_data_addr(regs, &data.addr);
diff --git a/arch/sparc/kernel/perf_event.c b/arch/sparc/kernel/perf_event.c
index 9f2b2ba..6504208 100644
--- a/arch/sparc/kernel/perf_event.c
+++ b/arch/sparc/kernel/perf_event.c
@@ -1189,7 +1189,7 @@ static int __kprobes perf_event_nmi_handler(struct notifier_block *self,
 
 	regs = args->regs;
 
-	data.addr = 0;
+	perf_sample_data_init(&data, 0);
 
 	cpuc = &__get_cpu_var(cpu_hw_events);
 
diff --git a/arch/x86/include/asm/hw_breakpoint.h b/arch/x86/include/asm/hw_breakpoint.h
index 0675a7c..2a1bd8f 100644
--- a/arch/x86/include/asm/hw_breakpoint.h
+++ b/arch/x86/include/asm/hw_breakpoint.h
@@ -10,7 +10,6 @@
  * (display/resolving)
  */
 struct arch_hw_breakpoint {
-	char		*name; /* Contains name of the symbol to set bkpt */
 	unsigned long	address;
 	u8		len;
 	u8		type;
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index befd172..db6109a 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -18,7 +18,7 @@
 #define MSR_ARCH_PERFMON_EVENTSEL0			     0x186
 #define MSR_ARCH_PERFMON_EVENTSEL1			     0x187
 
-#define ARCH_PERFMON_EVENTSEL0_ENABLE			  (1 << 22)
+#define ARCH_PERFMON_EVENTSEL_ENABLE			  (1 << 22)
 #define ARCH_PERFMON_EVENTSEL_ANY			  (1 << 21)
 #define ARCH_PERFMON_EVENTSEL_INT			  (1 << 20)
 #define ARCH_PERFMON_EVENTSEL_OS			  (1 << 17)
@@ -50,7 +50,7 @@
 	 INTEL_ARCH_INV_MASK| \
 	 INTEL_ARCH_EDGE_MASK|\
 	 INTEL_ARCH_UNIT_MASK|\
-	 INTEL_ARCH_EVTSEL_MASK)
+	 INTEL_ARCH_EVENT_MASK)
 
 #define ARCH_PERFMON_UNHALTED_CORE_CYCLES_SEL		      0x3c
 #define ARCH_PERFMON_UNHALTED_CORE_CYCLES_UMASK		(0x00 << 8)
@@ -117,6 +117,18 @@ union cpuid10_edx {
  */
 #define X86_PMC_IDX_FIXED_BTS				(X86_PMC_IDX_FIXED + 16)
 
+/* IbsFetchCtl bits/masks */
+#define IBS_FETCH_RAND_EN		(1ULL<<57)
+#define IBS_FETCH_VAL			(1ULL<<49)
+#define IBS_FETCH_ENABLE		(1ULL<<48)
+#define IBS_FETCH_CNT			0xFFFF0000ULL
+#define IBS_FETCH_MAX_CNT		0x0000FFFFULL
+
+/* IbsOpCtl bits */
+#define IBS_OP_CNT_CTL			(1ULL<<19)
+#define IBS_OP_VAL			(1ULL<<18)
+#define IBS_OP_ENABLE			(1ULL<<17)
+#define IBS_OP_MAX_CNT			0x0000FFFFULL
 
 #ifdef CONFIG_PERF_EVENTS
 extern void init_hw_perf_events(void);
diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
index b1fbdee..42aafd1 100644
--- a/arch/x86/kernel/cpu/perf_event.c
+++ b/arch/x86/kernel/cpu/perf_event.c
@@ -73,10 +73,10 @@ struct debug_store {
 struct event_constraint {
 	union {
 		unsigned long	idxmsk[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
-		u64		idxmsk64[1];
+		u64		idxmsk64;
 	};
-	int	code;
-	int	cmask;
+	u64	code;
+	u64	cmask;
 	int	weight;
 };
 
@@ -103,7 +103,7 @@ struct cpu_hw_events {
 };
 
 #define __EVENT_CONSTRAINT(c, n, m, w) {\
-	{ .idxmsk64[0] = (n) },		\
+	{ .idxmsk64 = (n) },		\
 	.code = (c),			\
 	.cmask = (m),			\
 	.weight = (w),			\
@@ -116,7 +116,7 @@ struct cpu_hw_events {
 	EVENT_CONSTRAINT(c, n, INTEL_ARCH_EVTSEL_MASK)
 
 #define FIXED_EVENT_CONSTRAINT(c, n)	\
-	EVENT_CONSTRAINT(c, n, INTEL_ARCH_FIXED_MASK)
+	EVENT_CONSTRAINT(c, (1ULL << (32+n)), INTEL_ARCH_FIXED_MASK)
 
 #define EVENT_CONSTRAINT_END		\
 	EVENT_CONSTRAINT(0, 0, 0)
@@ -503,6 +503,9 @@ static int __hw_perf_event_init(struct perf_event *event)
 	 */
 	if (attr->type == PERF_TYPE_RAW) {
 		hwc->config |= x86_pmu.raw_event(attr->config);
+		if ((hwc->config & ARCH_PERFMON_EVENTSEL_ANY) &&
+		    perf_paranoid_cpu() && !capable(CAP_SYS_ADMIN))
+			return -EACCES;
 		return 0;
 	}
 
@@ -553,9 +556,9 @@ static void x86_pmu_disable_all(void)
 		if (!test_bit(idx, cpuc->active_mask))
 			continue;
 		rdmsrl(x86_pmu.eventsel + idx, val);
-		if (!(val & ARCH_PERFMON_EVENTSEL0_ENABLE))
+		if (!(val & ARCH_PERFMON_EVENTSEL_ENABLE))
 			continue;
-		val &= ~ARCH_PERFMON_EVENTSEL0_ENABLE;
+		val &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
 		wrmsrl(x86_pmu.eventsel + idx, val);
 	}
 }
@@ -590,7 +593,7 @@ static void x86_pmu_enable_all(void)
 			continue;
 
 		val = event->hw.config;
-		val |= ARCH_PERFMON_EVENTSEL0_ENABLE;
+		val |= ARCH_PERFMON_EVENTSEL_ENABLE;
 		wrmsrl(x86_pmu.eventsel + idx, val);
 	}
 }
@@ -612,8 +615,8 @@ static int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign)
 	bitmap_zero(used_mask, X86_PMC_IDX_MAX);
 
 	for (i = 0; i < n; i++) {
-		constraints[i] =
-		  x86_pmu.get_event_constraints(cpuc, cpuc->event_list[i]);
+		c = x86_pmu.get_event_constraints(cpuc, cpuc->event_list[i]);
+		constraints[i] = c;
 	}
 
 	/*
@@ -853,7 +856,7 @@ void hw_perf_enable(void)
 static inline void __x86_pmu_enable_event(struct hw_perf_event *hwc, int idx)
 {
 	(void)checking_wrmsrl(hwc->config_base + idx,
-			      hwc->config | ARCH_PERFMON_EVENTSEL0_ENABLE);
+			      hwc->config | ARCH_PERFMON_EVENTSEL_ENABLE);
 }
 
 static inline void x86_pmu_disable_event(struct hw_perf_event *hwc, int idx)
@@ -1094,8 +1097,7 @@ static int x86_pmu_handle_irq(struct pt_regs *regs)
 	int idx, handled = 0;
 	u64 val;
 
-	data.addr = 0;
-	data.raw = NULL;
+	perf_sample_data_init(&data, 0);
 
 	cpuc = &__get_cpu_var(cpu_hw_events);
 
@@ -1347,6 +1349,7 @@ static void __init pmu_check_apic(void)
 
 void __init init_hw_perf_events(void)
 {
+	struct event_constraint *c;
 	int err;
 
 	pr_info("Performance Events: ");
@@ -1395,6 +1398,16 @@ void __init init_hw_perf_events(void)
 		__EVENT_CONSTRAINT(0, (1ULL << x86_pmu.num_events) - 1,
 				   0, x86_pmu.num_events);
 
+	if (x86_pmu.event_constraints) {
+		for_each_event_constraint(c, x86_pmu.event_constraints) {
+			if (c->cmask != INTEL_ARCH_FIXED_MASK)
+				continue;
+
+			c->idxmsk64 |= (1ULL << x86_pmu.num_events) - 1;
+			c->weight += x86_pmu.num_events;
+		}
+	}
+
 	pr_info("... version:                %d\n",     x86_pmu.version);
 	pr_info("... bit width:              %d\n",     x86_pmu.event_bits);
 	pr_info("... generic registers:      %d\n",     x86_pmu.num_events);
diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index 977e754..44b60c8 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -1,7 +1,7 @@
 #ifdef CONFIG_CPU_SUP_INTEL
 
 /*
- * Intel PerfMon v3. Used on Core2 and later.
+ * Intel PerfMon, used on Core and later.
  */
 static const u64 intel_perfmon_event_map[] =
 {
@@ -27,8 +27,14 @@ static struct event_constraint intel_core_event_constraints[] =
 
 static struct event_constraint intel_core2_event_constraints[] =
 {
-	FIXED_EVENT_CONSTRAINT(0xc0, (0x3|(1ULL<<32))), /* INSTRUCTIONS_RETIRED */
-	FIXED_EVENT_CONSTRAINT(0x3c, (0x3|(1ULL<<33))), /* UNHALTED_CORE_CYCLES */
+	FIXED_EVENT_CONSTRAINT(0x00c0, 0), /* INST_RETIRED.ANY */
+	FIXED_EVENT_CONSTRAINT(0x003c, 1), /* CPU_CLK_UNHALTED.CORE */
+	/*
+	 * Core2 has Fixed Counter 2 listed as CPU_CLK_UNHALTED.REF and event
+	 * 0x013c as CPU_CLK_UNHALTED.BUS and specifies there is a fixed
+	 * ratio between these counters.
+	 */
+	/* FIXED_EVENT_CONSTRAINT(0x013c, 2),  CPU_CLK_UNHALTED.REF */
 	INTEL_EVENT_CONSTRAINT(0x10, 0x1), /* FP_COMP_OPS_EXE */
 	INTEL_EVENT_CONSTRAINT(0x11, 0x2), /* FP_ASSIST */
 	INTEL_EVENT_CONSTRAINT(0x12, 0x2), /* MUL */
@@ -37,14 +43,16 @@ static struct event_constraint intel_core2_event_constraints[] =
 	INTEL_EVENT_CONSTRAINT(0x18, 0x1), /* IDLE_DURING_DIV */
 	INTEL_EVENT_CONSTRAINT(0x19, 0x2), /* DELAYED_BYPASS */
 	INTEL_EVENT_CONSTRAINT(0xa1, 0x1), /* RS_UOPS_DISPATCH_CYCLES */
+	INTEL_EVENT_CONSTRAINT(0xc9, 0x1), /* ITLB_MISS_RETIRED (T30-9) */
 	INTEL_EVENT_CONSTRAINT(0xcb, 0x1), /* MEM_LOAD_RETIRED */
 	EVENT_CONSTRAINT_END
 };
 
 static struct event_constraint intel_nehalem_event_constraints[] =
 {
-	FIXED_EVENT_CONSTRAINT(0xc0, (0xf|(1ULL<<32))), /* INSTRUCTIONS_RETIRED */
-	FIXED_EVENT_CONSTRAINT(0x3c, (0xf|(1ULL<<33))), /* UNHALTED_CORE_CYCLES */
+	FIXED_EVENT_CONSTRAINT(0x00c0, 0), /* INST_RETIRED.ANY */
+	FIXED_EVENT_CONSTRAINT(0x003c, 1), /* CPU_CLK_UNHALTED.CORE */
+	/* FIXED_EVENT_CONSTRAINT(0x013c, 2), CPU_CLK_UNHALTED.REF */
 	INTEL_EVENT_CONSTRAINT(0x40, 0x3), /* L1D_CACHE_LD */
 	INTEL_EVENT_CONSTRAINT(0x41, 0x3), /* L1D_CACHE_ST */
 	INTEL_EVENT_CONSTRAINT(0x42, 0x3), /* L1D_CACHE_LOCK */
@@ -58,8 +66,9 @@ static struct event_constraint intel_nehalem_event_constraints[] =
 
 static struct event_constraint intel_westmere_event_constraints[] =
 {
-	FIXED_EVENT_CONSTRAINT(0xc0, (0xf|(1ULL<<32))), /* INSTRUCTIONS_RETIRED */
-	FIXED_EVENT_CONSTRAINT(0x3c, (0xf|(1ULL<<33))), /* UNHALTED_CORE_CYCLES */
+	FIXED_EVENT_CONSTRAINT(0x00c0, 0), /* INST_RETIRED.ANY */
+	FIXED_EVENT_CONSTRAINT(0x003c, 1), /* CPU_CLK_UNHALTED.CORE */
+	/* FIXED_EVENT_CONSTRAINT(0x013c, 2), CPU_CLK_UNHALTED.REF */
 	INTEL_EVENT_CONSTRAINT(0x51, 0x3), /* L1D */
 	INTEL_EVENT_CONSTRAINT(0x60, 0x1), /* OFFCORE_REQUESTS_OUTSTANDING */
 	INTEL_EVENT_CONSTRAINT(0x63, 0x3), /* CACHE_LOCK_CYCLES */
@@ -68,8 +77,9 @@ static struct event_constraint intel_westmere_event_constraints[] =
 
 static struct event_constraint intel_gen_event_constraints[] =
 {
-	FIXED_EVENT_CONSTRAINT(0xc0, (0x3|(1ULL<<32))), /* INSTRUCTIONS_RETIRED */
-	FIXED_EVENT_CONSTRAINT(0x3c, (0x3|(1ULL<<33))), /* UNHALTED_CORE_CYCLES */
+	FIXED_EVENT_CONSTRAINT(0x00c0, 0), /* INST_RETIRED.ANY */
+	FIXED_EVENT_CONSTRAINT(0x003c, 1), /* CPU_CLK_UNHALTED.CORE */
+	/* FIXED_EVENT_CONSTRAINT(0x013c, 2), CPU_CLK_UNHALTED.REF */
 	EVENT_CONSTRAINT_END
 };
 
@@ -580,10 +590,9 @@ static void intel_pmu_drain_bts_buffer(void)
 
 	ds->bts_index = ds->bts_buffer_base;
 
+	perf_sample_data_init(&data, 0);
 
 	data.period	= event->hw.last_period;
-	data.addr	= 0;
-	data.raw	= NULL;
 	regs.ip		= 0;
 
 	/*
@@ -732,8 +741,7 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)
 	int bit, loops;
 	u64 ack, status;
 
-	data.addr = 0;
-	data.raw = NULL;
+	perf_sample_data_init(&data, 0);
 
 	cpuc = &__get_cpu_var(cpu_hw_events);
 
@@ -935,7 +943,7 @@ static __init int intel_pmu_init(void)
 		x86_pmu.event_constraints = intel_nehalem_event_constraints;
 		pr_cont("Nehalem/Corei7 events, ");
 		break;
-	case 28:
+	case 28: /* Atom */
 		memcpy(hw_cache_event_ids, atom_hw_cache_event_ids,
 		       sizeof(hw_cache_event_ids));
 
@@ -951,6 +959,7 @@ static __init int intel_pmu_init(void)
 		x86_pmu.event_constraints = intel_westmere_event_constraints;
 		pr_cont("Westmere events, ");
 		break;
+
 	default:
 		/*
 		 * default constraints for v2 and up
diff --git a/arch/x86/kernel/cpu/perf_event_p6.c b/arch/x86/kernel/cpu/perf_event_p6.c
index 1ca5ba0..a4e67b9 100644
--- a/arch/x86/kernel/cpu/perf_event_p6.c
+++ b/arch/x86/kernel/cpu/perf_event_p6.c
@@ -62,7 +62,7 @@ static void p6_pmu_disable_all(void)
 
 	/* p6 only has one enable register */
 	rdmsrl(MSR_P6_EVNTSEL0, val);
-	val &= ~ARCH_PERFMON_EVENTSEL0_ENABLE;
+	val &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
 	wrmsrl(MSR_P6_EVNTSEL0, val);
 }
 
@@ -72,7 +72,7 @@ static void p6_pmu_enable_all(void)
 
 	/* p6 only has one enable register */
 	rdmsrl(MSR_P6_EVNTSEL0, val);
-	val |= ARCH_PERFMON_EVENTSEL0_ENABLE;
+	val |= ARCH_PERFMON_EVENTSEL_ENABLE;
 	wrmsrl(MSR_P6_EVNTSEL0, val);
 }
 
@@ -83,7 +83,7 @@ p6_pmu_disable_event(struct hw_perf_event *hwc, int idx)
 	u64 val = P6_NOP_EVENT;
 
 	if (cpuc->enabled)
-		val |= ARCH_PERFMON_EVENTSEL0_ENABLE;
+		val |= ARCH_PERFMON_EVENTSEL_ENABLE;
 
 	(void)checking_wrmsrl(hwc->config_base + idx, val);
 }
@@ -95,7 +95,7 @@ static void p6_pmu_enable_event(struct hw_perf_event *hwc, int idx)
 
 	val = hwc->config;
 	if (cpuc->enabled)
-		val |= ARCH_PERFMON_EVENTSEL0_ENABLE;
+		val |= ARCH_PERFMON_EVENTSEL_ENABLE;
 
 	(void)checking_wrmsrl(hwc->config_base + idx, val);
 }
diff --git a/arch/x86/kernel/cpu/perfctr-watchdog.c b/arch/x86/kernel/cpu/perfctr-watchdog.c
index 74f4e85..fb329e9 100644
--- a/arch/x86/kernel/cpu/perfctr-watchdog.c
+++ b/arch/x86/kernel/cpu/perfctr-watchdog.c
@@ -680,7 +680,7 @@ static int setup_intel_arch_watchdog(unsigned nmi_hz)
 	cpu_nmi_set_wd_enabled();
 
 	apic_write(APIC_LVTPC, APIC_DM_NMI);
-	evntsel |= ARCH_PERFMON_EVENTSEL0_ENABLE;
+	evntsel |= ARCH_PERFMON_EVENTSEL_ENABLE;
 	wrmsr(evntsel_msr, evntsel, 0);
 	intel_arch_wd_ops.checkbit = 1ULL << (eax.split.bit_width - 1);
 	return 1;
diff --git a/arch/x86/kernel/dumpstack_64.c b/arch/x86/kernel/dumpstack_64.c
index dce99ab..d5e2a2e 100644
--- a/arch/x86/kernel/dumpstack_64.c
+++ b/arch/x86/kernel/dumpstack_64.c
@@ -120,9 +120,15 @@ fixup_bp_irq_link(unsigned long bp, unsigned long *stack,
 {
 #ifdef CONFIG_FRAME_POINTER
 	struct stack_frame *frame = (struct stack_frame *)bp;
+	unsigned long next;
 
-	if (!in_irq_stack(stack, irq_stack, irq_stack_end))
-		return (unsigned long)frame->next_frame;
+	if (!in_irq_stack(stack, irq_stack, irq_stack_end)) {
+		if (!probe_kernel_address(&frame->next_frame, next))
+			return next;
+		else
+			WARN_ONCE(1, "Perf: bad frame pointer = %p in "
+				  "callchain\n", &frame->next_frame);
+	}
 #endif
 	return bp;
 }
diff --git a/arch/x86/kernel/hw_breakpoint.c b/arch/x86/kernel/hw_breakpoint.c
index dca2802..d6cc065 100644
--- a/arch/x86/kernel/hw_breakpoint.c
+++ b/arch/x86/kernel/hw_breakpoint.c
@@ -344,13 +344,6 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp,
 	}
 
 	/*
-	 * For kernel-addresses, either the address or symbol name can be
-	 * specified.
-	 */
-	if (info->name)
-		info->address = (unsigned long)
-				kallsyms_lookup_name(info->name);
-	/*
 	 * Check that the low-order bits of the address are appropriate
 	 * for the alignment implied by len.
 	 */
@@ -535,8 +528,3 @@ void hw_breakpoint_pmu_read(struct perf_event *bp)
 {
 	/* TODO */
 }
-
-void hw_breakpoint_pmu_unthrottle(struct perf_event *bp)
-{
-	/* TODO */
-}
diff --git a/arch/x86/oprofile/op_model_amd.c b/arch/x86/oprofile/op_model_amd.c
index 6a58256..090cbbe 100644
--- a/arch/x86/oprofile/op_model_amd.c
+++ b/arch/x86/oprofile/op_model_amd.c
@@ -46,17 +46,6 @@
 
 static unsigned long reset_value[NUM_VIRT_COUNTERS];
 
-/* IbsFetchCtl bits/masks */
-#define IBS_FETCH_RAND_EN		(1ULL<<57)
-#define IBS_FETCH_VAL			(1ULL<<49)
-#define IBS_FETCH_ENABLE		(1ULL<<48)
-#define IBS_FETCH_CNT_MASK		0xFFFF0000ULL
-
-/* IbsOpCtl bits */
-#define IBS_OP_CNT_CTL			(1ULL<<19)
-#define IBS_OP_VAL			(1ULL<<18)
-#define IBS_OP_ENABLE			(1ULL<<17)
-
 #define IBS_FETCH_SIZE			6
 #define IBS_OP_SIZE			12
 
@@ -182,7 +171,7 @@ static void op_amd_setup_ctrs(struct op_x86_model_spec const *model,
 			continue;
 		}
 		rdmsrl(msrs->controls[i].addr, val);
-		if (val & ARCH_PERFMON_EVENTSEL0_ENABLE)
+		if (val & ARCH_PERFMON_EVENTSEL_ENABLE)
 			op_x86_warn_in_use(i);
 		val &= model->reserved;
 		wrmsrl(msrs->controls[i].addr, val);
@@ -290,7 +279,7 @@ op_amd_handle_ibs(struct pt_regs * const regs,
 			oprofile_write_commit(&entry);
 
 			/* reenable the IRQ */
-			ctl &= ~(IBS_FETCH_VAL | IBS_FETCH_CNT_MASK);
+			ctl &= ~(IBS_FETCH_VAL | IBS_FETCH_CNT);
 			ctl |= IBS_FETCH_ENABLE;
 			wrmsrl(MSR_AMD64_IBSFETCHCTL, ctl);
 		}
@@ -330,7 +319,7 @@ static inline void op_amd_start_ibs(void)
 		return;
 
 	if (ibs_config.fetch_enabled) {
-		val = (ibs_config.max_cnt_fetch >> 4) & 0xFFFF;
+		val = (ibs_config.max_cnt_fetch >> 4) & IBS_FETCH_MAX_CNT;
 		val |= ibs_config.rand_en ? IBS_FETCH_RAND_EN : 0;
 		val |= IBS_FETCH_ENABLE;
 		wrmsrl(MSR_AMD64_IBSFETCHCTL, val);
@@ -352,7 +341,7 @@ static inline void op_amd_start_ibs(void)
 			 * avoid underflows.
 			 */
 			ibs_op_ctl = min(ibs_op_ctl + IBS_RANDOM_MAXCNT_OFFSET,
-					 0xFFFFULL);
+					 IBS_OP_MAX_CNT);
 		}
 		if (ibs_caps & IBS_CAPS_OPCNT && ibs_config.dispatched_ops)
 			ibs_op_ctl |= IBS_OP_CNT_CTL;
@@ -409,7 +398,7 @@ static void op_amd_start(struct op_msrs const * const msrs)
 		if (!reset_value[op_x86_phys_to_virt(i)])
 			continue;
 		rdmsrl(msrs->controls[i].addr, val);
-		val |= ARCH_PERFMON_EVENTSEL0_ENABLE;
+		val |= ARCH_PERFMON_EVENTSEL_ENABLE;
 		wrmsrl(msrs->controls[i].addr, val);
 	}
 
@@ -429,7 +418,7 @@ static void op_amd_stop(struct op_msrs const * const msrs)
 		if (!reset_value[op_x86_phys_to_virt(i)])
 			continue;
 		rdmsrl(msrs->controls[i].addr, val);
-		val &= ~ARCH_PERFMON_EVENTSEL0_ENABLE;
+		val &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
 		wrmsrl(msrs->controls[i].addr, val);
 	}
 
diff --git a/arch/x86/oprofile/op_model_ppro.c b/arch/x86/oprofile/op_model_ppro.c
index 5d1727b..2bf90fa 100644
--- a/arch/x86/oprofile/op_model_ppro.c
+++ b/arch/x86/oprofile/op_model_ppro.c
@@ -88,7 +88,7 @@ static void ppro_setup_ctrs(struct op_x86_model_spec const *model,
 			continue;
 		}
 		rdmsrl(msrs->controls[i].addr, val);
-		if (val & ARCH_PERFMON_EVENTSEL0_ENABLE)
+		if (val & ARCH_PERFMON_EVENTSEL_ENABLE)
 			op_x86_warn_in_use(i);
 		val &= model->reserved;
 		wrmsrl(msrs->controls[i].addr, val);
@@ -166,7 +166,7 @@ static void ppro_start(struct op_msrs const * const msrs)
 	for (i = 0; i < num_counters; ++i) {
 		if (reset_value[i]) {
 			rdmsrl(msrs->controls[i].addr, val);
-			val |= ARCH_PERFMON_EVENTSEL0_ENABLE;
+			val |= ARCH_PERFMON_EVENTSEL_ENABLE;
 			wrmsrl(msrs->controls[i].addr, val);
 		}
 	}
@@ -184,7 +184,7 @@ static void ppro_stop(struct op_msrs const * const msrs)
 		if (!reset_value[i])
 			continue;
 		rdmsrl(msrs->controls[i].addr, val);
-		val &= ~ARCH_PERFMON_EVENTSEL0_ENABLE;
+		val &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
 		wrmsrl(msrs->controls[i].addr, val);
 	}
 }
diff --git a/include/linux/hw_breakpoint.h b/include/linux/hw_breakpoint.h
index 5977b72..c70d27a 100644
--- a/include/linux/hw_breakpoint.h
+++ b/include/linux/hw_breakpoint.h
@@ -66,14 +66,14 @@ register_wide_hw_breakpoint_cpu(struct perf_event_attr *attr,
 				perf_overflow_handler_t	triggered,
 				int cpu);
 
-extern struct perf_event **
+extern struct perf_event * __percpu *
 register_wide_hw_breakpoint(struct perf_event_attr *attr,
 			    perf_overflow_handler_t triggered);
 
 extern int register_perf_hw_breakpoint(struct perf_event *bp);
 extern int __register_perf_hw_breakpoint(struct perf_event *bp);
 extern void unregister_hw_breakpoint(struct perf_event *bp);
-extern void unregister_wide_hw_breakpoint(struct perf_event **cpu_events);
+extern void unregister_wide_hw_breakpoint(struct perf_event * __percpu *cpu_events);
 
 extern int dbg_reserve_bp_slot(struct perf_event *bp);
 extern int dbg_release_bp_slot(struct perf_event *bp);
@@ -100,7 +100,7 @@ static inline struct perf_event *
 register_wide_hw_breakpoint_cpu(struct perf_event_attr *attr,
 				perf_overflow_handler_t	 triggered,
 				int cpu)		{ return NULL; }
-static inline struct perf_event **
+static inline struct perf_event * __percpu *
 register_wide_hw_breakpoint(struct perf_event_attr *attr,
 			    perf_overflow_handler_t triggered)	{ return NULL; }
 static inline int
@@ -109,7 +109,7 @@ static inline int
 __register_perf_hw_breakpoint(struct perf_event *bp) 	{ return -ENOSYS; }
 static inline void unregister_hw_breakpoint(struct perf_event *bp)	{ }
 static inline void
-unregister_wide_hw_breakpoint(struct perf_event **cpu_events)		{ }
+unregister_wide_hw_breakpoint(struct perf_event * __percpu *cpu_events)	{ }
 static inline int
 reserve_bp_slot(struct perf_event *bp)			{return -ENOSYS; }
 static inline void release_bp_slot(struct perf_event *bp) 		{ }
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 7b18b4f..6f8cd7d 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -487,9 +487,8 @@ struct hw_perf_event {
 			struct hrtimer	hrtimer;
 		};
 #ifdef CONFIG_HAVE_HW_BREAKPOINT
-		union { /* breakpoint */
-			struct arch_hw_breakpoint	info;
-		};
+		/* breakpoint */
+		struct arch_hw_breakpoint	info;
 #endif
 	};
 	atomic64_t			prev_count;
@@ -802,6 +801,13 @@ struct perf_sample_data {
 	struct perf_raw_record		*raw;
 };
 
+static inline
+void perf_sample_data_init(struct perf_sample_data *data, u64 addr)
+{
+	data->addr = addr;
+	data->raw  = NULL;
+}
+
 extern void perf_output_sample(struct perf_output_handle *handle,
 			       struct perf_event_header *header,
 			       struct perf_sample_data *data,
@@ -858,6 +864,21 @@ extern int sysctl_perf_event_paranoid;
 extern int sysctl_perf_event_mlock;
 extern int sysctl_perf_event_sample_rate;
 
+static inline bool perf_paranoid_tracepoint_raw(void)
+{
+	return sysctl_perf_event_paranoid > -1;
+}
+
+static inline bool perf_paranoid_cpu(void)
+{
+	return sysctl_perf_event_paranoid > 0;
+}
+
+static inline bool perf_paranoid_kernel(void)
+{
+	return sysctl_perf_event_paranoid > 1;
+}
+
 extern void perf_event_init(void);
 extern void perf_tp_event(int event_id, u64 addr, u64 count, void *record, int entry_size);
 extern void perf_bp_event(struct perf_event *event, void *data);
diff --git a/kernel/hw_breakpoint.c b/kernel/hw_breakpoint.c
index 967e661..03808ed 100644
--- a/kernel/hw_breakpoint.c
+++ b/kernel/hw_breakpoint.c
@@ -413,17 +413,17 @@ EXPORT_SYMBOL_GPL(unregister_hw_breakpoint);
  *
  * @return a set of per_cpu pointers to perf events
  */
-struct perf_event **
+struct perf_event * __percpu *
 register_wide_hw_breakpoint(struct perf_event_attr *attr,
 			    perf_overflow_handler_t triggered)
 {
-	struct perf_event **cpu_events, **pevent, *bp;
+	struct perf_event * __percpu *cpu_events, **pevent, *bp;
 	long err;
 	int cpu;
 
 	cpu_events = alloc_percpu(typeof(*cpu_events));
 	if (!cpu_events)
-		return ERR_PTR(-ENOMEM);
+		return (void __percpu __force *)ERR_PTR(-ENOMEM);
 
 	get_online_cpus();
 	for_each_online_cpu(cpu) {
@@ -451,7 +451,7 @@ fail:
 	put_online_cpus();
 
 	free_percpu(cpu_events);
-	return ERR_PTR(err);
+	return (void __percpu __force *)ERR_PTR(err);
 }
 EXPORT_SYMBOL_GPL(register_wide_hw_breakpoint);
 
@@ -459,7 +459,7 @@ EXPORT_SYMBOL_GPL(register_wide_hw_breakpoint);
  * unregister_wide_hw_breakpoint - unregister a wide breakpoint in the kernel
  * @cpu_events: the per cpu set of events to unregister
  */
-void unregister_wide_hw_breakpoint(struct perf_event **cpu_events)
+void unregister_wide_hw_breakpoint(struct perf_event * __percpu *cpu_events)
 {
 	int cpu;
 	struct perf_event **pevent;
@@ -489,5 +489,4 @@ struct pmu perf_ops_bp = {
 	.enable		= arch_install_hw_breakpoint,
 	.disable	= arch_uninstall_hw_breakpoint,
 	.read		= hw_breakpoint_pmu_read,
-	.unthrottle	= hw_breakpoint_pmu_unthrottle
 };
diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index f40560b..4393b9e 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -56,21 +56,6 @@ static atomic_t nr_task_events __read_mostly;
  */
 int sysctl_perf_event_paranoid __read_mostly = 1;
 
-static inline bool perf_paranoid_tracepoint_raw(void)
-{
-	return sysctl_perf_event_paranoid > -1;
-}
-
-static inline bool perf_paranoid_cpu(void)
-{
-	return sysctl_perf_event_paranoid > 0;
-}
-
-static inline bool perf_paranoid_kernel(void)
-{
-	return sysctl_perf_event_paranoid > 1;
-}
-
 int sysctl_perf_event_mlock __read_mostly = 512; /* 'free' kb per user */
 
 /*
@@ -4123,8 +4108,7 @@ void __perf_sw_event(u32 event_id, u64 nr, int nmi,
 	if (rctx < 0)
 		return;
 
-	data.addr = addr;
-	data.raw  = NULL;
+	perf_sample_data_init(&data, addr);
 
 	do_perf_sw_event(PERF_TYPE_SOFTWARE, event_id, nr, nmi, &data, regs);
 
@@ -4169,11 +4153,10 @@ static enum hrtimer_restart perf_swevent_hrtimer(struct hrtimer *hrtimer)
 	struct perf_event *event;
 	u64 period;
 
-	event	= container_of(hrtimer, struct perf_event, hw.hrtimer);
+	event = container_of(hrtimer, struct perf_event, hw.hrtimer);
 	event->pmu->read(event);
 
-	data.addr = 0;
-	data.raw = NULL;
+	perf_sample_data_init(&data, 0);
 	data.period = event->hw.last_period;
 	regs = get_irq_regs();
 	/*
@@ -4337,17 +4320,15 @@ static const struct pmu perf_ops_task_clock = {
 void perf_tp_event(int event_id, u64 addr, u64 count, void *record,
 			  int entry_size)
 {
+	struct pt_regs *regs = get_irq_regs();
+	struct perf_sample_data data;
 	struct perf_raw_record raw = {
 		.size = entry_size,
 		.data = record,
 	};
 
-	struct perf_sample_data data = {
-		.addr = addr,
-		.raw = &raw,
-	};
-
-	struct pt_regs *regs = get_irq_regs();
+	perf_sample_data_init(&data, addr);
+	data.raw = &raw;
 
 	if (!regs)
 		regs = task_pt_regs(current);
@@ -4463,8 +4444,7 @@ void perf_bp_event(struct perf_event *bp, void *data)
 	struct perf_sample_data sample;
 	struct pt_regs *regs = data;
 
-	sample.raw = NULL;
-	sample.addr = bp->attr.bp_addr;
+	perf_sample_data_init(&sample, bp->attr.bp_addr);
 
 	if (!perf_exclude_event(bp, regs))
 		perf_swevent_add(bp, 1, 1, &sample, regs);
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index b520ec1..8e5ec5e 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -532,6 +532,14 @@ config LOCK_STAT
 
 	 For more details, see Documentation/lockstat.txt
 
+	 This also enables the lock events required by "perf lock",
+	 a subcommand of perf.
+	 If you want to use "perf lock", you also need to turn on
+	 CONFIG_EVENT_TRACING.
+
+	 CONFIG_LOCK_STAT defines "contended" and "acquired" lock events.
+	 (CONFIG_LOCKDEP defines "acquire" and "release" events.)
+
 config DEBUG_LOCKDEP
 	bool "Lock dependency engine debugging"
 	depends on DEBUG_KERNEL && LOCKDEP
diff --git a/samples/hw_breakpoint/data_breakpoint.c b/samples/hw_breakpoint/data_breakpoint.c
index c69cbe9..bd0f337 100644
--- a/samples/hw_breakpoint/data_breakpoint.c
+++ b/samples/hw_breakpoint/data_breakpoint.c
@@ -34,7 +34,7 @@
 #include <linux/perf_event.h>
 #include <linux/hw_breakpoint.h>
 
-struct perf_event **sample_hbp;
+struct perf_event * __percpu *sample_hbp;
 
 static char ksym_name[KSYM_NAME_LEN] = "pid_max";
 module_param_string(ksym, ksym_name, KSYM_NAME_LEN, S_IRUGO);
@@ -61,8 +61,8 @@ static int __init hw_break_module_init(void)
 	attr.bp_type = HW_BREAKPOINT_W | HW_BREAKPOINT_R;
 
 	sample_hbp = register_wide_hw_breakpoint(&attr, sample_hbp_handler);
-	if (IS_ERR(sample_hbp)) {
-		ret = PTR_ERR(sample_hbp);
+	if (IS_ERR((void __force *)sample_hbp)) {
+		ret = PTR_ERR((void __force *)sample_hbp);
 		goto fail;
 	}
 
diff --git a/tools/perf/Documentation/perf-lock.txt b/tools/perf/Documentation/perf-lock.txt
new file mode 100644
index 0000000..b317102
--- /dev/null
+++ b/tools/perf/Documentation/perf-lock.txt
@@ -0,0 +1,29 @@
+perf-lock(1)
+============
+
+NAME
+----
+perf-lock - Analyze lock events
+
+SYNOPSIS
+--------
+[verse]
+'perf lock' {record|report|trace}
+
+DESCRIPTION
+-----------
+You can analyze various lock behaviours
+and statistics with this 'perf lock' command.
+
+  'perf lock record <command>' records lock events
+  from the start to the end of <command>, and produces
+  the file "perf.data", which contains the tracing
+  results of the lock events.
+
+  'perf lock trace' shows raw lock events.
+
+  'perf lock report' reports statistical data.
+
+SEE ALSO
+--------
+linkperf:perf[1]
diff --git a/tools/perf/builtin-lock.c b/tools/perf/builtin-lock.c
index fb9ab2a..e12c844 100644
--- a/tools/perf/builtin-lock.c
+++ b/tools/perf/builtin-lock.c
@@ -460,6 +460,150 @@ process_raw_event(void *data, int cpu,
 		process_lock_release_event(data, event, cpu, timestamp, thread);
 }
 
+struct raw_event_queue {
+	u64			timestamp;
+	int			cpu;
+	void			*data;
+	struct thread		*thread;
+	struct list_head	list;
+};
+
+static LIST_HEAD(raw_event_head);
+
+#define FLUSH_PERIOD	(5 * NSEC_PER_SEC)
+
+static u64 flush_limit = ULLONG_MAX;
+static u64 last_flush = 0;
+struct raw_event_queue *last_inserted;
+
+static void flush_raw_event_queue(u64 limit)
+{
+	struct raw_event_queue *tmp, *iter;
+
+	list_for_each_entry_safe(iter, tmp, &raw_event_head, list) {
+		if (iter->timestamp > limit)
+			return;
+
+		if (iter == last_inserted)
+			last_inserted = NULL;
+
+		process_raw_event(iter->data, iter->cpu, iter->timestamp,
+				  iter->thread);
+
+		last_flush = iter->timestamp;
+		list_del(&iter->list);
+		free(iter->data);
+		free(iter);
+	}
+}
+
+static void __queue_raw_event_end(struct raw_event_queue *new)
+{
+	struct raw_event_queue *iter;
+
+	list_for_each_entry_reverse(iter, &raw_event_head, list) {
+		if (iter->timestamp < new->timestamp) {
+			list_add(&new->list, &iter->list);
+			return;
+		}
+	}
+
+	list_add(&new->list, &raw_event_head);
+}
+
+static void __queue_raw_event_before(struct raw_event_queue *new,
+				     struct raw_event_queue *iter)
+{
+	list_for_each_entry_continue_reverse(iter, &raw_event_head, list) {
+		if (iter->timestamp < new->timestamp) {
+			list_add(&new->list, &iter->list);
+			return;
+		}
+	}
+
+	list_add(&new->list, &raw_event_head);
+}
+
+static void __queue_raw_event_after(struct raw_event_queue *new,
+				     struct raw_event_queue *iter)
+{
+	list_for_each_entry_continue(iter, &raw_event_head, list) {
+		if (iter->timestamp > new->timestamp) {
+			list_add_tail(&new->list, &iter->list);
+			return;
+		}
+	}
+	list_add_tail(&new->list, &raw_event_head);
+}
+
+/* The queue is ordered by time */
+static void __queue_raw_event(struct raw_event_queue *new)
+{
+	if (!last_inserted) {
+		__queue_raw_event_end(new);
+		return;
+	}
+
+	/*
+	 * Most of the time the current event has a timestamp
+	 * very close to the last event inserted, unless we just switched
+	 * to another event buffer. Having a sorting based on a list and
+	 * on the last inserted event that is close to the current one is
+	 * probably more efficient than an rbtree based sorting.
+	 */
+	if (last_inserted->timestamp >= new->timestamp)
+		__queue_raw_event_before(new, last_inserted);
+	else
+		__queue_raw_event_after(new, last_inserted);
+}
+
+static void queue_raw_event(void *data, int raw_size, int cpu,
+			    u64 timestamp, struct thread *thread)
+{
+	struct raw_event_queue *new;
+
+	if (flush_limit == ULLONG_MAX)
+		flush_limit = timestamp + FLUSH_PERIOD;
+
+	if (timestamp < last_flush) {
+		printf("Warning: Timestamp below last timeslice flush\n");
+		return;
+	}
+
+	new = malloc(sizeof(*new));
+	if (!new)
+		die("Not enough memory\n");
+
+	new->timestamp = timestamp;
+	new->cpu = cpu;
+	new->thread = thread;
+
+	new->data = malloc(raw_size);
+	if (!new->data)
+		die("Not enough memory\n");
+
+	memcpy(new->data, data, raw_size);
+
+	__queue_raw_event(new);
+	last_inserted = new;
+
+	/*
+	 * We want to have a slice of events covering 2 * FLUSH_PERIOD.
+	 * If FLUSH_PERIOD is big enough, it ensures that every event that
+	 * occurred in the first half of the timeslice has been buffered and
+	 * that none are still pending (we need that because of the weakly
+	 * ordered event recording we have). Then once we reach the
+	 * 2 * FLUSH_PERIOD timeslice, we flush the first half to be gentle
+	 * with the memory (the second half can still get new events in the
+	 * middle, so wait another period to flush it)
+	 */
+	if (new->timestamp > flush_limit &&
+		new->timestamp - flush_limit > FLUSH_PERIOD) {
+		flush_limit += FLUSH_PERIOD;
+		flush_raw_event_queue(flush_limit);
+	}
+}
+
 static int process_sample_event(event_t *event, struct perf_session *session)
 {
 	struct thread *thread;
@@ -480,7 +624,7 @@ static int process_sample_event(event_t *event, struct perf_session *session)
 	if (profile_cpu != -1 && profile_cpu != (int) data.cpu)
 		return 0;
 
-	process_raw_event(data.raw_data, data.cpu, data.time, thread);
+	queue_raw_event(data.raw_data, data.raw_size, data.cpu, data.time, thread);
 
 	return 0;
 }
@@ -576,6 +720,7 @@ static void __cmd_report(void)
 	setup_pager();
 	select_key();
 	read_events();
+	flush_raw_event_queue(ULLONG_MAX);
 	sort_result();
 	print_result();
 }
@@ -608,7 +753,6 @@ static const char *record_args[] = {
 	"record",
 	"-a",
 	"-R",
-	"-M",
 	"-f",
 	"-m", "1024",
 	"-c", "1",
diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
index 5db687f..407041d 100644
--- a/tools/perf/builtin-trace.c
+++ b/tools/perf/builtin-trace.c
@@ -573,7 +573,8 @@ int cmd_trace(int argc, const char **argv, const char *prefix __used)
 
 	if (symbol__init() < 0)
 		return -1;
-	setup_pager();
+	if (!script_name)
+		setup_pager();
 
 	session = perf_session__new(input_name, O_RDONLY, 0);
 	if (session == NULL)
@@ -608,7 +609,6 @@ int cmd_trace(int argc, const char **argv, const char *prefix __used)
 			return -1;
 		}
 
-		perf_header__read(&session->header, input);
 		err = scripting_ops->generate_script("perf-trace");
 		goto out;
 	}
diff --git a/tools/perf/command-list.txt b/tools/perf/command-list.txt
index 9afcff2..db6ee94 100644
--- a/tools/perf/command-list.txt
+++ b/tools/perf/command-list.txt
@@ -18,3 +18,4 @@ perf-top			mainporcelain common
 perf-trace			mainporcelain common
 perf-probe			mainporcelain common
 perf-kmem			mainporcelain common
+perf-lock			mainporcelain common
diff --git a/tools/perf/perf-archive.sh b/tools/perf/perf-archive.sh
index 45fbe2f..910468e 100644
--- a/tools/perf/perf-archive.sh
+++ b/tools/perf/perf-archive.sh
@@ -9,8 +9,9 @@ fi
 
 DEBUGDIR=~/.debug/
 BUILDIDS=$(mktemp /tmp/perf-archive-buildids.XXXXXX)
+NOBUILDID=0000000000000000000000000000000000000000
 
-perf buildid-list -i $PERF_DATA --with-hits > $BUILDIDS
+perf buildid-list -i $PERF_DATA --with-hits | grep -v "^$NOBUILDID " > $BUILDIDS
 if [ ! -s $BUILDIDS ] ; then
 	echo "perf archive: no build-ids found"
 	rm -f $BUILDIDS
diff --git a/tools/perf/perf.h b/tools/perf/perf.h
index 75f941b..6fb379b 100644
--- a/tools/perf/perf.h
+++ b/tools/perf/perf.h
@@ -65,9 +65,7 @@
  * Use the __kuser_memory_barrier helper in the CPU helper page. See
  * arch/arm/kernel/entry-armv.S in the kernel source for details.
  */
-#define rmb()		asm volatile("mov r0, #0xffff0fff; mov lr, pc;" \
-				     "sub pc, r0, #95" ::: "r0", "lr", "cc", \
-				     "memory")
+#define rmb()		((void(*)(void))0xffff0fa0)()
 #define cpu_relax()	asm volatile("":::"memory")
 #endif
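
A note on the ARM rmb() change above: 0xffff0fa0 is the fixed address of
the __kuser_memory_barrier helper in the kernel's high vectors page (see
the comment above), so the new macro simply casts that address to a
function pointer and calls it; unlike the old hand-written assembly, this
compiles for both ARM and Thumb-2. The same call spelled out long-hand
(illustration only, not part of the patch):

	/* equivalent to the new rmb() definition */
	void (*kuser_memory_barrier)(void) = (void (*)(void))0xffff0fa0;
	kuser_memory_barrier();	/* executes the kuser helper barrier */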
 
diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
index c971e81..53181db 100644
--- a/tools/perf/util/probe-event.c
+++ b/tools/perf/util/probe-event.c
@@ -508,8 +508,8 @@ void show_perf_probe_events(void)
 	struct str_node *ent;
 
 	setup_pager();
-
 	memset(&pp, 0, sizeof(pp));
+
 	fd = open_kprobe_events(O_RDONLY, 0);
 	rawlist = get_trace_kprobe_event_rawlist(fd);
 	close(fd);
