linux-kernel.vger.kernel.org archive mirror
* [RFC PATCH 0/2] hw-breakpoints allocation constraints updates
@ 2010-04-12 23:01 Frederic Weisbecker
  2010-04-12 23:01 ` [RFC PATCH 1/2] hw-breakpoints: Separate constraint space for data and instruction breakpoints Frederic Weisbecker
  2010-04-12 23:01 ` [RFC PATCH 2/2] hw-breakpoints: Handle breakpoint weight in allocation constraints Frederic Weisbecker
  0 siblings, 2 replies; 5+ messages in thread
From: Frederic Weisbecker @ 2010-04-12 23:01 UTC (permalink / raw)
  To: LKML
  Cc: LKML, Frederic Weisbecker, Will Deacon, Mahesh Salgaonkar,
	K . Prasad, Paul Mundt, Benjamin Herrenschmidt, Paul Mackerras


Hi,

These patches extend the breakpoint allocation constraints to be
friendlier to non-x86 architectures.

They can keep the data and instruction breakpoint spaces separate, and
handle address ranges or other tricky cases that make a breakpoint use
more than one address register.

There is still one kind of constraint they can't handle, something
I've found in the ARMv7 implementation: linking a breakpoint to a
watchpoint. But if this need arises, I will be glad to add that
support.

Beware, it's only compile-tested for now.

Tell me what you think.

Thanks.


Frederic Weisbecker (2):
  hw-breakpoints: Separate constraint space for data and instruction
    breakpoints
  hw-breakpoints: Handle breakpoint weight in allocation constraints

 arch/Kconfig                  |   11 ++++
 arch/x86/Kconfig              |    1 +
 include/linux/hw_breakpoint.h |    9 ++-
 kernel/hw_breakpoint.c        |  129 ++++++++++++++++++++++++++++++-----------
 4 files changed, 112 insertions(+), 38 deletions(-)



* [RFC PATCH 1/2] hw-breakpoints: Separate constraint space for data and instruction breakpoints
  2010-04-12 23:01 [RFC PATCH 0/2] hw-breakpoints allocation constraints updates Frederic Weisbecker
@ 2010-04-12 23:01 ` Frederic Weisbecker
  2010-04-16 14:35   ` Will Deacon
  2010-04-12 23:01 ` [RFC PATCH 2/2] hw-breakpoints: Handle breakpoint weight in allocation constraints Frederic Weisbecker
  1 sibling, 1 reply; 5+ messages in thread
From: Frederic Weisbecker @ 2010-04-12 23:01 UTC (permalink / raw)
  To: LKML
  Cc: LKML, Frederic Weisbecker, Will Deacon, Mahesh Salgaonkar,
	K . Prasad, Paul Mundt, Benjamin Herrenschmidt, Paul Mackerras,
	Ingo Molnar

There are two prevailing ways for archs to implement hardware
breakpoints.

The first is to separate the breakpoint address space between data
and instruction breakpoints. We then typically have distinct
instruction address breakpoint registers and data address breakpoint
registers, with separate control registers for data and instruction
breakpoints as well. This is the case for PowerPC and ARM, for
example.

The second consists in merging the breakpoint address space for data
and instruction breakpoints. Address registers can host either an
instruction or a data address, and the access mode for the breakpoint
is defined in a control register. This is the case for x86.

This patch adds a new CONFIG_HAVE_MIXED_BREAKPOINTS_REGS option
that archs can select if they belong to the second case. Those archs
will have their slot allocation merged for instruction and data
breakpoints.

The others will have separate slot tracking for data and instruction
breakpoints.
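
For a rough picture (a condensed sketch of the code introduced further
below, not additional patch content): the constraint tables become
per-type, and the data type aliases the instruction type when the new
option is selected, so mixed-register archs keep a single shared pool.

	enum bp_type_idx {
		TYPE_INST	= 0,
	#ifdef CONFIG_HAVE_MIXED_BREAKPOINTS_REGS
		TYPE_DATA	= 0,	/* shared address registers: one pool */
	#else
		TYPE_DATA	= 1,	/* separate registers: two pools */
	#endif
		TYPE_MAX
	};

	/* Pinned cpu/task breakpoints are now accounted per type */
	static DEFINE_PER_CPU(unsigned int, nr_cpu_bp_pinned[TYPE_MAX]);
	static DEFINE_PER_CPU(unsigned int, nr_task_bp_pinned[TYPE_MAX][HBP_NUM]);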

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Cc: K. Prasad <prasad@linux.vnet.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ingo Molnar <mingo@elte.hu>
---
 arch/Kconfig                  |   11 +++++
 arch/x86/Kconfig              |    1 +
 include/linux/hw_breakpoint.h |    9 +++-
 kernel/hw_breakpoint.c        |   86 ++++++++++++++++++++++++++++-------------
 4 files changed, 77 insertions(+), 30 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index f06010f..acda512 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -137,6 +137,17 @@ config HAVE_HW_BREAKPOINT
 	bool
 	depends on PERF_EVENTS
 
+config HAVE_MIXED_BREAKPOINTS_REGS
+	bool
+	depends on HAVE_HW_BREAKPOINT
+	help
+	  Depending on the arch implementation of hardware breakpoints,
+	  some archs have separate registers for data and instruction
+	  breakpoint addresses, while others have mixed registers that
+	  store either kind and define the access type in a control
+	  register. Select this option if your arch implements
+	  breakpoints in the latter fashion.
+
 config HAVE_USER_RETURN_NOTIFIER
 	bool
 
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 7191b6e..7f40c72 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -53,6 +53,7 @@ config X86
 	select HAVE_KERNEL_LZMA
 	select HAVE_KERNEL_LZO
 	select HAVE_HW_BREAKPOINT
+	select HAVE_MIXED_BREAKPOINTS_REGS
 	select PERF_EVENTS
 	select ANON_INODES
 	select HAVE_ARCH_KMEMCHECK
diff --git a/include/linux/hw_breakpoint.h b/include/linux/hw_breakpoint.h
index c70d27a..42ed2b4 100644
--- a/include/linux/hw_breakpoint.h
+++ b/include/linux/hw_breakpoint.h
@@ -9,9 +9,12 @@ enum {
 };
 
 enum {
-	HW_BREAKPOINT_R = 1,
-	HW_BREAKPOINT_W = 2,
-	HW_BREAKPOINT_X = 4,
+	HW_BREAKPOINT_EMPTY	= 0,
+	HW_BREAKPOINT_R		= 1,
+	HW_BREAKPOINT_W		= 2,
+	HW_BREAKPOINT_RW	= HW_BREAKPOINT_R | HW_BREAKPOINT_W,
+	HW_BREAKPOINT_X		= 4,
+	HW_BREAKPOINT_INVALID   = HW_BREAKPOINT_RW | HW_BREAKPOINT_X,
 };
 
 #ifdef __KERNEL__
diff --git a/kernel/hw_breakpoint.c b/kernel/hw_breakpoint.c
index 03808ed..cb8c013 100644
--- a/kernel/hw_breakpoint.c
+++ b/kernel/hw_breakpoint.c
@@ -45,18 +45,28 @@
 
 #include <linux/hw_breakpoint.h>
 
+enum bp_type_idx {
+	TYPE_INST 	= 0,
+#ifdef CONFIG_HAVE_MIXED_BREAKPOINTS_REGS
+	TYPE_DATA	= 0,
+#else
+	TYPE_DATA	= 1,
+#endif
+	TYPE_MAX
+};
+
 /*
  * Constraints data
  */
 
 /* Number of pinned cpu breakpoints in a cpu */
-static DEFINE_PER_CPU(unsigned int, nr_cpu_bp_pinned);
+static DEFINE_PER_CPU(unsigned int, nr_cpu_bp_pinned[TYPE_MAX]);
 
 /* Number of pinned task breakpoints in a cpu */
-static DEFINE_PER_CPU(unsigned int, nr_task_bp_pinned[HBP_NUM]);
+static DEFINE_PER_CPU(unsigned int, nr_task_bp_pinned[TYPE_MAX][HBP_NUM]);
 
 /* Number of non-pinned cpu/task breakpoints in a cpu */
-static DEFINE_PER_CPU(unsigned int, nr_bp_flexible);
+static DEFINE_PER_CPU(unsigned int, nr_bp_flexible[TYPE_MAX]);
 
 /* Gather the number of total pinned and un-pinned bp in a cpuset */
 struct bp_busy_slots {
@@ -67,14 +77,22 @@ struct bp_busy_slots {
 /* Serialize accesses to the above constraints */
 static DEFINE_MUTEX(nr_bp_mutex);
 
+static inline enum bp_type_idx find_slot_idx(struct perf_event *bp)
+{
+	if (bp->attr.bp_type & HW_BREAKPOINT_RW)
+		return TYPE_DATA;
+
+	return TYPE_INST;
+}
+
 /*
  * Report the maximum number of pinned breakpoints a task
  * have in this cpu
  */
-static unsigned int max_task_bp_pinned(int cpu)
+static unsigned int max_task_bp_pinned(int cpu, enum bp_type_idx type)
 {
 	int i;
-	unsigned int *tsk_pinned = per_cpu(nr_task_bp_pinned, cpu);
+	unsigned int *tsk_pinned = per_cpu(nr_task_bp_pinned[type], cpu);
 
 	for (i = HBP_NUM -1; i >= 0; i--) {
 		if (tsk_pinned[i] > 0)
@@ -84,7 +102,7 @@ static unsigned int max_task_bp_pinned(int cpu)
 	return 0;
 }
 
-static int task_bp_pinned(struct task_struct *tsk)
+static int task_bp_pinned(struct task_struct *tsk, enum bp_type_idx type)
 {
 	struct perf_event_context *ctx = tsk->perf_event_ctxp;
 	struct list_head *list;
@@ -105,7 +123,8 @@ static int task_bp_pinned(struct task_struct *tsk)
 	 */
 	list_for_each_entry(bp, list, event_entry) {
 		if (bp->attr.type == PERF_TYPE_BREAKPOINT)
-			count++;
+			if (find_slot_idx(bp) == type)
+				count++;
 	}
 
 	raw_spin_unlock_irqrestore(&ctx->lock, flags);
@@ -118,18 +137,19 @@ static int task_bp_pinned(struct task_struct *tsk)
  * a given cpu (cpu > -1) or in all of them (cpu = -1).
  */
 static void
-fetch_bp_busy_slots(struct bp_busy_slots *slots, struct perf_event *bp)
+fetch_bp_busy_slots(struct bp_busy_slots *slots, struct perf_event *bp,
+		    enum bp_type_idx type)
 {
 	int cpu = bp->cpu;
 	struct task_struct *tsk = bp->ctx->task;
 
 	if (cpu >= 0) {
-		slots->pinned = per_cpu(nr_cpu_bp_pinned, cpu);
+		slots->pinned = per_cpu(nr_cpu_bp_pinned[type], cpu);
 		if (!tsk)
-			slots->pinned += max_task_bp_pinned(cpu);
+			slots->pinned += max_task_bp_pinned(cpu, type);
 		else
-			slots->pinned += task_bp_pinned(tsk);
-		slots->flexible = per_cpu(nr_bp_flexible, cpu);
+			slots->pinned += task_bp_pinned(tsk, type);
+		slots->flexible = per_cpu(nr_bp_flexible[type], cpu);
 
 		return;
 	}
@@ -137,16 +157,16 @@ fetch_bp_busy_slots(struct bp_busy_slots *slots, struct perf_event *bp)
 	for_each_online_cpu(cpu) {
 		unsigned int nr;
 
-		nr = per_cpu(nr_cpu_bp_pinned, cpu);
+		nr = per_cpu(nr_cpu_bp_pinned[type], cpu);
 		if (!tsk)
-			nr += max_task_bp_pinned(cpu);
+			nr += max_task_bp_pinned(cpu, type);
 		else
-			nr += task_bp_pinned(tsk);
+			nr += task_bp_pinned(tsk, type);
 
 		if (nr > slots->pinned)
 			slots->pinned = nr;
 
-		nr = per_cpu(nr_bp_flexible, cpu);
+		nr = per_cpu(nr_bp_flexible[type], cpu);
 
 		if (nr > slots->flexible)
 			slots->flexible = nr;
@@ -156,14 +176,15 @@ fetch_bp_busy_slots(struct bp_busy_slots *slots, struct perf_event *bp)
 /*
  * Add a pinned breakpoint for the given task in our constraint table
  */
-static void toggle_bp_task_slot(struct task_struct *tsk, int cpu, bool enable)
+static void toggle_bp_task_slot(struct task_struct *tsk, int cpu, bool enable,
+				enum bp_type_idx type)
 {
 	unsigned int *tsk_pinned;
 	int count = 0;
 
-	count = task_bp_pinned(tsk);
+	count = task_bp_pinned(tsk, type);
 
-	tsk_pinned = per_cpu(nr_task_bp_pinned, cpu);
+	tsk_pinned = per_cpu(nr_task_bp_pinned[type], cpu);
 	if (enable) {
 		tsk_pinned[count]++;
 		if (count > 0)
@@ -178,7 +199,8 @@ static void toggle_bp_task_slot(struct task_struct *tsk, int cpu, bool enable)
 /*
  * Add/remove the given breakpoint in our constraint table
  */
-static void toggle_bp_slot(struct perf_event *bp, bool enable)
+static void
+toggle_bp_slot(struct perf_event *bp, bool enable, enum bp_type_idx type)
 {
 	int cpu = bp->cpu;
 	struct task_struct *tsk = bp->ctx->task;
@@ -186,20 +208,20 @@ static void toggle_bp_slot(struct perf_event *bp, bool enable)
 	/* Pinned counter task profiling */
 	if (tsk) {
 		if (cpu >= 0) {
-			toggle_bp_task_slot(tsk, cpu, enable);
+			toggle_bp_task_slot(tsk, cpu, enable, type);
 			return;
 		}
 
 		for_each_online_cpu(cpu)
-			toggle_bp_task_slot(tsk, cpu, enable);
+			toggle_bp_task_slot(tsk, cpu, enable, type);
 		return;
 	}
 
 	/* Pinned counter cpu profiling */
 	if (enable)
-		per_cpu(nr_cpu_bp_pinned, bp->cpu)++;
+		per_cpu(nr_cpu_bp_pinned[type], bp->cpu)++;
 	else
-		per_cpu(nr_cpu_bp_pinned, bp->cpu)--;
+		per_cpu(nr_cpu_bp_pinned[type], bp->cpu)--;
 }
 
 /*
@@ -246,14 +268,21 @@ static void toggle_bp_slot(struct perf_event *bp, bool enable)
 static int __reserve_bp_slot(struct perf_event *bp)
 {
 	struct bp_busy_slots slots = {0};
+	enum bp_type_idx type;
 
-	fetch_bp_busy_slots(&slots, bp);
+	/* Basic checks */
+	if (bp->attr.bp_type == HW_BREAKPOINT_EMPTY ||
+	    bp->attr.bp_type == HW_BREAKPOINT_INVALID)
+		return -EINVAL;
+
+	type = find_slot_idx(bp);
+	fetch_bp_busy_slots(&slots, bp, type);
 
 	/* Flexible counters need to keep at least one slot */
 	if (slots.pinned + (!!slots.flexible) == HBP_NUM)
 		return -ENOSPC;
 
-	toggle_bp_slot(bp, true);
+	toggle_bp_slot(bp, true, type);
 
 	return 0;
 }
@@ -273,7 +302,10 @@ int reserve_bp_slot(struct perf_event *bp)
 
 static void __release_bp_slot(struct perf_event *bp)
 {
-	toggle_bp_slot(bp, false);
+	enum bp_type_idx type;
+
+	type = find_slot_idx(bp);
+	toggle_bp_slot(bp, false, type);
 }
 
 void release_bp_slot(struct perf_event *bp)
-- 
1.6.2.3



* [RFC PATCH 2/2] hw-breakpoints: Handle breakpoint weight in allocation constraints
  2010-04-12 23:01 [RFC PATCH 0/2] hw-breakpoints allocation constraints updates Frederic Weisbecker
  2010-04-12 23:01 ` [RFC PATCH 1/2] hw-breakpoints: Separate constraint space for data and instruction breakpoints Frederic Weisbecker
@ 2010-04-12 23:01 ` Frederic Weisbecker
  1 sibling, 0 replies; 5+ messages in thread
From: Frederic Weisbecker @ 2010-04-12 23:01 UTC (permalink / raw)
  To: LKML
  Cc: LKML, Frederic Weisbecker, Will Deacon, Mahesh Salgaonkar,
	K . Prasad, Paul Mundt, Benjamin Herrenschmidt, Paul Mackerras,
	Ingo Molnar

Depending on their nature and on what an arch supports, breakpoints
may consume more than one address register. For example, a simple
absolute address match usually requires only one address register,
but an address range match may consume two.

Currently our slot allocation constraints, which reflect the arch's
limited resources, always consider that a breakpoint consumes
one slot.

This patch provides a way for archs to tell us the weight of a
breakpoint through a new hw_breakpoint_weight() helper. This weight
is counted against the generic allocation constraints instead of
a constant value of one.
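
As an illustration (hypothetical arch code, not part of this patch), an
arch whose address-range breakpoints consume two address registers could
override the weak default like this:

	/*
	 * Hypothetical arch override of the new helper: a plain address
	 * match costs one slot, while anything wider than a single
	 * 8-byte match is assumed to need an address range, i.e. two
	 * registers. The bp_len threshold is an assumption made for the
	 * sake of the example.
	 */
	int hw_breakpoint_weight(struct perf_event *bp)
	{
		if (bp->attr.bp_len > HW_BREAKPOINT_LEN_8)
			return 2;

		return 1;
	}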

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Cc: K. Prasad <prasad@linux.vnet.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ingo Molnar <mingo@elte.hu>
---
 kernel/hw_breakpoint.c |   63 ++++++++++++++++++++++++++++++++++-------------
 1 files changed, 45 insertions(+), 18 deletions(-)

diff --git a/kernel/hw_breakpoint.c b/kernel/hw_breakpoint.c
index cb8c013..34b7655 100644
--- a/kernel/hw_breakpoint.c
+++ b/kernel/hw_breakpoint.c
@@ -77,6 +77,11 @@ struct bp_busy_slots {
 /* Serialize accesses to the above constraints */
 static DEFINE_MUTEX(nr_bp_mutex);
 
+__weak int hw_breakpoint_weight(struct perf_event *bp)
+{
+	return 1;
+}
+
 static inline enum bp_type_idx find_slot_idx(struct perf_event *bp)
 {
 	if (bp->attr.bp_type & HW_BREAKPOINT_RW)
@@ -124,7 +129,7 @@ static int task_bp_pinned(struct task_struct *tsk, enum bp_type_idx type)
 	list_for_each_entry(bp, list, event_entry) {
 		if (bp->attr.type == PERF_TYPE_BREAKPOINT)
 			if (find_slot_idx(bp) == type)
-				count++;
+				count += hw_breakpoint_weight(bp);
 	}
 
 	raw_spin_unlock_irqrestore(&ctx->lock, flags);
@@ -174,25 +179,40 @@ fetch_bp_busy_slots(struct bp_busy_slots *slots, struct perf_event *bp,
 }
 
 /*
+ * For now, continue to consider flexible as pinned, until we can
+ * ensure no flexible event can ever be scheduled before a pinned event
+ * in a same cpu.
+ */
+static void
+fetch_this_slot(struct bp_busy_slots *slots, int weight)
+{
+	slots->pinned += weight;
+}
+
+/*
  * Add a pinned breakpoint for the given task in our constraint table
  */
 static void toggle_bp_task_slot(struct task_struct *tsk, int cpu, bool enable,
-				enum bp_type_idx type)
+				enum bp_type_idx type, int weight)
 {
 	unsigned int *tsk_pinned;
-	int count = 0;
+	int old_count = 0;
+	int old_idx = 0;
+	int idx = 0;
 
-	count = task_bp_pinned(tsk, type);
+	old_count = task_bp_pinned(tsk, type);
+	old_idx = old_count - 1;
+	idx = old_idx + weight;
 
 	tsk_pinned = per_cpu(nr_task_bp_pinned[type], cpu);
 	if (enable) {
-		tsk_pinned[count]++;
-		if (count > 0)
-			tsk_pinned[count-1]--;
+		tsk_pinned[idx]++;
+		if (old_count > 0)
+			tsk_pinned[old_idx]--;
 	} else {
-		tsk_pinned[count]--;
-		if (count > 0)
-			tsk_pinned[count-1]++;
+		tsk_pinned[idx]--;
+		if (old_count > 0)
+			tsk_pinned[old_idx]++;
 	}
 }
 
@@ -200,7 +220,8 @@ static void toggle_bp_task_slot(struct task_struct *tsk, int cpu, bool enable,
  * Add/remove the given breakpoint in our constraint table
  */
 static void
-toggle_bp_slot(struct perf_event *bp, bool enable, enum bp_type_idx type)
+toggle_bp_slot(struct perf_event *bp, bool enable, enum bp_type_idx type,
+	       int weight)
 {
 	int cpu = bp->cpu;
 	struct task_struct *tsk = bp->ctx->task;
@@ -208,20 +229,20 @@ toggle_bp_slot(struct perf_event *bp, bool enable, enum bp_type_idx type)
 	/* Pinned counter task profiling */
 	if (tsk) {
 		if (cpu >= 0) {
-			toggle_bp_task_slot(tsk, cpu, enable, type);
+			toggle_bp_task_slot(tsk, cpu, enable, type, weight);
 			return;
 		}
 
 		for_each_online_cpu(cpu)
-			toggle_bp_task_slot(tsk, cpu, enable, type);
+			toggle_bp_task_slot(tsk, cpu, enable, type, weight);
 		return;
 	}
 
 	/* Pinned counter cpu profiling */
 	if (enable)
-		per_cpu(nr_cpu_bp_pinned[type], bp->cpu)++;
+		per_cpu(nr_cpu_bp_pinned[type], bp->cpu) += weight;
 	else
-		per_cpu(nr_cpu_bp_pinned[type], bp->cpu)--;
+		per_cpu(nr_cpu_bp_pinned[type], bp->cpu) -= weight;
 }
 
 /*
@@ -269,6 +290,7 @@ static int __reserve_bp_slot(struct perf_event *bp)
 {
 	struct bp_busy_slots slots = {0};
 	enum bp_type_idx type;
+	int weight;
 
 	/* Basic checks */
 	if (bp->attr.bp_type == HW_BREAKPOINT_EMPTY ||
@@ -276,13 +298,16 @@ static int __reserve_bp_slot(struct perf_event *bp)
 		return -EINVAL;
 
 	type = find_slot_idx(bp);
+	weight = hw_breakpoint_weight(bp);
+
 	fetch_bp_busy_slots(&slots, bp, type);
+	fetch_this_slot(&slots, weight);
 
 	/* Flexible counters need to keep at least one slot */
-	if (slots.pinned + (!!slots.flexible) == HBP_NUM)
+	if (slots.pinned + (!!slots.flexible) > HBP_NUM)
 		return -ENOSPC;
 
-	toggle_bp_slot(bp, true, type);
+	toggle_bp_slot(bp, true, type, weight);
 
 	return 0;
 }
@@ -303,9 +328,11 @@ int reserve_bp_slot(struct perf_event *bp)
 static void __release_bp_slot(struct perf_event *bp)
 {
 	enum bp_type_idx type;
+	int weight;
 
 	type = find_slot_idx(bp);
-	toggle_bp_slot(bp, false, type);
+	weight = hw_breakpoint_weight(bp);
+	toggle_bp_slot(bp, false, type, weight);
 }
 
 void release_bp_slot(struct perf_event *bp)
-- 
1.6.2.3



* Re: [RFC PATCH 1/2] hw-breakpoints: Separate constraint space for data and instruction breakpoints
  2010-04-12 23:01 ` [RFC PATCH 1/2] hw-breakpoints: Separate constraint space for data and instruction breakpoints Frederic Weisbecker
@ 2010-04-16 14:35   ` Will Deacon
  2010-04-16 14:56     ` Frederic Weisbecker
  0 siblings, 1 reply; 5+ messages in thread
From: Will Deacon @ 2010-04-16 14:35 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Mahesh Salgaonkar, K . Prasad, Paul Mundt,
	Benjamin Herrenschmidt, Paul Mackerras, Ingo Molnar

Hello Frederic,

Thanks for this.

On Tue, 2010-04-13 at 00:01 +0100, Frederic Weisbecker wrote:
> There are two prevailing ways for archs to implement hardware
> breakpoints.
> 
> The first is to separate the breakpoint address space between data
> and instruction breakpoints. We then typically have distinct
> instruction address breakpoint registers and data address breakpoint
> registers, with separate control registers for data and instruction
> breakpoints as well. This is the case for PowerPC and ARM, for
> example.
> 
> The second consists in merging the breakpoint address space for data
> and instruction breakpoints. Address registers can host either an
> instruction or a data address, and the access mode for the breakpoint
> is defined in a control register. This is the case for x86.
> 
> This patch adds a new CONFIG_HAVE_MIXED_BREAKPOINTS_REGS option
> that archs can select if they belong to the second case. Those archs
> will have their slot allocation merged for instruction and data
> breakpoints.
> 
> The others will have separate slot tracking for data and instruction
> breakpoints.

This looks useful for supporting architectures with separate
data/instruction breakpoints.

A couple of points:

1.) Will this affect the arch backend at all?

2.) On ARM, it is possible to have different numbers of breakpoint registers
    and watchpoint registers [which we do not know until runtime]. This
    means that we have to define HBP_NUM as a potential upper bound,
    which seems a bit wasteful. Perhaps there could be a mechanism to 
    register the available resources at runtime?

Thanks,

Will





* Re: [RFC PATCH 1/2] hw-breakpoints: Separate constraint space for data and instruction breakpoints
  2010-04-16 14:35   ` Will Deacon
@ 2010-04-16 14:56     ` Frederic Weisbecker
  0 siblings, 0 replies; 5+ messages in thread
From: Frederic Weisbecker @ 2010-04-16 14:56 UTC (permalink / raw)
  To: Will Deacon
  Cc: LKML, Mahesh Salgaonkar, K . Prasad, Paul Mundt,
	Benjamin Herrenschmidt, Paul Mackerras, Ingo Molnar

On Fri, Apr 16, 2010 at 03:35:30PM +0100, Will Deacon wrote:
> Hello Frederic,
> 
> Thanks for this.
> 
> On Tue, 2010-04-13 at 00:01 +0100, Frederic Weisbecker wrote:
> > There are two prevailing ways for archs to implement hardware
> > breakpoints.
> > 
> > The first is to separate the breakpoint address space between data
> > and instruction breakpoints. We then typically have distinct
> > instruction address breakpoint registers and data address breakpoint
> > registers, with separate control registers for data and instruction
> > breakpoints as well. This is the case for PowerPC and ARM, for
> > example.
> > 
> > The second consists in merging the breakpoint address space for data
> > and instruction breakpoints. Address registers can host either an
> > instruction or a data address, and the access mode for the breakpoint
> > is defined in a control register. This is the case for x86.
> > 
> > This patch adds a new CONFIG_HAVE_MIXED_BREAKPOINTS_REGS option
> > that archs can select if they belong to the second case. Those archs
> > will have their slot allocation merged for instruction and data
> > breakpoints.
> > 
> > The others will have separate slot tracking for data and instruction
> > breakpoints.
> 
> This looks useful for supporting architectures with separate
> data/instruction breakpoints.
> 
> A couple of points:
> 
> 1.) Will this affect the arch backend at all?


No change is required from the backend. In fact this config probably
only fits x86, unless there are other archs that have shared
address registers... Archs that only support data or instruction
breakpoints should also select it, but only to save a bit of memory;
there would be no functional changes.


 
> 2.) On ARM, it is possible to have different numbers of breakpoint registers
>     and watchpoint registers [which we do not know until runtime]. This
>     means that we have to define HBP_NUM as a potential upper bound,
>     which seems a bit wasteful. Perhaps there could be a mechanism to 
>     register the available resources at runtime?


Ah, that's interesting. OK, I will update my patch to integrate that.



Thread overview: 5+ messages
2010-04-12 23:01 [RFC PATCH 0/2] hw-breakpoints allocation constraints updates Frederic Weisbecker
2010-04-12 23:01 ` [RFC PATCH 1/2] hw-breakpoints: Separate constraint space for data and instruction breakpoints Frederic Weisbecker
2010-04-16 14:35   ` Will Deacon
2010-04-16 14:56     ` Frederic Weisbecker
2010-04-12 23:01 ` [RFC PATCH 2/2] hw-breakpoints: Handle breakpoint weight in allocation constraints Frederic Weisbecker
