* [PATCH v3 0/9] powerpc/watchpoint: Enable 2nd DAWR on baremetal and powervm
@ 2020-07-08  4:50 Ravi Bangoria
  2020-07-08  4:50 ` [PATCH v3 1/9] powerpc/watchpoint: Fix 512 byte boundary limit Ravi Bangoria
                   ` (8 more replies)
  0 siblings, 9 replies; 15+ messages in thread
From: Ravi Bangoria @ 2020-07-08  4:50 UTC (permalink / raw)
  To: mpe, mikey
  Cc: ravi.bangoria, apopple, paulus, npiggin, christophe.leroy,
	naveen.n.rao, peterz, jolsa, oleg, fweisbec, mingo, pedromfc,
	miltonm, linuxppc-dev, linux-kernel

The previous series[1] added the basic infrastructure to support more
than one watchpoint on Book3S powerpc. This series actually enables the
2nd DAWR on baremetal and powervm. KVM guests are still not supported.

v2: https://lore.kernel.org/linuxppc-dev/20200604033443.70591-1-ravi.bangoria@linux.ibm.com/

v2->v3:
 - Patch #2 is new. It fixes an issue with the DAWR exception constraint.
 - Renamed the dt cpu feature from dawr1 to debug-facilities-v31, as
   suggested by Nick Piggin.
 - Rebased to powerpc/next

[1]: https://lore.kernel.org/linuxppc-dev/20200514111741.97993-1-ravi.bangoria@linux.ibm.com/

Ravi Bangoria (9):
  powerpc/watchpoint: Fix 512 byte boundary limit
  powerpc/watchpoint: Fix DAWR exception constraint
  powerpc/watchpoint: Enable watchpoint functionality on power10 guest
  powerpc/dt_cpu_ftrs: Add feature for 2nd DAWR
  powerpc/watchpoint: Set CPU_FTR_DAWR1 based on pa-features bit
  powerpc/watchpoint: Rename current H_SET_MODE DAWR macro
  powerpc/watchpoint: Guest support for 2nd DAWR hcall
  powerpc/watchpoint: Return available watchpoints dynamically
  powerpc/watchpoint: Remove 512 byte boundary

 arch/powerpc/include/asm/cputable.h       | 13 ++-
 arch/powerpc/include/asm/hvcall.h         |  3 +-
 arch/powerpc/include/asm/hw_breakpoint.h  |  5 +-
 arch/powerpc/include/asm/machdep.h        |  2 +-
 arch/powerpc/include/asm/plpar_wrappers.h |  7 +-
 arch/powerpc/kernel/dawr.c                |  2 +-
 arch/powerpc/kernel/dt_cpu_ftrs.c         |  7 ++
 arch/powerpc/kernel/hw_breakpoint.c       | 98 +++++++++++++++--------
 arch/powerpc/kernel/prom.c                |  2 +
 arch/powerpc/kvm/book3s_hv.c              |  2 +-
 arch/powerpc/platforms/pseries/setup.c    |  7 +-
 11 files changed, 103 insertions(+), 45 deletions(-)

-- 
2.26.2



* [PATCH v3 1/9] powerpc/watchpoint: Fix 512 byte boundary limit
  2020-07-08  4:50 [PATCH v3 0/9] powerpc/watchpoint: Enable 2nd DAWR on baremetal and powervm Ravi Bangoria
@ 2020-07-08  4:50 ` Ravi Bangoria
  2020-07-08  7:44   ` Jordan Niethe
  2020-07-15  1:01   ` Jordan Niethe
  2020-07-08  4:50 ` [PATCH v3 2/9] powerpc/watchpoint: Fix DAWR exception constraint Ravi Bangoria
                   ` (7 subsequent siblings)
  8 siblings, 2 replies; 15+ messages in thread
From: Ravi Bangoria @ 2020-07-08  4:50 UTC (permalink / raw)
  To: mpe, mikey
  Cc: ravi.bangoria, apopple, paulus, npiggin, christophe.leroy,
	naveen.n.rao, peterz, jolsa, oleg, fweisbec, mingo, pedromfc,
	miltonm, linuxppc-dev, linux-kernel

Milton Miller reported that we are aligning the start and end address
to the wrong size, SZ_512M. It should be SZ_512. Fix that.

While doing this change I also found a case where the ALIGN()
comparison fails. Within a given aligned range, ALIGN() of two
addresses does not match when the start address points to the first
byte and the end address points to any other byte except the first
one. That is not true for ALIGN_DOWN(): ALIGN_DOWN() of any two
addresses within that range will always point to the first byte. So
use ALIGN_DOWN() instead of ALIGN().
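
A small worked example may help; this is illustrative only (not part of
the patch) and just uses the standard kernel ALIGN()/ALIGN_DOWN()
round-up/round-down semantics:

	/* Watch range [512, 520), i.e. 8 bytes inside one 512-byte region. */
	unsigned long start_addr = 512, end_addr = 520;

	ALIGN(start_addr, SZ_512);		/* = 512  (already aligned)     */
	ALIGN(end_addr - 1, SZ_512);		/* = 1024 (519 rounds up)       */
						/* 512 != 1024 -> bogus -EINVAL */

	ALIGN_DOWN(start_addr, SZ_512);		/* = 512 */
	ALIGN_DOWN(end_addr - 1, SZ_512);	/* = 512 */
						/* 512 == 512 -> range allowed  */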

Fixes: e68ef121c1f4 ("powerpc/watchpoint: Use builtin ALIGN*() macros")
Reported-by: Milton Miller <miltonm@us.ibm.com>
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
---
 arch/powerpc/kernel/hw_breakpoint.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/hw_breakpoint.c b/arch/powerpc/kernel/hw_breakpoint.c
index 0000daf0e1da..031e6defc08e 100644
--- a/arch/powerpc/kernel/hw_breakpoint.c
+++ b/arch/powerpc/kernel/hw_breakpoint.c
@@ -419,7 +419,7 @@ static int hw_breakpoint_validate_len(struct arch_hw_breakpoint *hw)
 	if (dawr_enabled()) {
 		max_len = DAWR_MAX_LEN;
 		/* DAWR region can't cross 512 bytes boundary */
-		if (ALIGN(start_addr, SZ_512M) != ALIGN(end_addr - 1, SZ_512M))
+		if (ALIGN_DOWN(start_addr, SZ_512) != ALIGN_DOWN(end_addr - 1, SZ_512))
 			return -EINVAL;
 	} else if (IS_ENABLED(CONFIG_PPC_8xx)) {
 		/* 8xx can setup a range without limitation */
-- 
2.26.2



* [PATCH v3 2/9] powerpc/watchpoint: Fix DAWR exception constraint
  2020-07-08  4:50 [PATCH v3 0/9] powerpc/watchpoint: Enable 2nd DAWR on baremetal and powervm Ravi Bangoria
  2020-07-08  4:50 ` [PATCH v3 1/9] powerpc/watchpoint: Fix 512 byte boundary limit Ravi Bangoria
@ 2020-07-08  4:50 ` Ravi Bangoria
  2020-07-15  2:19   ` Jordan Niethe
  2020-07-08  4:50 ` [PATCH v3 3/9] powerpc/watchpoint: Enable watchpoint functionality on power10 guest Ravi Bangoria
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 15+ messages in thread
From: Ravi Bangoria @ 2020-07-08  4:50 UTC (permalink / raw)
  To: mpe, mikey
  Cc: ravi.bangoria, apopple, paulus, npiggin, christophe.leroy,
	naveen.n.rao, peterz, jolsa, oleg, fweisbec, mingo, pedromfc,
	miltonm, linuxppc-dev, linux-kernel

Pedro Miraglia Franco de Carvalho noticed that on p8, the DAR value is
inconsistent with different types of load/store. For byte, word, etc.
load/stores, DAR is set to the address of the first byte of overlap
between the watch range and the real access. But for quadword load/
stores it is set to the address of the first byte of the real access.
This issue has been fixed in p10: in p10 (ISA 3.1), DAR is always set
to the address of the first byte of overlap. Commit 27985b2a640e
("powerpc/watchpoint: Don't ignore extraneous exceptions blindly")
wrongly assumes that DAR is set to the address of the first byte of
overlap for all load/stores on p8 as well. Fix that. With the fix, we
now rely on the 'ea' provided by analyse_instr(). If analyse_instr()
fails, generate the event unconditionally on p8, and on p10 generate
the event only if DAR is within a DAWR range.

Note: 8xx is not affected.
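
As a hypothetical illustration of why DAR alone is not enough on p8
(addresses invented, names as in the patch below):

	/*
	 * Watch range [0x1004, 0x100c): info->address = 0x1004, info->len = 8.
	 * An 8-byte store to [0x1000, 0x1008) overlaps it.
	 *
	 * On p8, DAR = 0x1004 (first byte of overlap) for such an access,
	 * but for a quadword access DAR would instead be the start of the
	 * real access, so [DAR, DAR + size) is not a consistent stand-in
	 * for the real access range.
	 *
	 * With this patch, analyse_instr() gives ea = 0x1000 and size = 8:
	 *   ea_user_range_overlaps(ea, size, info)
	 *     = (0x1000 < 0x1004 + 8) && (0x1000 + 8 > 0x1004) -> true
	 */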

Fixes: 27985b2a640e ("powerpc/watchpoint: Don't ignore extraneous exceptions blindly")
Fixes: 74c6881019b7 ("powerpc/watchpoint: Prepare handler to handle more than one watchpoint")
Reported-by: Pedro Miraglia Franco de Carvalho <pedromfc@br.ibm.com>
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
---
 arch/powerpc/kernel/hw_breakpoint.c | 93 +++++++++++++++++++----------
 1 file changed, 63 insertions(+), 30 deletions(-)

diff --git a/arch/powerpc/kernel/hw_breakpoint.c b/arch/powerpc/kernel/hw_breakpoint.c
index 031e6defc08e..7a66c370a105 100644
--- a/arch/powerpc/kernel/hw_breakpoint.c
+++ b/arch/powerpc/kernel/hw_breakpoint.c
@@ -498,11 +498,11 @@ static bool dar_in_user_range(unsigned long dar, struct arch_hw_breakpoint *info
 	return ((info->address <= dar) && (dar - info->address < info->len));
 }
 
-static bool dar_user_range_overlaps(unsigned long dar, int size,
-				    struct arch_hw_breakpoint *info)
+static bool ea_user_range_overlaps(unsigned long ea, int size,
+				   struct arch_hw_breakpoint *info)
 {
-	return ((dar < info->address + info->len) &&
-		(dar + size > info->address));
+	return ((ea < info->address + info->len) &&
+		(ea + size > info->address));
 }
 
 static bool dar_in_hw_range(unsigned long dar, struct arch_hw_breakpoint *info)
@@ -515,20 +515,22 @@ static bool dar_in_hw_range(unsigned long dar, struct arch_hw_breakpoint *info)
 	return ((hw_start_addr <= dar) && (hw_end_addr > dar));
 }
 
-static bool dar_hw_range_overlaps(unsigned long dar, int size,
-				  struct arch_hw_breakpoint *info)
+static bool ea_hw_range_overlaps(unsigned long ea, int size,
+				 struct arch_hw_breakpoint *info)
 {
 	unsigned long hw_start_addr, hw_end_addr;
 
 	hw_start_addr = ALIGN_DOWN(info->address, HW_BREAKPOINT_SIZE);
 	hw_end_addr = ALIGN(info->address + info->len, HW_BREAKPOINT_SIZE);
 
-	return ((dar < hw_end_addr) && (dar + size > hw_start_addr));
+	return ((ea < hw_end_addr) && (ea + size > hw_start_addr));
 }
 
 /*
  * If hw has multiple DAWR registers, we also need to check all
  * dawrx constraint bits to confirm this is _really_ a valid event.
+ * If type is UNKNOWN, but privilege level matches, consider it as
+ * a positive match.
  */
 static bool check_dawrx_constraints(struct pt_regs *regs, int type,
 				    struct arch_hw_breakpoint *info)
@@ -536,7 +538,12 @@ static bool check_dawrx_constraints(struct pt_regs *regs, int type,
 	if (OP_IS_LOAD(type) && !(info->type & HW_BRK_TYPE_READ))
 		return false;
 
-	if (OP_IS_STORE(type) && !(info->type & HW_BRK_TYPE_WRITE))
+	/*
+	 * The Cache Management instructions other than dcbz never
+	 * cause a match. i.e. if type is CACHEOP, the instruction
+	 * is dcbz, and dcbz is treated as Store.
+	 */
+	if ((OP_IS_STORE(type) || type == CACHEOP) && !(info->type & HW_BRK_TYPE_WRITE))
 		return false;
 
 	if (is_kernel_addr(regs->nip) && !(info->type & HW_BRK_TYPE_KERNEL))
@@ -553,7 +560,8 @@ static bool check_dawrx_constraints(struct pt_regs *regs, int type,
  * including extraneous exception. Otherwise return false.
  */
 static bool check_constraints(struct pt_regs *regs, struct ppc_inst instr,
-			      int type, int size, struct arch_hw_breakpoint *info)
+			      unsigned long ea, int type, int size,
+			      struct arch_hw_breakpoint *info)
 {
 	bool in_user_range = dar_in_user_range(regs->dar, info);
 	bool dawrx_constraints;
@@ -569,11 +577,10 @@ static bool check_constraints(struct pt_regs *regs, struct ppc_inst instr,
 	}
 
 	if (unlikely(ppc_inst_equal(instr, ppc_inst(0)))) {
-		if (in_user_range)
-			return true;
-
-		if (dar_in_hw_range(regs->dar, info)) {
-			info->type |= HW_BRK_TYPE_EXTRANEOUS_IRQ;
+		if (cpu_has_feature(CPU_FTR_ARCH_31)) {
+			if (dar_in_hw_range(regs->dar, info))
+				return true;
+		} else {
 			return true;
 		}
 		return false;
@@ -581,10 +588,20 @@ static bool check_constraints(struct pt_regs *regs, struct ppc_inst instr,
 
 	dawrx_constraints = check_dawrx_constraints(regs, type, info);
 
-	if (dar_user_range_overlaps(regs->dar, size, info))
+	if (type == UNKNOWN) {
+		if (cpu_has_feature(CPU_FTR_ARCH_31)) {
+			if (dar_in_hw_range(regs->dar, info))
+				return dawrx_constraints;
+		} else {
+			return dawrx_constraints;
+		}
+		return false;
+	}
+
+	if (ea_user_range_overlaps(ea, size, info))
 		return dawrx_constraints;
 
-	if (dar_hw_range_overlaps(regs->dar, size, info)) {
+	if (ea_hw_range_overlaps(ea, size, info)) {
 		if (dawrx_constraints) {
 			info->type |= HW_BRK_TYPE_EXTRANEOUS_IRQ;
 			return true;
@@ -593,8 +610,17 @@ static bool check_constraints(struct pt_regs *regs, struct ppc_inst instr,
 	return false;
 }
 
+static int cache_op_size(void)
+{
+#ifdef __powerpc64__
+	return ppc64_caches.l1d.block_size;
+#else
+	return L1_CACHE_BYTES;
+#endif
+}
+
 static void get_instr_detail(struct pt_regs *regs, struct ppc_inst *instr,
-			     int *type, int *size, bool *larx_stcx)
+			     int *type, int *size, unsigned long *ea)
 {
 	struct instruction_op op;
 
@@ -602,16 +628,23 @@ static void get_instr_detail(struct pt_regs *regs, struct ppc_inst *instr,
 		return;
 
 	analyse_instr(&op, regs, *instr);
-
-	/*
-	 * Set size = 8 if analyse_instr() fails. If it's a userspace
-	 * watchpoint(valid or extraneous), we can notify user about it.
-	 * If it's a kernel watchpoint, instruction  emulation will fail
-	 * in stepping_handler() and watchpoint will be disabled.
-	 */
 	*type = GETTYPE(op.type);
-	*size = !(*type == UNKNOWN) ? GETSIZE(op.type) : 8;
-	*larx_stcx = (*type == LARX || *type == STCX);
+	*ea = op.ea;
+#ifdef __powerpc64__
+	if (!(regs->msr & MSR_64BIT))
+		*ea &= 0xffffffffUL;
+#endif
+
+	*size = GETSIZE(op.type);
+	if (*type == CACHEOP) {
+		*size = cache_op_size();
+		*ea &= ~(*size - 1);
+	}
+}
+
+static bool is_larx_stcx_instr(int type)
+{
+	return type == LARX || type == STCX;
 }
 
 /*
@@ -678,7 +711,7 @@ int hw_breakpoint_handler(struct die_args *args)
 	struct ppc_inst instr = ppc_inst(0);
 	int type = 0;
 	int size = 0;
-	bool larx_stcx = false;
+	unsigned long ea;
 
 	/* Disable breakpoints during exception handling */
 	hw_breakpoint_disable();
@@ -692,7 +725,7 @@ int hw_breakpoint_handler(struct die_args *args)
 	rcu_read_lock();
 
 	if (!IS_ENABLED(CONFIG_PPC_8xx))
-		get_instr_detail(regs, &instr, &type, &size, &larx_stcx);
+		get_instr_detail(regs, &instr, &type, &size, &ea);
 
 	for (i = 0; i < nr_wp_slots(); i++) {
 		bp[i] = __this_cpu_read(bp_per_reg[i]);
@@ -702,7 +735,7 @@ int hw_breakpoint_handler(struct die_args *args)
 		info[i] = counter_arch_bp(bp[i]);
 		info[i]->type &= ~HW_BRK_TYPE_EXTRANEOUS_IRQ;
 
-		if (check_constraints(regs, instr, type, size, info[i])) {
+		if (check_constraints(regs, instr, ea, type, size, info[i])) {
 			if (!IS_ENABLED(CONFIG_PPC_8xx) &&
 			    ppc_inst_equal(instr, ppc_inst(0))) {
 				handler_error(bp[i], info[i]);
@@ -744,7 +777,7 @@ int hw_breakpoint_handler(struct die_args *args)
 	}
 
 	if (!IS_ENABLED(CONFIG_PPC_8xx)) {
-		if (larx_stcx) {
+		if (is_larx_stcx_instr(type)) {
 			for (i = 0; i < nr_wp_slots(); i++) {
 				if (!hit[i])
 					continue;
-- 
2.26.2



* [PATCH v3 3/9] powerpc/watchpoint: Enable watchpoint functionality on power10 guest
  2020-07-08  4:50 [PATCH v3 0/9] powerpc/watchpoint: Enable 2nd DAWR on baremetal and powervm Ravi Bangoria
  2020-07-08  4:50 ` [PATCH v3 1/9] powerpc/watchpoint: Fix 512 byte boundary limit Ravi Bangoria
  2020-07-08  4:50 ` [PATCH v3 2/9] powerpc/watchpoint: Fix DAWR exception constraint Ravi Bangoria
@ 2020-07-08  4:50 ` Ravi Bangoria
  2020-07-08  4:50 ` [PATCH v3 4/9] powerpc/dt_cpu_ftrs: Add feature for 2nd DAWR Ravi Bangoria
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Ravi Bangoria @ 2020-07-08  4:50 UTC (permalink / raw)
  To: mpe, mikey
  Cc: ravi.bangoria, apopple, paulus, npiggin, christophe.leroy,
	naveen.n.rao, peterz, jolsa, oleg, fweisbec, mingo, pedromfc,
	miltonm, linuxppc-dev, linux-kernel

CPU_FTR_DAWR is by default enabled for the host via CPU_FTRS_DT_CPU_BASE
(controlled by CONFIG_PPC_DT_CPU_FTRS). But the cpu-features device-tree
node is not PAPR compatible and thus not yet used by kvm or pHyp
guests. Enable watchpoint functionality on power10 guests (both kvm
and powervm) by adding CPU_FTR_DAWR to CPU_FTRS_POWER10. Note that
this change does not enable 2nd DAWR support.

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
---
 arch/powerpc/include/asm/cputable.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/cputable.h b/arch/powerpc/include/asm/cputable.h
index bac2252c839e..e506d429b1af 100644
--- a/arch/powerpc/include/asm/cputable.h
+++ b/arch/powerpc/include/asm/cputable.h
@@ -478,7 +478,7 @@ static inline void cpu_feature_keys_init(void) { }
 	    CPU_FTR_CFAR | CPU_FTR_HVMODE | CPU_FTR_VMX_COPY | \
 	    CPU_FTR_DBELL | CPU_FTR_HAS_PPR | CPU_FTR_ARCH_207S | \
 	    CPU_FTR_TM_COMP | CPU_FTR_ARCH_300 | CPU_FTR_PKEY | \
-	    CPU_FTR_ARCH_31)
+	    CPU_FTR_ARCH_31 | CPU_FTR_DAWR)
 #define CPU_FTRS_CELL	(CPU_FTR_LWSYNC | \
 	    CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \
 	    CPU_FTR_ALTIVEC_COMP | CPU_FTR_MMCRA | CPU_FTR_SMT | \
-- 
2.26.2



* [PATCH v3 4/9] powerpc/dt_cpu_ftrs: Add feature for 2nd DAWR
  2020-07-08  4:50 [PATCH v3 0/9] powerpc/watchpoint: Enable 2nd DAWR on baremetal and powervm Ravi Bangoria
                   ` (2 preceding siblings ...)
  2020-07-08  4:50 ` [PATCH v3 3/9] powerpc/watchpoint: Enable watchpoint functionality on power10 guest Ravi Bangoria
@ 2020-07-08  4:50 ` Ravi Bangoria
  2020-07-08  4:50 ` [PATCH v3 5/9] powerpc/watchpoint: Set CPU_FTR_DAWR1 based on pa-features bit Ravi Bangoria
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Ravi Bangoria @ 2020-07-08  4:50 UTC (permalink / raw)
  To: mpe, mikey
  Cc: ravi.bangoria, apopple, paulus, npiggin, christophe.leroy,
	naveen.n.rao, peterz, jolsa, oleg, fweisbec, mingo, pedromfc,
	miltonm, linuxppc-dev, linux-kernel

Add a new device-tree feature for the 2nd DAWR. If this feature is
present, the 2nd DAWR is supported, otherwise it is not.

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
---
 arch/powerpc/include/asm/cputable.h | 7 +++++--
 arch/powerpc/kernel/dt_cpu_ftrs.c   | 7 +++++++
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/cputable.h b/arch/powerpc/include/asm/cputable.h
index e506d429b1af..3445c86e1f6f 100644
--- a/arch/powerpc/include/asm/cputable.h
+++ b/arch/powerpc/include/asm/cputable.h
@@ -214,6 +214,7 @@ static inline void cpu_feature_keys_init(void) { }
 #define CPU_FTR_P9_TLBIE_ERAT_BUG	LONG_ASM_CONST(0x0001000000000000)
 #define CPU_FTR_P9_RADIX_PREFETCH_BUG	LONG_ASM_CONST(0x0002000000000000)
 #define CPU_FTR_ARCH_31			LONG_ASM_CONST(0x0004000000000000)
+#define CPU_FTR_DAWR1			LONG_ASM_CONST(0x0008000000000000)
 
 #ifndef __ASSEMBLY__
 
@@ -497,14 +498,16 @@ static inline void cpu_feature_keys_init(void) { }
 #define CPU_FTRS_POSSIBLE	\
 	    (CPU_FTRS_POWER7 | CPU_FTRS_POWER8E | CPU_FTRS_POWER8 | \
 	     CPU_FTR_ALTIVEC_COMP | CPU_FTR_VSX_COMP | CPU_FTRS_POWER9 | \
-	     CPU_FTRS_POWER9_DD2_1 | CPU_FTRS_POWER9_DD2_2 | CPU_FTRS_POWER10)
+	     CPU_FTRS_POWER9_DD2_1 | CPU_FTRS_POWER9_DD2_2 | CPU_FTRS_POWER10 | \
+	     CPU_FTR_DAWR1)
 #else
 #define CPU_FTRS_POSSIBLE	\
 	    (CPU_FTRS_PPC970 | CPU_FTRS_POWER5 | \
 	     CPU_FTRS_POWER6 | CPU_FTRS_POWER7 | CPU_FTRS_POWER8E | \
 	     CPU_FTRS_POWER8 | CPU_FTRS_CELL | CPU_FTRS_PA6T | \
 	     CPU_FTR_VSX_COMP | CPU_FTR_ALTIVEC_COMP | CPU_FTRS_POWER9 | \
-	     CPU_FTRS_POWER9_DD2_1 | CPU_FTRS_POWER9_DD2_2 | CPU_FTRS_POWER10)
+	     CPU_FTRS_POWER9_DD2_1 | CPU_FTRS_POWER9_DD2_2 | CPU_FTRS_POWER10 | \
+	     CPU_FTR_DAWR1)
 #endif /* CONFIG_CPU_LITTLE_ENDIAN */
 #endif
 #else
diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
index a0edeb391e3e..be694567cebd 100644
--- a/arch/powerpc/kernel/dt_cpu_ftrs.c
+++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
@@ -573,6 +573,12 @@ static int __init feat_enable_mma(struct dt_cpu_feature *f)
 	return 1;
 }
 
+static int __init feat_enable_debug_facilities_v31(struct dt_cpu_feature *f)
+{
+	cur_cpu_spec->cpu_features |= CPU_FTR_DAWR1;
+	return 1;
+}
+
 struct dt_cpu_feature_match {
 	const char *name;
 	int (*enable)(struct dt_cpu_feature *f);
@@ -648,6 +654,7 @@ static struct dt_cpu_feature_match __initdata
 	{"wait-v3", feat_enable, 0},
 	{"prefix-instructions", feat_enable, 0},
 	{"matrix-multiply-assist", feat_enable_mma, 0},
+	{"debug-facilities-v31", feat_enable_debug_facilities_v31, 0},
 };
 
 static bool __initdata using_dt_cpu_ftrs;
-- 
2.26.2



* [PATCH v3 5/9] powerpc/watchpoint: Set CPU_FTR_DAWR1 based on pa-features bit
  2020-07-08  4:50 [PATCH v3 0/9] powerpc/watchpoint: Enable 2nd DAWR on baremetal and powervm Ravi Bangoria
                   ` (3 preceding siblings ...)
  2020-07-08  4:50 ` [PATCH v3 4/9] powerpc/dt_cpu_ftrs: Add feature for 2nd DAWR Ravi Bangoria
@ 2020-07-08  4:50 ` Ravi Bangoria
  2020-07-08  4:50 ` [PATCH v3 6/9] powerpc/watchpoint: Rename current H_SET_MODE DAWR macro Ravi Bangoria
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Ravi Bangoria @ 2020-07-08  4:50 UTC (permalink / raw)
  To: mpe, mikey
  Cc: ravi.bangoria, apopple, paulus, npiggin, christophe.leroy,
	naveen.n.rao, peterz, jolsa, oleg, fweisbec, mingo, pedromfc,
	miltonm, linuxppc-dev, linux-kernel

As per the PAPR, bit 0 of byte 64 in the pa-features property indicates
the availability of the 2nd DAWR registers, i.e. if this bit is set, the
2nd DAWR is present, otherwise it is not. The host generally uses
"cpu-features", which masks "pa-features". But "cpu-features" is still
not used for guests and thus this change is mostly applicable to guests
only.
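
For a reader unfamiliar with pa-features, a minimal sketch of how such a
"byte 64, bit 0" test can be expressed; this is a hypothetical helper for
illustration only (not the actual scan_features() code in prom.c), assuming
PAPR's MSB-0 bit numbering where bit 0 is the most significant bit:

	/* Hypothetical helper, illustration only. */
	static bool pa_feature_set(const unsigned char *pa_ftrs, unsigned int len,
				   unsigned int pabyte, unsigned int pabit)
	{
		if (pabyte >= len)
			return false;
		return (pa_ftrs[pabyte] >> (7 - pabit)) & 1;
	}

With pabyte = 64 and pabit = 0, this tests exactly the bit described by
the new table entry below.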

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
---
 arch/powerpc/kernel/prom.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index 9cc49f265c86..c76c09b97bc8 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -175,6 +175,8 @@ static struct ibm_pa_feature {
 	 */
 	{ .pabyte = 22, .pabit = 0, .cpu_features = CPU_FTR_TM_COMP,
 	  .cpu_user_ftrs2 = PPC_FEATURE2_HTM_COMP | PPC_FEATURE2_HTM_NOSC_COMP },
+
+	{ .pabyte = 64, .pabit = 0, .cpu_features = CPU_FTR_DAWR1 },
 };
 
 static void __init scan_features(unsigned long node, const unsigned char *ftrs,
-- 
2.26.2



* [PATCH v3 6/9] powerpc/watchpoint: Rename current H_SET_MODE DAWR macro
  2020-07-08  4:50 [PATCH v3 0/9] powerpc/watchpoint: Enable 2nd DAWR on baremetal and powervm Ravi Bangoria
                   ` (4 preceding siblings ...)
  2020-07-08  4:50 ` [PATCH v3 5/9] powerpc/watchpoint: Set CPU_FTR_DAWR1 based on pa-features bit Ravi Bangoria
@ 2020-07-08  4:50 ` Ravi Bangoria
  2020-07-08  4:50 ` [PATCH v3 7/9] powerpc/watchpoint: Guest support for 2nd DAWR hcall Ravi Bangoria
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Ravi Bangoria @ 2020-07-08  4:50 UTC (permalink / raw)
  To: mpe, mikey
  Cc: ravi.bangoria, apopple, paulus, npiggin, christophe.leroy,
	naveen.n.rao, peterz, jolsa, oleg, fweisbec, mingo, pedromfc,
	miltonm, linuxppc-dev, linux-kernel

The current H_SET_MODE hcall macro name for setting/resetting DAWR0 is
H_SET_MODE_RESOURCE_SET_DAWR. Add the suffix 0 to the macro name as well.

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
---
 arch/powerpc/include/asm/hvcall.h         | 2 +-
 arch/powerpc/include/asm/plpar_wrappers.h | 2 +-
 arch/powerpc/kvm/book3s_hv.c              | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index e90c073e437e..a7f6f1aeda6b 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -354,7 +354,7 @@
 
 /* Values for 2nd argument to H_SET_MODE */
 #define H_SET_MODE_RESOURCE_SET_CIABR		1
-#define H_SET_MODE_RESOURCE_SET_DAWR		2
+#define H_SET_MODE_RESOURCE_SET_DAWR0		2
 #define H_SET_MODE_RESOURCE_ADDR_TRANS_MODE	3
 #define H_SET_MODE_RESOURCE_LE			4
 
diff --git a/arch/powerpc/include/asm/plpar_wrappers.h b/arch/powerpc/include/asm/plpar_wrappers.h
index 4497c8afb573..93eb133d572c 100644
--- a/arch/powerpc/include/asm/plpar_wrappers.h
+++ b/arch/powerpc/include/asm/plpar_wrappers.h
@@ -312,7 +312,7 @@ static inline long plpar_set_ciabr(unsigned long ciabr)
 
 static inline long plpar_set_watchpoint0(unsigned long dawr0, unsigned long dawrx0)
 {
-	return plpar_set_mode(0, H_SET_MODE_RESOURCE_SET_DAWR, dawr0, dawrx0);
+	return plpar_set_mode(0, H_SET_MODE_RESOURCE_SET_DAWR0, dawr0, dawrx0);
 }
 
 static inline long plpar_signal_sys_reset(long cpu)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 6bf66649ab92..7ad692c2d7c7 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -764,7 +764,7 @@ static int kvmppc_h_set_mode(struct kvm_vcpu *vcpu, unsigned long mflags,
 			return H_P3;
 		vcpu->arch.ciabr  = value1;
 		return H_SUCCESS;
-	case H_SET_MODE_RESOURCE_SET_DAWR:
+	case H_SET_MODE_RESOURCE_SET_DAWR0:
 		if (!kvmppc_power8_compatible(vcpu))
 			return H_P2;
 		if (!ppc_breakpoint_available())
-- 
2.26.2



* [PATCH v3 7/9] powerpc/watchpoint: Guest support for 2nd DAWR hcall
  2020-07-08  4:50 [PATCH v3 0/9] powerpc/watchpoint: Enable 2nd DAWR on baremetal and powervm Ravi Bangoria
                   ` (5 preceding siblings ...)
  2020-07-08  4:50 ` [PATCH v3 6/9] powerpc/watchpoint: Rename current H_SET_MODE DAWR macro Ravi Bangoria
@ 2020-07-08  4:50 ` Ravi Bangoria
  2020-07-08  4:50 ` [PATCH v3 8/9] powerpc/watchpoint: Return available watchpoints dynamically Ravi Bangoria
  2020-07-08  4:50 ` [PATCH v3 9/9] powerpc/watchpoint: Remove 512 byte boundary Ravi Bangoria
  8 siblings, 0 replies; 15+ messages in thread
From: Ravi Bangoria @ 2020-07-08  4:50 UTC (permalink / raw)
  To: mpe, mikey
  Cc: ravi.bangoria, apopple, paulus, npiggin, christophe.leroy,
	naveen.n.rao, peterz, jolsa, oleg, fweisbec, mingo, pedromfc,
	miltonm, linuxppc-dev, linux-kernel

The 2nd DAWR can be set/unset using the H_SET_MODE hcall with resource
value 5. Enable powervm guest support with that. This has no effect on
kvm guests because kvm will return an error if a guest does the hcall
with resource value 5.
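
Condensed from the hunks below, the pseries guest path for the second
register boils down to the following (a sketch for illustration only;
the _sketch name is not in the patch):

	static int pseries_set_dawr_sketch(int nr, unsigned long dawr, unsigned long dawrx)
	{
		dawrx &= ~DAWRX_HYP;	/* PAPR says we can't set HYP */

		if (nr == 0)
			return plpar_set_mode(0, H_SET_MODE_RESOURCE_SET_DAWR0, dawr, dawrx);
		return plpar_set_mode(0, H_SET_MODE_RESOURCE_SET_DAWR1, dawr, dawrx);
	}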

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
---
 arch/powerpc/include/asm/hvcall.h         | 1 +
 arch/powerpc/include/asm/machdep.h        | 2 +-
 arch/powerpc/include/asm/plpar_wrappers.h | 5 +++++
 arch/powerpc/kernel/dawr.c                | 2 +-
 arch/powerpc/platforms/pseries/setup.c    | 7 +++++--
 5 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index a7f6f1aeda6b..3f170b9496a1 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -357,6 +357,7 @@
 #define H_SET_MODE_RESOURCE_SET_DAWR0		2
 #define H_SET_MODE_RESOURCE_ADDR_TRANS_MODE	3
 #define H_SET_MODE_RESOURCE_LE			4
+#define H_SET_MODE_RESOURCE_SET_DAWR1		5
 
 /* Values for argument to H_SIGNAL_SYS_RESET */
 #define H_SIGNAL_SYS_RESET_ALL			-1
diff --git a/arch/powerpc/include/asm/machdep.h b/arch/powerpc/include/asm/machdep.h
index 7bcb64444a39..a90b892f0bfe 100644
--- a/arch/powerpc/include/asm/machdep.h
+++ b/arch/powerpc/include/asm/machdep.h
@@ -131,7 +131,7 @@ struct machdep_calls {
 				    unsigned long dabrx);
 
 	/* Set DAWR for this platform, leave empty for default implementation */
-	int		(*set_dawr)(unsigned long dawr,
+	int		(*set_dawr)(int nr, unsigned long dawr,
 				    unsigned long dawrx);
 
 #ifdef CONFIG_PPC32	/* XXX for now */
diff --git a/arch/powerpc/include/asm/plpar_wrappers.h b/arch/powerpc/include/asm/plpar_wrappers.h
index 93eb133d572c..d7a1acc83593 100644
--- a/arch/powerpc/include/asm/plpar_wrappers.h
+++ b/arch/powerpc/include/asm/plpar_wrappers.h
@@ -315,6 +315,11 @@ static inline long plpar_set_watchpoint0(unsigned long dawr0, unsigned long dawr
 	return plpar_set_mode(0, H_SET_MODE_RESOURCE_SET_DAWR0, dawr0, dawrx0);
 }
 
+static inline long plpar_set_watchpoint1(unsigned long dawr1, unsigned long dawrx1)
+{
+	return plpar_set_mode(0, H_SET_MODE_RESOURCE_SET_DAWR1, dawr1, dawrx1);
+}
+
 static inline long plpar_signal_sys_reset(long cpu)
 {
 	return plpar_hcall_norets(H_SIGNAL_SYS_RESET, cpu);
diff --git a/arch/powerpc/kernel/dawr.c b/arch/powerpc/kernel/dawr.c
index 500f52fa4711..cdc2dccb987d 100644
--- a/arch/powerpc/kernel/dawr.c
+++ b/arch/powerpc/kernel/dawr.c
@@ -37,7 +37,7 @@ int set_dawr(int nr, struct arch_hw_breakpoint *brk)
 	dawrx |= (mrd & 0x3f) << (63 - 53);
 
 	if (ppc_md.set_dawr)
-		return ppc_md.set_dawr(dawr, dawrx);
+		return ppc_md.set_dawr(nr, dawr, dawrx);
 
 	if (nr == 0) {
 		mtspr(SPRN_DAWR0, dawr);
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index 2db8469e475f..d516ee8eb7fc 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -831,12 +831,15 @@ static int pseries_set_xdabr(unsigned long dabr, unsigned long dabrx)
 	return plpar_hcall_norets(H_SET_XDABR, dabr, dabrx);
 }
 
-static int pseries_set_dawr(unsigned long dawr, unsigned long dawrx)
+static int pseries_set_dawr(int nr, unsigned long dawr, unsigned long dawrx)
 {
 	/* PAPR says we can't set HYP */
 	dawrx &= ~DAWRX_HYP;
 
-	return  plpar_set_watchpoint0(dawr, dawrx);
+	if (nr == 0)
+		return plpar_set_watchpoint0(dawr, dawrx);
+	else
+		return plpar_set_watchpoint1(dawr, dawrx);
 }
 
 #define CMO_CHARACTERISTICS_TOKEN 44
-- 
2.26.2



* [PATCH v3 8/9] powerpc/watchpoint: Return available watchpoints dynamically
  2020-07-08  4:50 [PATCH v3 0/9] powerpc/watchpoint: Enable 2nd DAWR on baremetal and powervm Ravi Bangoria
                   ` (6 preceding siblings ...)
  2020-07-08  4:50 ` [PATCH v3 7/9] powerpc/watchpoint: Guest support for 2nd DAWR hcall Ravi Bangoria
@ 2020-07-08  4:50 ` Ravi Bangoria
  2020-07-08  4:50 ` [PATCH v3 9/9] powerpc/watchpoint: Remove 512 byte boundary Ravi Bangoria
  8 siblings, 0 replies; 15+ messages in thread
From: Ravi Bangoria @ 2020-07-08  4:50 UTC (permalink / raw)
  To: mpe, mikey
  Cc: ravi.bangoria, apopple, paulus, npiggin, christophe.leroy,
	naveen.n.rao, peterz, jolsa, oleg, fweisbec, mingo, pedromfc,
	miltonm, linuxppc-dev, linux-kernel

So far, Book3S powerpc has supported only one watchpoint. Power10
introduces a 2nd DAWR. Enable 2nd DAWR support for Power10. Availability
of the 2nd DAWR will depend on CPU_FTR_DAWR1.

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
---
 arch/powerpc/include/asm/cputable.h      | 4 +++-
 arch/powerpc/include/asm/hw_breakpoint.h | 5 +++--
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/cputable.h b/arch/powerpc/include/asm/cputable.h
index 3445c86e1f6f..36a0851a7a9b 100644
--- a/arch/powerpc/include/asm/cputable.h
+++ b/arch/powerpc/include/asm/cputable.h
@@ -633,7 +633,9 @@ enum {
  * Maximum number of hw breakpoint supported on powerpc. Number of
  * breakpoints supported by actual hw might be less than this.
  */
-#define HBP_NUM_MAX	1
+#define HBP_NUM_MAX	2
+#define HBP_NUM_ONE	1
+#define HBP_NUM_TWO	2
 
 #endif /* !__ASSEMBLY__ */
 
diff --git a/arch/powerpc/include/asm/hw_breakpoint.h b/arch/powerpc/include/asm/hw_breakpoint.h
index cb424799da0d..d4eab1694bcd 100644
--- a/arch/powerpc/include/asm/hw_breakpoint.h
+++ b/arch/powerpc/include/asm/hw_breakpoint.h
@@ -5,10 +5,11 @@
  * Copyright 2010, IBM Corporation.
  * Author: K.Prasad <prasad@linux.vnet.ibm.com>
  */
-
 #ifndef _PPC_BOOK3S_64_HW_BREAKPOINT_H
 #define _PPC_BOOK3S_64_HW_BREAKPOINT_H
 
+#include <asm/cpu_has_feature.h>
+
 #ifdef	__KERNEL__
 struct arch_hw_breakpoint {
 	unsigned long	address;
@@ -46,7 +47,7 @@ struct arch_hw_breakpoint {
 
 static inline int nr_wp_slots(void)
 {
-	return HBP_NUM_MAX;
+	return cpu_has_feature(CPU_FTR_DAWR1) ? HBP_NUM_TWO : HBP_NUM_ONE;
 }
 
 #ifdef CONFIG_HAVE_HW_BREAKPOINT
-- 
2.26.2



* [PATCH v3 9/9] powerpc/watchpoint: Remove 512 byte boundary
  2020-07-08  4:50 [PATCH v3 0/9] powerpc/watchpoint: Enable 2nd DAWR on baremetal and powervm Ravi Bangoria
                   ` (7 preceding siblings ...)
  2020-07-08  4:50 ` [PATCH v3 8/9] powerpc/watchpoint: Return available watchpoints dynamically Ravi Bangoria
@ 2020-07-08  4:50 ` Ravi Bangoria
  8 siblings, 0 replies; 15+ messages in thread
From: Ravi Bangoria @ 2020-07-08  4:50 UTC (permalink / raw)
  To: mpe, mikey
  Cc: ravi.bangoria, apopple, paulus, npiggin, christophe.leroy,
	naveen.n.rao, peterz, jolsa, oleg, fweisbec, mingo, pedromfc,
	miltonm, linuxppc-dev, linux-kernel

Power10 has removed the 512 byte boundary from the match criteria, i.e.
the watch range can now cross a 512 byte boundary.

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
---
 arch/powerpc/kernel/hw_breakpoint.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/hw_breakpoint.c b/arch/powerpc/kernel/hw_breakpoint.c
index 7a66c370a105..270fbb4d01ce 100644
--- a/arch/powerpc/kernel/hw_breakpoint.c
+++ b/arch/powerpc/kernel/hw_breakpoint.c
@@ -418,8 +418,9 @@ static int hw_breakpoint_validate_len(struct arch_hw_breakpoint *hw)
 
 	if (dawr_enabled()) {
 		max_len = DAWR_MAX_LEN;
-		/* DAWR region can't cross 512 bytes boundary */
-		if (ALIGN_DOWN(start_addr, SZ_512) != ALIGN_DOWN(end_addr - 1, SZ_512))
+		/* DAWR region can't cross 512 bytes boundary on p10 predecessors */
+		if (!cpu_has_feature(CPU_FTR_ARCH_31) &&
+		    (ALIGN_DOWN(start_addr, SZ_512) != ALIGN_DOWN(end_addr - 1, SZ_512)))
 			return -EINVAL;
 	} else if (IS_ENABLED(CONFIG_PPC_8xx)) {
 		/* 8xx can setup a range without limitation */
-- 
2.26.2



* Re: [PATCH v3 1/9] powerpc/watchpoint: Fix 512 byte boundary limit
  2020-07-08  4:50 ` [PATCH v3 1/9] powerpc/watchpoint: Fix 512 byte boundary limit Ravi Bangoria
@ 2020-07-08  7:44   ` Jordan Niethe
  2020-07-08  7:52     ` Christophe Leroy
  2020-07-15  1:01   ` Jordan Niethe
  1 sibling, 1 reply; 15+ messages in thread
From: Jordan Niethe @ 2020-07-08  7:44 UTC (permalink / raw)
  To: Ravi Bangoria
  Cc: Michael Ellerman, mikey, apopple, Paul Mackerras,
	Nicholas Piggin, Christophe Leroy, naveen.n.rao, peterz, jolsa,
	oleg, fweisbec, mingo, pedromfc, miltonm, linuxppc-dev,
	linux-kernel

On Wed, Jul 8, 2020 at 2:53 PM Ravi Bangoria
<ravi.bangoria@linux.ibm.com> wrote:
>
> Milton Miller reported that we are aligning the start and end address
> to the wrong size, SZ_512M. It should be SZ_512. Fix that.
>
> While doing this change I also found a case where the ALIGN()
> comparison fails. Within a given aligned range, ALIGN() of two
> addresses does not match when the start address points to the first
> byte and the end address points to any other byte except the first
> one. That is not true for ALIGN_DOWN(): ALIGN_DOWN() of any two
> addresses within that range will always point to the first byte. So
> use ALIGN_DOWN() instead of ALIGN().
>
> Fixes: e68ef121c1f4 ("powerpc/watchpoint: Use builtin ALIGN*() macros")
> Reported-by: Milton Miller <miltonm@us.ibm.com>
> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
> ---
>  arch/powerpc/kernel/hw_breakpoint.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/kernel/hw_breakpoint.c b/arch/powerpc/kernel/hw_breakpoint.c
> index 0000daf0e1da..031e6defc08e 100644
> --- a/arch/powerpc/kernel/hw_breakpoint.c
> +++ b/arch/powerpc/kernel/hw_breakpoint.c
> @@ -419,7 +419,7 @@ static int hw_breakpoint_validate_len(struct arch_hw_breakpoint *hw)
>         if (dawr_enabled()) {
>                 max_len = DAWR_MAX_LEN;
>                 /* DAWR region can't cross 512 bytes boundary */
> -               if (ALIGN(start_addr, SZ_512M) != ALIGN(end_addr - 1, SZ_512M))
> +               if (ALIGN_DOWN(start_addr, SZ_512) != ALIGN_DOWN(end_addr - 1, SZ_512))
I wonder if you should not use end_addr - 1, but rather end_addr. For
example: 512 -> 1023; because of the -1, 1024 will now be included in
this range, meaning 513 bytes?

>                         return -EINVAL;
>         } else if (IS_ENABLED(CONFIG_PPC_8xx)) {
>                 /* 8xx can setup a range without limitation */
> --
> 2.26.2
>


* Re: [PATCH v3 1/9] powerpc/watchpoint: Fix 512 byte boundary limit
  2020-07-08  7:44   ` Jordan Niethe
@ 2020-07-08  7:52     ` Christophe Leroy
  0 siblings, 0 replies; 15+ messages in thread
From: Christophe Leroy @ 2020-07-08  7:52 UTC (permalink / raw)
  To: Jordan Niethe, Ravi Bangoria
  Cc: Michael Ellerman, mikey, apopple, Paul Mackerras,
	Nicholas Piggin, Christophe Leroy, naveen.n.rao, peterz, jolsa,
	oleg, fweisbec, mingo, pedromfc, miltonm, linuxppc-dev,
	linux-kernel



Le 08/07/2020 à 09:44, Jordan Niethe a écrit :
> On Wed, Jul 8, 2020 at 2:53 PM Ravi Bangoria
> <ravi.bangoria@linux.ibm.com> wrote:
>>
>> Milton Miller reported that we are aligning the start and end address
>> to the wrong size, SZ_512M. It should be SZ_512. Fix that.
>>
>> While doing this change I also found a case where the ALIGN()
>> comparison fails. Within a given aligned range, ALIGN() of two
>> addresses does not match when the start address points to the first
>> byte and the end address points to any other byte except the first
>> one. That is not true for ALIGN_DOWN(): ALIGN_DOWN() of any two
>> addresses within that range will always point to the first byte. So
>> use ALIGN_DOWN() instead of ALIGN().
>>
>> Fixes: e68ef121c1f4 ("powerpc/watchpoint: Use builtin ALIGN*() macros")
>> Reported-by: Milton Miller <miltonm@us.ibm.com>
>> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
>> ---
>>   arch/powerpc/kernel/hw_breakpoint.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/arch/powerpc/kernel/hw_breakpoint.c b/arch/powerpc/kernel/hw_breakpoint.c
>> index 0000daf0e1da..031e6defc08e 100644
>> --- a/arch/powerpc/kernel/hw_breakpoint.c
>> +++ b/arch/powerpc/kernel/hw_breakpoint.c
>> @@ -419,7 +419,7 @@ static int hw_breakpoint_validate_len(struct arch_hw_breakpoint *hw)
>>          if (dawr_enabled()) {
>>                  max_len = DAWR_MAX_LEN;
>>                  /* DAWR region can't cross 512 bytes boundary */
>> -               if (ALIGN(start_addr, SZ_512M) != ALIGN(end_addr - 1, SZ_512M))
>> +               if (ALIGN_DOWN(start_addr, SZ_512) != ALIGN_DOWN(end_addr - 1, SZ_512))
> I wonder if you should not use end_addr - 1, but rather end_addr. For
> example: 512 -> 1023; because of the -1, 1024 will now be included in
> this range, meaning 513 bytes?

end_addr is not part of the range.

If you want the range [512; 1023], it means addr = 512 and len = 512,
that is end_addr = addr + len = 1024.
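
Plugging those numbers into the check (illustration only):

	/* addr = 512, len = 512  =>  end_addr = 1024 */
	ALIGN_DOWN(512, SZ_512);	/* start_addr    -> 512 */
	ALIGN_DOWN(1024 - 1, SZ_512);	/* end_addr - 1  -> 512 */
	/*
	 * Equal, so a range that exactly fills [512, 1024) is accepted.
	 * Without the -1 we would compute ALIGN_DOWN(1024, SZ_512) = 1024
	 * and wrongly reject it.
	 */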

Christophe


* Re: [PATCH v3 1/9] powerpc/watchpoint: Fix 512 byte boundary limit
  2020-07-08  4:50 ` [PATCH v3 1/9] powerpc/watchpoint: Fix 512 byte boundary limit Ravi Bangoria
  2020-07-08  7:44   ` Jordan Niethe
@ 2020-07-15  1:01   ` Jordan Niethe
  1 sibling, 0 replies; 15+ messages in thread
From: Jordan Niethe @ 2020-07-15  1:01 UTC (permalink / raw)
  To: Ravi Bangoria
  Cc: Michael Ellerman, mikey, apopple, Paul Mackerras,
	Nicholas Piggin, Christophe Leroy, naveen.n.rao, peterz, jolsa,
	oleg, fweisbec, mingo, pedromfc, miltonm, linuxppc-dev,
	linux-kernel

On Wed, Jul 8, 2020 at 2:53 PM Ravi Bangoria
<ravi.bangoria@linux.ibm.com> wrote:
>
> Milton Miller reported that we are aligning the start and end address
> to the wrong size, SZ_512M. It should be SZ_512. Fix that.
>
> While doing this change I also found a case where the ALIGN()
> comparison fails. Within a given aligned range, ALIGN() of two
> addresses does not match when the start address points to the first
> byte and the end address points to any other byte except the first
> one. That is not true for ALIGN_DOWN(): ALIGN_DOWN() of any two
> addresses within that range will always point to the first byte. So
> use ALIGN_DOWN() instead of ALIGN().
>
> Fixes: e68ef121c1f4 ("powerpc/watchpoint: Use builtin ALIGN*() macros")
> Reported-by: Milton Miller <miltonm@us.ibm.com>
> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
I tested this with the ptrace-hwbreak selftest. I can confirm that
without also changing to ALIGN_DOWN(), these tests will fail.
Tested-by: Jordan Niethe <jniethe5@gmail.com>
> ---
>  arch/powerpc/kernel/hw_breakpoint.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/kernel/hw_breakpoint.c b/arch/powerpc/kernel/hw_breakpoint.c
> index 0000daf0e1da..031e6defc08e 100644
> --- a/arch/powerpc/kernel/hw_breakpoint.c
> +++ b/arch/powerpc/kernel/hw_breakpoint.c
> @@ -419,7 +419,7 @@ static int hw_breakpoint_validate_len(struct arch_hw_breakpoint *hw)
>         if (dawr_enabled()) {
>                 max_len = DAWR_MAX_LEN;
>                 /* DAWR region can't cross 512 bytes boundary */
> -               if (ALIGN(start_addr, SZ_512M) != ALIGN(end_addr - 1, SZ_512M))
> +               if (ALIGN_DOWN(start_addr, SZ_512) != ALIGN_DOWN(end_addr - 1, SZ_512))
>                         return -EINVAL;
>         } else if (IS_ENABLED(CONFIG_PPC_8xx)) {
>                 /* 8xx can setup a range without limitation */
> --
> 2.26.2
>


* Re: [PATCH v3 2/9] powerpc/watchpoint: Fix DAWR exception constraint
  2020-07-08  4:50 ` [PATCH v3 2/9] powerpc/watchpoint: Fix DAWR exception constraint Ravi Bangoria
@ 2020-07-15  2:19   ` Jordan Niethe
  2020-07-15  4:08     ` Ravi Bangoria
  0 siblings, 1 reply; 15+ messages in thread
From: Jordan Niethe @ 2020-07-15  2:19 UTC (permalink / raw)
  To: Ravi Bangoria
  Cc: Michael Ellerman, mikey, apopple, Paul Mackerras,
	Nicholas Piggin, Christophe Leroy, naveen.n.rao, peterz, jolsa,
	oleg, fweisbec, mingo, pedromfc, miltonm, linuxppc-dev,
	linux-kernel

On Wed, Jul 8, 2020 at 2:52 PM Ravi Bangoria
<ravi.bangoria@linux.ibm.com> wrote:
>
> Pedro Miraglia Franco de Carvalho noticed that on p8, the DAR value is
> inconsistent with different types of load/store. For byte, word, etc.
> load/stores, DAR is set to the address of the first byte of overlap
> between the watch range and the real access. But for quadword load/
> stores it is set to the address of the first byte of the real access.
> This issue has been fixed in p10: in p10 (ISA 3.1), DAR is always set
> to the address of the first byte of overlap. Commit 27985b2a640e
> ("powerpc/watchpoint: Don't ignore extraneous exceptions blindly")
> wrongly assumes that DAR is set to the address of the first byte of
> overlap for all load/stores on p8 as well. Fix that. With the fix, we
> now rely on the 'ea' provided by analyse_instr(). If analyse_instr()
> fails, generate the event unconditionally on p8, and on p10 generate
> the event only if DAR is within a DAWR range.
>
> Note: 8xx is not affected.
>
> Fixes: 27985b2a640e ("powerpc/watchpoint: Don't ignore extraneous exceptions blindly")
> Fixes: 74c6881019b7 ("powerpc/watchpoint: Prepare handler to handle more than one watchpoint")
> Reported-by: Pedro Miraglia Franco de Carvalho <pedromfc@br.ibm.com>
> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
> ---
>  arch/powerpc/kernel/hw_breakpoint.c | 93 +++++++++++++++++++----------
>  1 file changed, 63 insertions(+), 30 deletions(-)
>
> diff --git a/arch/powerpc/kernel/hw_breakpoint.c b/arch/powerpc/kernel/hw_breakpoint.c
> index 031e6defc08e..7a66c370a105 100644
> --- a/arch/powerpc/kernel/hw_breakpoint.c
> +++ b/arch/powerpc/kernel/hw_breakpoint.c
> @@ -498,11 +498,11 @@ static bool dar_in_user_range(unsigned long dar, struct arch_hw_breakpoint *info
>         return ((info->address <= dar) && (dar - info->address < info->len));
>  }
>
> -static bool dar_user_range_overlaps(unsigned long dar, int size,
> -                                   struct arch_hw_breakpoint *info)
> +static bool ea_user_range_overlaps(unsigned long ea, int size,
> +                                  struct arch_hw_breakpoint *info)
>  {
> -       return ((dar < info->address + info->len) &&
> -               (dar + size > info->address));
> +       return ((ea < info->address + info->len) &&
> +               (ea + size > info->address));
>  }
>
>  static bool dar_in_hw_range(unsigned long dar, struct arch_hw_breakpoint *info)
> @@ -515,20 +515,22 @@ static bool dar_in_hw_range(unsigned long dar, struct arch_hw_breakpoint *info)
>         return ((hw_start_addr <= dar) && (hw_end_addr > dar));
>  }
>
> -static bool dar_hw_range_overlaps(unsigned long dar, int size,
> -                                 struct arch_hw_breakpoint *info)
> +static bool ea_hw_range_overlaps(unsigned long ea, int size,
> +                                struct arch_hw_breakpoint *info)
>  {
>         unsigned long hw_start_addr, hw_end_addr;
>
>         hw_start_addr = ALIGN_DOWN(info->address, HW_BREAKPOINT_SIZE);
>         hw_end_addr = ALIGN(info->address + info->len, HW_BREAKPOINT_SIZE);
>
> -       return ((dar < hw_end_addr) && (dar + size > hw_start_addr));
> +       return ((ea < hw_end_addr) && (ea + size > hw_start_addr));
>  }
>
>  /*
>   * If hw has multiple DAWR registers, we also need to check all
>   * dawrx constraint bits to confirm this is _really_ a valid event.
> + * If type is UNKNOWN, but privilege level matches, consider it as
> + * a positive match.
>   */
>  static bool check_dawrx_constraints(struct pt_regs *regs, int type,
>                                     struct arch_hw_breakpoint *info)
> @@ -536,7 +538,12 @@ static bool check_dawrx_constraints(struct pt_regs *regs, int type,
>         if (OP_IS_LOAD(type) && !(info->type & HW_BRK_TYPE_READ))
>                 return false;
>
> -       if (OP_IS_STORE(type) && !(info->type & HW_BRK_TYPE_WRITE))
> +       /*
> +        * The Cache Management instructions other than dcbz never
> +        * cause a match. i.e. if type is CACHEOP, the instruction
> +        * is dcbz, and dcbz is treated as Store.
> +        */
> +       if ((OP_IS_STORE(type) || type == CACHEOP) && !(info->type & HW_BRK_TYPE_WRITE))
>                 return false;
This change seems separate from this commit?
>
>         if (is_kernel_addr(regs->nip) && !(info->type & HW_BRK_TYPE_KERNEL))
> @@ -553,7 +560,8 @@ static bool check_dawrx_constraints(struct pt_regs *regs, int type,
>   * including extraneous exception. Otherwise return false.
>   */
>  static bool check_constraints(struct pt_regs *regs, struct ppc_inst instr,
> -                             int type, int size, struct arch_hw_breakpoint *info)
> +                             unsigned long ea, int type, int size,
> +                             struct arch_hw_breakpoint *info)
>  {
>         bool in_user_range = dar_in_user_range(regs->dar, info);
>         bool dawrx_constraints;
> @@ -569,11 +577,10 @@ static bool check_constraints(struct pt_regs *regs, struct ppc_inst instr,
>         }
>
>         if (unlikely(ppc_inst_equal(instr, ppc_inst(0)))) {
> -               if (in_user_range)
> -                       return true;
> -
> -               if (dar_in_hw_range(regs->dar, info)) {
> -                       info->type |= HW_BRK_TYPE_EXTRANEOUS_IRQ;
> +               if (cpu_has_feature(CPU_FTR_ARCH_31)) {
> +                       if (dar_in_hw_range(regs->dar, info))
> +                               return true;
> +               } else {
>                         return true;
I think this would be clearer as:
	if (cpu_has_feature(CPU_FTR_ARCH_31) &&
	    !(dar_in_hw_range(regs->dar, info)))
		return false;
	else
		return true;

>                 }
>                 return false;
> @@ -581,10 +588,20 @@ static bool check_constraints(struct pt_regs *regs, struct ppc_inst instr,
>
>         dawrx_constraints = check_dawrx_constraints(regs, type, info);
>
> -       if (dar_user_range_overlaps(regs->dar, size, info))
> +       if (type == UNKNOWN) {
> +               if (cpu_has_feature(CPU_FTR_ARCH_31)) {
> +                       if (dar_in_hw_range(regs->dar, info))
> +                               return dawrx_constraints;
> +               } else {
> +                       return dawrx_constraints;
> +               }
> +               return false;
> +       }
Similar thing here, it could be:
	if ((cpu_has_feature(CPU_FTR_ARCH_31)) &&
	    !(dar_in_hw_range(regs->dar, info)))
		return false;
	else
		return dawrx_constraints;
> +
> +       if (ea_user_range_overlaps(ea, size, info))
>                 return dawrx_constraints;
>
> -       if (dar_hw_range_overlaps(regs->dar, size, info)) {
> +       if (ea_hw_range_overlaps(ea, size, info)) {
>                 if (dawrx_constraints) {
>                         info->type |= HW_BRK_TYPE_EXTRANEOUS_IRQ;
>                         return true;
> @@ -593,8 +610,17 @@ static bool check_constraints(struct pt_regs *regs, struct ppc_inst instr,
>         return false;
>  }
>
> +static int cache_op_size(void)
> +{
> +#ifdef __powerpc64__
> +       return ppc64_caches.l1d.block_size;
> +#else
> +       return L1_CACHE_BYTES;
> +#endif
> +}
> +
>  static void get_instr_detail(struct pt_regs *regs, struct ppc_inst *instr,
> -                            int *type, int *size, bool *larx_stcx)
> +                            int *type, int *size, unsigned long *ea)
>  {
>         struct instruction_op op;
>
> @@ -602,16 +628,23 @@ static void get_instr_detail(struct pt_regs *regs, struct ppc_inst *instr,
>                 return;
>
>         analyse_instr(&op, regs, *instr);
> -
> -       /*
> -        * Set size = 8 if analyse_instr() fails. If it's a userspace
> -        * watchpoint(valid or extraneous), we can notify user about it.
> -        * If it's a kernel watchpoint, instruction  emulation will fail
> -        * in stepping_handler() and watchpoint will be disabled.
> -        */
>         *type = GETTYPE(op.type);
> -       *size = !(*type == UNKNOWN) ? GETSIZE(op.type) : 8;
> -       *larx_stcx = (*type == LARX || *type == STCX);
> +       *ea = op.ea;
> +#ifdef __powerpc64__
> +       if (!(regs->msr & MSR_64BIT))
> +               *ea &= 0xffffffffUL;
> +#endif
> +
> +       *size = GETSIZE(op.type);
> +       if (*type == CACHEOP) {
> +               *size = cache_op_size();
> +               *ea &= ~(*size - 1);
> +       }
Again related to CACHEOP, should these changes be mentioned in the
commit message?
> +}
> +
> +static bool is_larx_stcx_instr(int type)
> +{
> +       return type == LARX || type == STCX;
>  }
>
>  /*
> @@ -678,7 +711,7 @@ int hw_breakpoint_handler(struct die_args *args)
>         struct ppc_inst instr = ppc_inst(0);
>         int type = 0;
>         int size = 0;
> -       bool larx_stcx = false;
> +       unsigned long ea;
>
>         /* Disable breakpoints during exception handling */
>         hw_breakpoint_disable();
> @@ -692,7 +725,7 @@ int hw_breakpoint_handler(struct die_args *args)
>         rcu_read_lock();
>
>         if (!IS_ENABLED(CONFIG_PPC_8xx))
> -               get_instr_detail(regs, &instr, &type, &size, &larx_stcx);
> +               get_instr_detail(regs, &instr, &type, &size, &ea);
>
>         for (i = 0; i < nr_wp_slots(); i++) {
>                 bp[i] = __this_cpu_read(bp_per_reg[i]);
> @@ -702,7 +735,7 @@ int hw_breakpoint_handler(struct die_args *args)
>                 info[i] = counter_arch_bp(bp[i]);
>                 info[i]->type &= ~HW_BRK_TYPE_EXTRANEOUS_IRQ;
>
> -               if (check_constraints(regs, instr, type, size, info[i])) {
> +               if (check_constraints(regs, instr, ea, type, size, info[i])) {
>                         if (!IS_ENABLED(CONFIG_PPC_8xx) &&
>                             ppc_inst_equal(instr, ppc_inst(0))) {
>                                 handler_error(bp[i], info[i]);
> @@ -744,7 +777,7 @@ int hw_breakpoint_handler(struct die_args *args)
>         }
>
>         if (!IS_ENABLED(CONFIG_PPC_8xx)) {
> -               if (larx_stcx) {
> +               if (is_larx_stcx_instr(type)) {
>                         for (i = 0; i < nr_wp_slots(); i++) {
>                                 if (!hit[i])
>                                         continue;
> --
> 2.26.2
>


* Re: [PATCH v3 2/9] powerpc/watchpoint: Fix DAWR exception constraint
  2020-07-15  2:19   ` Jordan Niethe
@ 2020-07-15  4:08     ` Ravi Bangoria
  0 siblings, 0 replies; 15+ messages in thread
From: Ravi Bangoria @ 2020-07-15  4:08 UTC (permalink / raw)
  To: Jordan Niethe
  Cc: Michael Ellerman, mikey, apopple, Paul Mackerras,
	Nicholas Piggin, Christophe Leroy, naveen.n.rao, peterz, jolsa,
	oleg, fweisbec, mingo, pedromfc, miltonm, linuxppc-dev,
	linux-kernel, Ravi Bangoria

Hi Jordan,

>> @@ -536,7 +538,12 @@ static bool check_dawrx_constraints(struct pt_regs *regs, int type,
>>          if (OP_IS_LOAD(type) && !(info->type & HW_BRK_TYPE_READ))
>>                  return false;
>>
>> -       if (OP_IS_STORE(type) && !(info->type & HW_BRK_TYPE_WRITE))
>> +       /*
>> +        * The Cache Management instructions other than dcbz never
>> +        * cause a match. i.e. if type is CACHEOP, the instruction
>> +        * is dcbz, and dcbz is treated as Store.
>> +        */
>> +       if ((OP_IS_STORE(type) || type == CACHEOP) && !(info->type & HW_BRK_TYPE_WRITE))
>>                  return false;
> This change seems separate from this commit?

I also thought about it but was not sure. See below ...

>>
>>          if (is_kernel_addr(regs->nip) && !(info->type & HW_BRK_TYPE_KERNEL))
>> @@ -553,7 +560,8 @@ static bool check_dawrx_constraints(struct pt_regs *regs, int type,
>>    * including extraneous exception. Otherwise return false.
>>    */
>>   static bool check_constraints(struct pt_regs *regs, struct ppc_inst instr,
>> -                             int type, int size, struct arch_hw_breakpoint *info)
>> +                             unsigned long ea, int type, int size,
>> +                             struct arch_hw_breakpoint *info)
>>   {
>>          bool in_user_range = dar_in_user_range(regs->dar, info);
>>          bool dawrx_constraints;
>> @@ -569,11 +577,10 @@ static bool check_constraints(struct pt_regs *regs, struct ppc_inst instr,
>>          }
>>
>>          if (unlikely(ppc_inst_equal(instr, ppc_inst(0)))) {
>> -               if (in_user_range)
>> -                       return true;
>> -
>> -               if (dar_in_hw_range(regs->dar, info)) {
>> -                       info->type |= HW_BRK_TYPE_EXTRANEOUS_IRQ;
>> +               if (cpu_has_feature(CPU_FTR_ARCH_31)) {
>> +                       if (dar_in_hw_range(regs->dar, info))
>> +                               return true;
>> +               } else {
>>                          return true;
> I think this would be clearer as:
> 	if (cpu_has_feature(CPU_FTR_ARCH_31) &&
> 	    !(dar_in_hw_range(regs->dar, info)))
> 		return false;
> 	else
> 		return true;

ok

> 
>>                  }
>>                  return false;
>> @@ -581,10 +588,20 @@ static bool check_constraints(struct pt_regs *regs, struct ppc_inst instr,
>>
>>          dawrx_constraints = check_dawrx_constraints(regs, type, info);
>>
>> -       if (dar_user_range_overlaps(regs->dar, size, info))
>> +       if (type == UNKNOWN) {
>> +               if (cpu_has_feature(CPU_FTR_ARCH_31)) {
>> +                       if (dar_in_hw_range(regs->dar, info))
>> +                               return dawrx_constraints;
>> +               } else {
>> +                       return dawrx_constraints;
>> +               }
>> +               return false;
>> +       }
> Similar thing here, it could be:
> 	if ((cpu_has_feature(CPU_FTR_ARCH_31)) &&
> 	    !(dar_in_hw_range(regs->dar, info)))
> 		return false;
> 	else
> 		return dawrx_constraints;

ok

>> +
>> +       if (ea_user_range_overlaps(ea, size, info))
>>                  return dawrx_constraints;
>>
>> -       if (dar_hw_range_overlaps(regs->dar, size, info)) {
>> +       if (ea_hw_range_overlaps(ea, size, info)) {
>>                  if (dawrx_constraints) {
>>                          info->type |= HW_BRK_TYPE_EXTRANEOUS_IRQ;
>>                          return true;
>> @@ -593,8 +610,17 @@ static bool check_constraints(struct pt_regs *regs, struct ppc_inst instr,
>>          return false;
>>   }
>>
>> +static int cache_op_size(void)
>> +{
>> +#ifdef __powerpc64__
>> +       return ppc64_caches.l1d.block_size;
>> +#else
>> +       return L1_CACHE_BYTES;
>> +#endif
>> +}
>> +
>>   static void get_instr_detail(struct pt_regs *regs, struct ppc_inst *instr,
>> -                            int *type, int *size, bool *larx_stcx)
>> +                            int *type, int *size, unsigned long *ea)
>>   {
>>          struct instruction_op op;
>>
>> @@ -602,16 +628,23 @@ static void get_instr_detail(struct pt_regs *regs, struct ppc_inst *instr,
>>                  return;
>>
>>          analyse_instr(&op, regs, *instr);
>> -
>> -       /*
>> -        * Set size = 8 if analyse_instr() fails. If it's a userspace
>> -        * watchpoint(valid or extraneous), we can notify user about it.
>> -        * If it's a kernel watchpoint, instruction  emulation will fail
>> -        * in stepping_handler() and watchpoint will be disabled.
>> -        */
>>          *type = GETTYPE(op.type);
>> -       *size = !(*type == UNKNOWN) ? GETSIZE(op.type) : 8;
>> -       *larx_stcx = (*type == LARX || *type == STCX);
>> +       *ea = op.ea;
>> +#ifdef __powerpc64__
>> +       if (!(regs->msr & MSR_64BIT))
>> +               *ea &= 0xffffffffUL;
>> +#endif
>> +
>> +       *size = GETSIZE(op.type);
>> +       if (*type == CACHEOP) {
>> +               *size = cache_op_size();
>> +               *ea &= ~(*size - 1);
>> +       }
> Again related to CACHEOP, should these changes be mentioned in the
> commit message?

For CACHEOP, the ea returned by analyse_instr() needs to be aligned down
to the cache block size manually. Also, for CACHEOP, the size returned by
analyse_instr() is 0 and thus the size also needs to be calculated
manually. This was missed in 27985b2a640e. So it is related to the other
changes in the patch but needs special treatment as well. I will see if I
can split it.
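
Concretely, this is roughly what the patch does after analyse_instr() for
a CACHEOP; the 128-byte L1 block size below is just an assumption for the
sake of the numbers:

	/* e.g. dcbz with ea = 0x10a0 and a 128-byte L1 block */
	size = cache_op_size();			/* 128, since GETSIZE() gives 0   */
	ea &= ~((unsigned long)size - 1);	/* 0x10a0 -> 0x1080 (block start) */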

Thanks for the review,
Ravi

