From: Alistair Popple
To: Jordan Niethe
Cc: npiggin@gmail.com, bala24@linux.ibm.com, linuxppc-dev@lists.ozlabs.org, dja@axtens.net
Subject: Re: [PATCH v5 18/21] powerpc64: Add prefixed instructions to instruction data type
Date: Mon, 06 Apr 2020 19:52:03 +1000
Message-Id: <7182352.hY56U9iWWN@townsend>
In-Reply-To: <20200406080936.7180-19-jniethe5@gmail.com>
References: <20200406080936.7180-1-jniethe5@gmail.com> <20200406080936.7180-19-jniethe5@gmail.com>

> diff --git a/arch/powerpc/include/asm/inst.h b/arch/powerpc/include/asm/inst.h
> index 70b37a35a91a..7e23e7146c66 100644
> --- a/arch/powerpc/include/asm/inst.h
> +++ b/arch/powerpc/include/asm/inst.h
> @@ -8,23 +8,67 @@
>
>  struct ppc_inst {
>  	u32 val;
> +#ifdef __powerpc64__
> +	u32 suffix;
> +#endif /* __powerpc64__ */
>  } __packed;
>
> -#define ppc_inst(x) ((struct ppc_inst){ .val = x })
> +static inline int ppc_inst_opcode(struct ppc_inst x)
> +{
> +	return x.val >> 26;
> +}
>
>  static inline u32 ppc_inst_val(struct ppc_inst x)
>  {
>  	return x.val;
>  }
>
> -static inline bool ppc_inst_len(struct ppc_inst x)
> +#ifdef __powerpc64__
> +#define ppc_inst(x) ((struct ppc_inst){ .val = (x), .suffix = 0xff })
> +
> +#define ppc_inst_prefix(x, y) ((struct ppc_inst){ .val = (x), .suffix = (y) })
> +
> +static inline u32 ppc_inst_suffix(struct ppc_inst x)
>  {
> -	return sizeof(struct ppc_inst);
> +	return x.suffix;
>  }
>
> -static inline int ppc_inst_opcode(struct ppc_inst x)
> +static inline bool ppc_inst_prefixed(struct ppc_inst x) {
> +	return ((ppc_inst_val(x) >> 26) == 1) && ppc_inst_suffix(x) != 0xff;
> +}
> +
> +static inline struct ppc_inst ppc_inst_swab(struct ppc_inst x)
>  {
> -	return x.val >> 26;
> +	return ppc_inst_prefix(swab32(ppc_inst_val(x)),
> +			       swab32(ppc_inst_suffix(x)));
> +}
> +
> +static inline struct ppc_inst ppc_inst_read(const struct ppc_inst *ptr)
> +{
> +	u32 val, suffix = 0xff;
> +	val = *(u32 *)ptr;
> +	if ((val >> 26) == 1)
> +		suffix = *((u32 *)ptr + 1);
> +	return ppc_inst_prefix(val, suffix);
> +}
> +
> +static inline void ppc_inst_write(struct ppc_inst *ptr, struct ppc_inst x)
> +{
> +	if (ppc_inst_prefixed(x)) {
> +		*(u32 *)ptr = x.val;
> +		*((u32 *)ptr + 1) = x.suffix;
> +	} else {
> +		*(u32 *)ptr = x.val;
> +	}
> +}
> +
> +#else
> +
> +#define ppc_inst(x) ((struct ppc_inst){ .val = x })
> +
> +static inline bool ppc_inst_prefixed(ppc_inst x)
> +{
> +	return 0;
>  }
>
>  static inline struct ppc_inst ppc_inst_swab(struct ppc_inst x)
> @@ -32,14 +76,31 @@ static inline struct ppc_inst ppc_inst_swab(struct ppc_inst x)
>  	return ppc_inst(swab32(ppc_inst_val(x)));
>  }
>
> +static inline u32 ppc_inst_val(struct ppc_inst x)
> +{
> +	return x.val;
> +}
> +
>  static inline struct ppc_inst ppc_inst_read(const struct ppc_inst *ptr)
>  {
>  	return *ptr;
>  }
>
> +static inline void ppc_inst_write(struct ppc_inst *ptr, struct ppc_inst x)
> +{
> +	*ptr = x;
> +}
> +
> +#endif /* __powerpc64__ */
> +
>  static inline bool ppc_inst_equal(struct ppc_inst x, struct ppc_inst y)
>  {
>  	return !memcmp(&x, &y, sizeof(struct ppc_inst));
>  }

Apologies for not picking this up earlier; I was hoping to get to the bottom of
the issue I was seeing before you sent out v5. However, the above definition of
instruction equality does not seem correct because it does not consider the
case when an instruction is not prefixed - two non-prefixed instructions should
be considered equal if the first 32-bit opcode/value is the same. Something
like:

	if (ppc_inst_prefixed(x) != ppc_inst_prefixed(y))
		return false;
	else if (ppc_inst_prefixed(x))
		return !memcmp(&x, &y, sizeof(struct ppc_inst));
	else
		return x.val == y.val;

This was causing failures in ftrace_modify_code() as it would falsely detect
two non-prefixed instructions as unequal due to differences in the suffix.

- Alistair

> +static inline int ppc_inst_len(struct ppc_inst x)
> +{
> +	return (ppc_inst_prefixed(x)) ? 8 : 4;
> +}
> +
>  #endif /* _ASM_INST_H */
> diff --git a/arch/powerpc/include/asm/kprobes.h b/arch/powerpc/include/asm/kprobes.h
> index 66b3f2983b22..4fc0e15e23a5 100644
> --- a/arch/powerpc/include/asm/kprobes.h
> +++ b/arch/powerpc/include/asm/kprobes.h
> @@ -43,7 +43,7 @@ extern kprobe_opcode_t optprobe_template_ret[];
>  extern kprobe_opcode_t optprobe_template_end[];
>
>  /* Fixed instruction size for powerpc */
> -#define MAX_INSN_SIZE		1
> +#define MAX_INSN_SIZE		2
>  #define MAX_OPTIMIZED_LENGTH	sizeof(kprobe_opcode_t)	/* 4 bytes */
>  #define MAX_OPTINSN_SIZE	(optprobe_template_end - optprobe_template_entry)
>  #define RELATIVEJUMP_SIZE	sizeof(kprobe_opcode_t)	/* 4 bytes */
> diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
> index c0a35e4586a5..5a3f486ddf02 100644
> --- a/arch/powerpc/include/asm/uaccess.h
> +++ b/arch/powerpc/include/asm/uaccess.h
> @@ -105,11 +105,34 @@ static inline int __access_ok(unsigned long addr, unsigned long size,
>  #define __put_user_inatomic(x, ptr) \
>  	__put_user_nosleep((__typeof__(*(ptr)))(x), (ptr), sizeof(*(ptr)))
>
> -#define __get_user_instr(x, ptr) \
> -	__get_user_nocheck((x).val, (u32 *)(ptr), sizeof(u32), true)
> +#define __get_user_instr(x, ptr) \
> +({ \
> +	long __gui_ret = 0; \
> +	unsigned int prefix, suffix; \
> +	__gui_ret = __get_user(prefix, (unsigned int __user *)ptr); \
> +	if (!__gui_ret && (prefix >> 26) == 1) { \
> +		__gui_ret = __get_user(suffix, (unsigned int __user *)ptr + 1); \
> +		(x) = ppc_inst_prefix(prefix, suffix); \
> +	} else { \
> +		(x) = ppc_inst(prefix); \
> +	} \
> +	__gui_ret; \
> +})
> +
> +#define __get_user_instr_inatomic(x, ptr) \
> +({ \
> +	long __gui_ret = 0; \
> +	unsigned int prefix, suffix; \
> +	__gui_ret = __get_user_inatomic(prefix, (unsigned int __user *)ptr); \
> +	if (!__gui_ret && (prefix >> 26) == 1) { \
> +		__gui_ret = __get_user_inatomic(suffix, (unsigned int __user *)ptr + 1); \
> +		(x) = ppc_inst_prefix(prefix, suffix); \
> +	} else { \
> +		(x) = ppc_inst(prefix); \
> +	} \
> +	__gui_ret; \
> +})
>
> -#define __get_user_instr_inatomic(x, ptr) \
> -	__get_user_nosleep((x).val, (u32 *)(ptr), sizeof(u32))
>  extern long __put_user_bad(void);
>
>  /*
> diff --git a/arch/powerpc/include/asm/uprobes.h b/arch/powerpc/include/asm/uprobes.h
> index 7e3b329ba2d3..5bf65f5d44a9 100644
> --- a/arch/powerpc/include/asm/uprobes.h
> +++ b/arch/powerpc/include/asm/uprobes.h
> @@ -15,7 +15,7 @@
>
>  typedef ppc_opcode_t uprobe_opcode_t;
>
> -#define MAX_UINSN_BYTES		4
> +#define MAX_UINSN_BYTES		8
>  #define UPROBE_XOL_SLOT_BYTES	(MAX_UINSN_BYTES)
>
>  /* The following alias is needed for reference from arch-agnostic code */
> diff --git a/arch/powerpc/kernel/optprobes.c b/arch/powerpc/kernel/optprobes.c
> index 684640b8fa2e..689daf430161 100644
> --- a/arch/powerpc/kernel/optprobes.c
> +++ b/arch/powerpc/kernel/optprobes.c
> @@ -159,38 +159,38 @@ void patch_imm32_load_insns(unsigned int val, kprobe_opcode_t *addr)
>
>  /*
>   * Generate instructions to load provided immediate 64-bit value
> - * to register 'r3' and patch these instructions at 'addr'.
> + * to register 'reg' and patch these instructions at 'addr'.
>   */
> -void patch_imm64_load_insns(unsigned long val, kprobe_opcode_t *addr)
> +void patch_imm64_load_insns(unsigned long val, int reg, kprobe_opcode_t *addr)
>  {
> -	/* lis r3,(op)@highest */
> -	patch_instruction((struct ppc_inst *)addr, ppc_inst(PPC_INST_ADDIS | ___PPC_RT(3) |
> +	/* lis reg,(op)@highest */
> +	patch_instruction((struct ppc_inst *)addr, ppc_inst(PPC_INST_ADDIS | ___PPC_RT(reg) |
>  			  ((val >> 48) & 0xffff)));
>  	addr++;
>
> -	/* ori r3,r3,(op)@higher */
> -	patch_instruction((struct ppc_inst *)addr, ppc_inst(PPC_INST_ORI | ___PPC_RA(3) |
> -			  ___PPC_RS(3) | ((val >> 32) & 0xffff)));
> +	/* ori reg,reg,(op)@higher */
> +	patch_instruction((struct ppc_inst *)addr, ppc_inst(PPC_INST_ORI | ___PPC_RA(reg) |
> +			  ___PPC_RS(reg) | ((val >> 32) & 0xffff)));
>  	addr++;
>
> -	/* rldicr r3,r3,32,31 */
> -	patch_instruction((struct ppc_inst *)addr, ppc_inst(PPC_INST_RLDICR | ___PPC_RA(3) |
> -			  ___PPC_RS(3) | __PPC_SH64(32) | __PPC_ME64(31)));
> +	/* rldicr reg,reg,32,31 */
> +	patch_instruction((struct ppc_inst *)addr, ppc_inst(PPC_INST_RLDICR | ___PPC_RA(reg) |
> +			  ___PPC_RS(reg) | __PPC_SH64(32) | __PPC_ME64(31)));
>  	addr++;
>
> -	/* oris r3,r3,(op)@h */
> -	patch_instruction((struct ppc_inst *)addr, ppc_inst(PPC_INST_ORIS | ___PPC_RA(3) |
> -			  ___PPC_RS(3) | ((val >> 16) & 0xffff)));
> +	/* oris reg,reg,(op)@h */
> +	patch_instruction((struct ppc_inst *)addr, ppc_inst(PPC_INST_ORIS | ___PPC_RA(reg) |
> +			  ___PPC_RS(reg) | ((val >> 16) & 0xffff)));
>  	addr++;
>
> -	/* ori r3,r3,(op)@l */
> -	patch_instruction((struct ppc_inst *)addr, ppc_inst(PPC_INST_ORI | ___PPC_RA(3) |
> -			  ___PPC_RS(3) | (val & 0xffff)));
> +	/* ori reg,reg,(op)@l */
> +	patch_instruction((struct ppc_inst *)addr, ppc_inst(PPC_INST_ORI | ___PPC_RA(reg) |
> +			  ___PPC_RS(reg) | (val & 0xffff)));
>  }
>
>  int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *p)
>  {
> -	struct ppc_inst branch_op_callback, branch_emulate_step;
> +	struct ppc_inst branch_op_callback, branch_emulate_step, temp;
>  	kprobe_opcode_t *op_callback_addr, *emulate_step_addr, *buff;
>  	long b_offset;
>  	unsigned long nip, size;
> @@ -240,7 +240,7 @@ int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *p)
>  	 * Fixup the template with instructions to:
>  	 * 1. load the address of the actual probepoint
>  	 */
> -	patch_imm64_load_insns((unsigned long)op, buff + TMPL_OP_IDX);
> +	patch_imm64_load_insns((unsigned long)op, 3, buff + TMPL_OP_IDX);
>
>  	/*
>  	 * 2. branch to optimized_callback() and emulate_step()
> @@ -271,7 +271,11 @@ int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *p)
>  	/*
>  	 * 3. load instruction to be emulated into relevant register, and
>  	 */
> -	patch_imm32_load_insns(*p->ainsn.insn, buff + TMPL_INSN_IDX);
> +	temp = ppc_inst_read((struct ppc_inst *)p->ainsn.insn);
> +	patch_imm64_load_insns(ppc_inst_val(temp) |
> +			       ((u64)ppc_inst_suffix(temp) << 32),
> +			       4,
> +			       buff + TMPL_INSN_IDX);
>
>  	/*
>  	 * 4. branch back from trampoline
> diff --git a/arch/powerpc/kernel/optprobes_head.S b/arch/powerpc/kernel/optprobes_head.S
> index cf383520843f..ff8ba4d3824d 100644
> --- a/arch/powerpc/kernel/optprobes_head.S
> +++ b/arch/powerpc/kernel/optprobes_head.S
> @@ -94,6 +94,9 @@ optprobe_template_insn:
>  	/* 2, Pass instruction to be emulated in r4 */
>  	nop
>  	nop
> +	nop
> +	nop
> +	nop
>
>  	.global optprobe_template_call_emulate
>  optprobe_template_call_emulate:
> diff --git a/arch/powerpc/kernel/trace/ftrace.c b/arch/powerpc/kernel/trace/ftrace.c
> index e78742613b36..16041a5c86d5 100644
> --- a/arch/powerpc/kernel/trace/ftrace.c
> +++ b/arch/powerpc/kernel/trace/ftrace.c
> @@ -41,11 +41,35 @@
>  #define	NUM_FTRACE_TRAMPS	8
>  static unsigned long ftrace_tramps[NUM_FTRACE_TRAMPS];
>
> +#ifdef __powerpc64__
>  static long
>  probe_kernel_read_inst(struct ppc_inst *inst, const void *src)
>  {
> -	return probe_kernel_read((void *)inst, src, MCOUNT_INSN_SIZE);
> +	u32 val, suffix = 0;
> +	long err;
> +
> +	err = probe_kernel_read((void *)&val,
> +				src, sizeof(val));
> +	if (err)
> +		return err;
> +
> +	if ((val >> 26) == 1)
> +		err = probe_kernel_read((void *)&suffix,
> +					src + 4, MCOUNT_INSN_SIZE);
> +	if (err)
> +		return err;
> +
> +	*inst = ppc_inst_prefix(val, suffix);
> +
> +	return 0;
>  }
> +#else
> +static long
> +probe_kernel_read_inst(struct ppc_inst *inst, const void *src)
> +{
> +	return probe_kernel_read((void *)inst, src, MCOUNT_INSN_SIZE)
> +}
> +#endif
>
>  static struct ppc_inst
>  ftrace_call_replace(unsigned long ip, unsigned long addr, int link)
> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
> index c329ad657302..b4007e03d8fa 100644
> --- a/arch/powerpc/lib/code-patching.c
> +++ b/arch/powerpc/lib/code-patching.c
> @@ -24,12 +24,19 @@ static int __patch_instruction(struct ppc_inst *exec_addr, struct ppc_inst instr
>  {
>  	int err = 0;
>
> -	__put_user_asm(ppc_inst_val(instr), patch_addr, err, "stw");
> -	if (err)
> -		return err;
> -
> -	asm ("dcbst 0, %0; sync; icbi 0,%1; sync; isync" :: "r" (patch_addr),
> -	     "r" (exec_addr));
> +	if (!ppc_inst_prefixed(instr)) {
> +		__put_user_asm(ppc_inst_val(instr), patch_addr, err, "stw");
> +		if (err)
> +			return err;
> +		asm ("dcbst 0, %0; sync; icbi 0,%1; sync; isync" :: "r" (patch_addr),
> +		     "r" (exec_addr));
> +	} else {
> +		__put_user_asm((u64)ppc_inst_suffix(instr) << 32 | ppc_inst_val(instr), patch_addr, err, "std");
> +		if (err)
> +			return err;
> +		asm ("dcbst 0, %0; sync; icbi 0,%1; sync; isync" :: "r" (patch_addr),
> +		     "r" (exec_addr));
> +	}
>
>  	return 0;
>  }
> diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
> index f00dd13b1c3c..5519cec83cc8 100644
> --- a/arch/powerpc/lib/feature-fixups.c
> +++ b/arch/powerpc/lib/feature-fixups.c
> @@ -84,12 +84,13 @@ static int patch_feature_section(unsigned long value, struct fixup_entry *fcur)
>  	src = alt_start;
>  	dest = start;
>
> -	for (; src < alt_end; src++, dest++) {
> +	for (; src < alt_end; src = (void *)src + ppc_inst_len(ppc_inst_read(src)),
> +	     (dest = (void *)dest + ppc_inst_len(ppc_inst_read(dest)))) {
>  		if (patch_alt_instruction(src, dest, alt_start, alt_end))
>  			return 1;
>  	}
>
> -	for (; dest < end; dest++)
> +	for (; dest < end; dest = (void *)dest + ppc_inst_len(ppc_inst(PPC_INST_NOP)))
>  		raw_patch_instruction(dest, ppc_inst(PPC_INST_NOP));
>
>  	return 0;
> diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
> index 52ddd3122dc8..8b285bf11218 100644
> --- a/arch/powerpc/lib/sstep.c
> +++ b/arch/powerpc/lib/sstep.c
> @@ -1169,10 +1169,12 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
>  	unsigned long int imm;
>  	unsigned long int val, val2;
>  	unsigned int mb, me, sh;
> -	unsigned int word;
> +	unsigned int word, suffix;
>  	long ival;
>
>  	word = ppc_inst_val(instr);
> +	suffix = ppc_inst_suffix(instr);
> +
>  	op->type = COMPUTE;
>
>  	opcode = word >> 26;
> diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
> index 6f3bcdcfc9c7..b704aebb099a 100644
> --- a/arch/powerpc/xmon/xmon.c
> +++ b/arch/powerpc/xmon/xmon.c
> @@ -761,8 +761,8 @@ static int xmon_bpt(struct pt_regs *regs)
>
>  	/* Are we at the trap at bp->instr[1] for some bp? */
>  	bp = in_breakpoint_table(regs->nip, &offset);
> -	if (bp != NULL && offset == 4) {
> -		regs->nip = bp->address + 4;
> +	if (bp != NULL && (offset == 4 || offset == 8)) {
> +		regs->nip = bp->address + offset;
>  		atomic_dec(&bp->ref_count);
>  		return 1;
>  	}
> @@ -863,7 +863,7 @@ static struct bpt *in_breakpoint_table(unsigned long nip, unsigned long *offp)
>  	if (off >= sizeof(bpt_table))
>  		return NULL;
>  	*offp = off % BPT_SIZE;
> -	if (*offp != 0 && *offp != 4)
> +	if (*offp != 0 && *offp != 4 && *offp != 8)
>  		return NULL;
>  	return bpts + (off / BPT_SIZE);
>  }
> diff --git a/arch/powerpc/xmon/xmon_bpts.S b/arch/powerpc/xmon/xmon_bpts.S
> index ebb2dbc70ca8..09058eb6abbd 100644
> --- a/arch/powerpc/xmon/xmon_bpts.S
> +++ b/arch/powerpc/xmon/xmon_bpts.S
> @@ -3,6 +3,8 @@
>  #include
>  #include "xmon_bpts.h"
>
> +/* Prefixed instructions can not cross 64 byte boundaries */
> +.align 6
>  .global bpt_table
>  bpt_table:
> -	.space	NBPTS * 8
> +	.space	NBPTS * 16
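
For completeness, the comparison suggested above wrapped into the existing
helper looks roughly like the sketch below. This is only an untested sketch
against this series (it reuses the ppc_inst_prefixed()/ppc_inst_val()
accessors from the hunk quoted earlier), but it shows the intent: prefixed
instructions compare both words, while non-prefixed instructions compare only
the first word so a stale suffix can no longer cause a spurious mismatch:

	static inline bool ppc_inst_equal(struct ppc_inst x, struct ppc_inst y)
	{
		/* Instructions of different lengths can never be equal */
		if (ppc_inst_prefixed(x) != ppc_inst_prefixed(y))
			return false;

		/* Prefixed: compare the prefix word and the suffix */
		if (ppc_inst_prefixed(x))
			return !memcmp(&x, &y, sizeof(struct ppc_inst));

		/* Non-prefixed: only the first 32-bit word is meaningful */
		return ppc_inst_val(x) == ppc_inst_val(y);
	}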