From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752020AbcDULRp (ORCPT );
	Thu, 21 Apr 2016 07:17:45 -0400
Received: from e23smtp01.au.ibm.com ([202.81.31.143]:33570 "EHLO
	e23smtp01.au.ibm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751807AbcDULRo (ORCPT );
	Thu, 21 Apr 2016 07:17:44 -0400
X-IBM-Helo: d23dlp03.au.ibm.com
X-IBM-MailFrom: naveen.n.rao@linux.vnet.ibm.com
X-IBM-RcptTo: linux-kernel@vger.kernel.org
Date: Thu, 21 Apr 2016 16:45:31 +0530
From: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>
To: Anju T <anju@linux.vnet.ibm.com>
Cc: linux-kernel@vger.kernel.org, mpe@ellerman.id.au,
	maddy@linux.vnet.ibm.com, jolsa@redhat.com, dsahern@gmail.com,
	acme@redhat.com, sukadev@linux.vnet.ibm.com,
	hemant@linux.vnet.ibm.com, linuxppc-dev@lists.ozlabs.org
Subject: Re: [PATCH V11 2/4] perf/powerpc: add support for sampling intr machine state
Message-ID: <20160421111531.GD28637@naverao1-tp.ibm.com>
References: <1455944568-7231-1-git-send-email-anju@linux.vnet.ibm.com>
 <1455944568-7231-3-git-send-email-anju@linux.vnet.ibm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1455944568-7231-3-git-send-email-anju@linux.vnet.ibm.com>
User-Agent: Mutt/1.5.24 (2015-08-30)
X-TM-AS-MML: disable
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 16042111-1618-0000-0000-0000457964FA
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2016/02/20 10:32AM, Anju T wrote:
> The perf infrastructure uses a bit mask to find out valid
> registers to display. Define a register mask for supported
> registers defined in asm/perf_regs.h. The bit positions also
> correspond to register IDs, which are used by the perf
> infrastructure to fetch the register values. CONFIG_HAVE_PERF_REGS
> enables sampling of the interrupted machine state.

Please also update Documentation/features/perf/perf-regs/arch-support.txt
as part of this patch.
> 
> Signed-off-by: Anju T <anju@linux.vnet.ibm.com>
> ---
>  arch/powerpc/Kconfig          |  1 +
>  arch/powerpc/perf/Makefile    |  1 +
>  arch/powerpc/perf/perf_regs.c | 91 +++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 93 insertions(+)
>  create mode 100644 arch/powerpc/perf/perf_regs.c
> 
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index 9a7057e..c4ce60d 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -119,6 +119,7 @@ config PPC
>  	select GENERIC_ATOMIC64 if PPC32
>  	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
>  	select HAVE_PERF_EVENTS
> +	select HAVE_PERF_REGS
>  	select HAVE_REGS_AND_STACK_ACCESS_API
>  	select HAVE_HW_BREAKPOINT if PERF_EVENTS && PPC_BOOK3S_64
>  	select ARCH_WANT_IPC_PARSE_VERSION
> diff --git a/arch/powerpc/perf/Makefile b/arch/powerpc/perf/Makefile
> index f9c083a..2f2d3d2 100644
> --- a/arch/powerpc/perf/Makefile
> +++ b/arch/powerpc/perf/Makefile
> @@ -8,6 +8,7 @@ obj64-$(CONFIG_PPC_PERF_CTRS)	+= power4-pmu.o ppc970-pmu.o power5-pmu.o \
>  				   power8-pmu.o
>  obj32-$(CONFIG_PPC_PERF_CTRS)	+= mpc7450-pmu.o
>  
> +obj-$(CONFIG_PERF_EVENTS)	+= perf_regs.o
>  obj-$(CONFIG_FSL_EMB_PERF_EVENT) += core-fsl-emb.o
>  obj-$(CONFIG_FSL_EMB_PERF_EVENT_E500) += e500-pmu.o e6500-pmu.o
>  
> diff --git a/arch/powerpc/perf/perf_regs.c b/arch/powerpc/perf/perf_regs.c
> new file mode 100644
> index 0000000..ae0759c
> --- /dev/null
> +++ b/arch/powerpc/perf/perf_regs.c
> @@ -0,0 +1,91 @@

You may want to add a copyright header here.

> +#include <linux/errno.h>
> +#include <linux/kernel.h>
> +#include <linux/sched.h>
> +#include <linux/perf_event.h>
> +#include <linux/bug.h>
> +#include <linux/stddef.h>
> +#include <asm/ptrace.h>
> +#include <asm/perf_regs.h>
> +
> +#define PT_REGS_OFFSET(id, r) [id] = offsetof(struct pt_regs, r)

Also, run this through checkpatch.pl. There is a whitespace issue
reported here.
> +
> +#define REG_RESERVED (~((1ULL << PERF_REG_POWERPC_MAX) - 1))
> +
> +static unsigned int pt_regs_offset[PERF_REG_POWERPC_MAX] = {
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R0, gpr[0]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R1, gpr[1]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R2, gpr[2]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R3, gpr[3]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R4, gpr[4]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R5, gpr[5]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R6, gpr[6]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R7, gpr[7]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R8, gpr[8]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R9, gpr[9]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R10, gpr[10]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R11, gpr[11]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R12, gpr[12]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R13, gpr[13]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R14, gpr[14]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R15, gpr[15]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R16, gpr[16]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R17, gpr[17]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R18, gpr[18]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R19, gpr[19]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R20, gpr[20]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R21, gpr[21]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R22, gpr[22]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R23, gpr[23]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R24, gpr[24]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R25, gpr[25]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R26, gpr[26]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R27, gpr[27]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R28, gpr[28]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R29, gpr[29]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R30, gpr[30]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_R31, gpr[31]),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_NIP, nip),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_MSR, msr),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_ORIG_R3, orig_gpr3),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_CTR, ctr),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_LNK, link),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_XER, xer),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_CCR, ccr),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_SOFTE, softe),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_TRAP, trap),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_DAR, dar),
> +	PT_REGS_OFFSET(PERF_REG_POWERPC_DSISR, dsisr),
> +};
> +
> +u64 perf_reg_value(struct pt_regs *regs, int idx)
> +{
> +	if (WARN_ON_ONCE(idx >= PERF_REG_POWERPC_MAX))
> +		return 0;
> +
> +	return regs_get_register(regs, pt_regs_offset[idx]);
> +}
> +
> +int perf_reg_validate(u64 mask)
> +{
> +	if (!mask || mask & REG_RESERVED)
> +		return -EINVAL;
> +	return 0;
> +}
> +
> +u64 perf_reg_abi(struct task_struct *task)
> +{
> +#ifdef __powerpc64__
> +	if (!test_tsk_thread_flag(task, TIF_32BIT))
> +		return PERF_SAMPLE_REGS_ABI_64;
> +	else

The 'else' above is redundant.

- Naveen

> +#endif
> +	return PERF_SAMPLE_REGS_ABI_32;
> +}
> +
> +void perf_get_regs_user(struct perf_regs *regs_user,
> +			struct pt_regs *regs,
> +			struct pt_regs *regs_user_copy)
> +{
> +	regs_user->regs = task_pt_regs(current);
> +	regs_user->abi = perf_reg_abi(current);
> +}
> -- 
> 2.1.0
> 