* [PATCH v3 0/8] ftrace with regs + live patching for ppc64 LE (ABI v2)
@ 2015-10-26 17:49 Torsten Duwe
  2015-10-26 17:56 ` [PATCH v3 2/8] ppc use ftrace_modify_all_code default Torsten Duwe
                   ` (8 more replies)
  0 siblings, 9 replies; 11+ messages in thread
From: Torsten Duwe @ 2015-10-26 17:49 UTC (permalink / raw)
  To: Steven Rostedt, Michael Ellerman
  Cc: Jiri Kosina, linuxppc-dev, linux-kernel, live-patching

Hi all,

here is the current status of ftrace with regs, trace ops and live patching
for ppc64le. It seems I broke the ftrace graph caller and I spent most of
last week trying to fix it; Steven, maybe you could have a look? I started
out with -mprofile-kernel and have now found that the ordinary -pg is very
different. -mprofile-kernel only does the very bare minimal prologue
(set TOC, save LR) and then calls _mcount, which poses some problems.
I managed to get them resolved up to the point of the graph return ...
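
For reference, here is roughly what the two call-site flavours look like
on ppc64le ELFv2. This is an illustration only, not part of the series;
the exact sequences depend on the gcc version, but patch 1 relies on the
mflr/std pair sitting directly before the bl:

/*
 * Sketch: compile e.g. with -pg vs. -pg -mprofile-kernel and compare
 * the generated code (simplified).
 *
 * Plain -pg calls _mcount only after the full prologue; for module
 * code the TOC reload "ld r2,40(r1)" follows the bl, which is why the
 * existing NOP-out path patches in a "b +8" to jump over that load.
 *
 * -mprofile-kernel emits only the ELFv2 global-entry TOC setup and the
 * LR save before the call, so nearly all register state is still live
 * at _mcount:
 *
 *	addis	r2,r12,(.TOC.-foo)@ha
 *	addi	r2,r2,(.TOC.-foo)@l
 *	mflr	r0
 *	std	r0,16(r1)
 *	bl	_mcount
 */
int foo(int x)
{
	return x + 1;
}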

I tested intensively with the ftrace self tests, and, without the graph
caller, this set passes all of them on ppc64le. I tried not to break BE,
but may have missed an ifdef or two.

Patch 2 (ftrace_modify_all_code default) is an independent prerequisite;
I would even call it a fix -- please consider applying it even if you
don't like the rest.

Patch 5 has proven to be very useful during development; as mentioned earlier,
many of these functions may get called during a recoverable fault. Such a
recursion would probably terminate if all goes well, but I'd rather be
defensive here.

Torsten Duwe (8):
  ppc64le FTRACE_WITH_REGS implementation
  ppc use ftrace_modify_all_code default
  ppc64 ftrace_with_regs configuration variables
  ppc64 ftrace_with_regs: spare early boot and low level
  ppc64 ftrace: disable profiling for some functions
  ppc64 ftrace: disable profiling for some files
  Implement kernel live patching for ppc64le (ABIv2)
  Enable LIVEPATCH to be configured on ppc64le and add livepatch.o if it
    is selected.

 arch/powerpc/Kconfig                 |   7 ++
 arch/powerpc/Makefile                |   7 ++
 arch/powerpc/include/asm/ftrace.h    |   5 ++
 arch/powerpc/include/asm/livepatch.h |  27 +++++++
 arch/powerpc/kernel/Makefile         |  13 +--
 arch/powerpc/kernel/entry_64.S       | 153 ++++++++++++++++++++++++++++++++++-
 arch/powerpc/kernel/ftrace.c         |  88 +++++++++++++++-----
 arch/powerpc/kernel/livepatch.c      |  20 +++++
 arch/powerpc/kernel/module_64.c      |  39 ++++++++-
 arch/powerpc/kernel/process.c        |   2 +-
 arch/powerpc/lib/Makefile            |   4 +-
 arch/powerpc/mm/fault.c              |   2 +-
 arch/powerpc/mm/hash_utils_64.c      |  18 ++---
 arch/powerpc/mm/hugetlbpage-hash64.c |   2 +-
 arch/powerpc/mm/hugetlbpage.c        |   4 +-
 arch/powerpc/mm/mem.c                |   2 +-
 arch/powerpc/mm/pgtable_64.c         |   2 +-
 arch/powerpc/mm/slb.c                |   6 +-
 arch/powerpc/mm/slice.c              |   8 +-
 kernel/trace/Kconfig                 |   5 ++
 20 files changed, 359 insertions(+), 55 deletions(-)
 create mode 100644 arch/powerpc/include/asm/livepatch.h
 create mode 100644 arch/powerpc/kernel/livepatch.c

-- 
1.8.5.6


* [PATCH v3 2/8] ppc use ftrace_modify_all_code default
  2015-10-26 17:49 [PATCH v3 0/8] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
@ 2015-10-26 17:56 ` Torsten Duwe
  2015-10-26 17:57 ` [PATCH v3 0/8] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Torsten Duwe @ 2015-10-26 17:56 UTC (permalink / raw)
  To: Steven Rostedt, Michael Ellerman
  Cc: Jiri Kosina, linuxppc-dev, linux-kernel, live-patching

Convert ppc's arch_ftrace_update_code from its own copy of the function
to the generic default implementation (without stop_machine() --
our instructions are properly aligned and the replacements are atomic ;)

With this we gain error checking and the much-needed function_trace_op
handling.
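
For reference, the generic helper looks roughly like this (abridged and
paraphrased from kernel/trace/ftrace.c of this era; details differ
between kernel versions). Note the return-code checks and the
function_trace_op switch that the removed arch copy lacked:

void ftrace_modify_all_code(int command)
{
	int update = command & FTRACE_UPDATE_TRACE_FUNC;
	int err = 0;

	/* Switch to the list func first so changing handlers is safe. */
	if (update) {
		err = ftrace_update_ftrace_func(ftrace_ops_list_func);
		if (FTRACE_WARN_ON(err))
			return;
	}

	if (command & FTRACE_UPDATE_CALLS)
		ftrace_replace_code(1);
	else if (command & FTRACE_DISABLE_CALLS)
		ftrace_replace_code(0);

	if (update && ftrace_trace_function != ftrace_ops_list_func) {
		/* Publish the ftrace_ops argument for the trampoline. */
		function_trace_op = set_function_trace_op;
		err = ftrace_update_ftrace_func(ftrace_trace_function);
		if (FTRACE_WARN_ON(err))
			return;
	}

	if (command & FTRACE_START_FUNC_RET)
		err = ftrace_enable_ftrace_graph_caller();
	else if (command & FTRACE_STOP_FUNC_RET)
		err = ftrace_disable_ftrace_graph_caller();

	FTRACE_WARN_ON(err);
}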

Signed-off-by: Torsten Duwe <duwe@suse.de>
---
 arch/powerpc/kernel/ftrace.c | 16 ++++------------
 1 file changed, 4 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c
index 310137f..e419c7b 100644
--- a/arch/powerpc/kernel/ftrace.c
+++ b/arch/powerpc/kernel/ftrace.c
@@ -511,20 +511,12 @@ void ftrace_replace_code(int enable)
 	}
 }
 
+/* Use the default ftrace_modify_all_code, but without
+ * stop_machine().
+ */
 void arch_ftrace_update_code(int command)
 {
-	if (command & FTRACE_UPDATE_CALLS)
-		ftrace_replace_code(1);
-	else if (command & FTRACE_DISABLE_CALLS)
-		ftrace_replace_code(0);
-
-	if (command & FTRACE_UPDATE_TRACE_FUNC)
-		ftrace_update_ftrace_func(ftrace_trace_function);
-
-	if (command & FTRACE_START_FUNC_RET)
-		ftrace_enable_ftrace_graph_caller();
-	else if (command & FTRACE_STOP_FUNC_RET)
-		ftrace_disable_ftrace_graph_caller();
+	ftrace_modify_all_code(command);
 }
 
 int __init ftrace_dyn_arch_init(void)
-- 
1.8.5.6



* Re: [PATCH v3 0/8] ftrace with regs + live patching for ppc64 LE (ABI v2)
  2015-10-26 17:49 [PATCH v3 0/8] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
  2015-10-26 17:56 ` [PATCH v3 2/8] ppc use ftrace_modify_all_code default Torsten Duwe
@ 2015-10-26 17:57 ` Torsten Duwe
  2015-10-27  6:20   ` kbuild test robot
  2015-10-26 17:58 ` [PATCH v3 4/8] ppc64 ftrace_with_regs: spare early boot and low level Torsten Duwe
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 11+ messages in thread
From: Torsten Duwe @ 2015-10-26 17:57 UTC (permalink / raw)
  To: Steven Rostedt, Michael Ellerman
  Cc: Jiri Kosina, linuxppc-dev, linux-kernel, live-patching

  * Makefile:
    - globally use -mprofile-kernel if it is configured.
  * arch/powerpc/Kconfig / kernel/trace/Kconfig:
    - declare that ppc64 has HAVE_MPROFILE_KERNEL and
      HAVE_DYNAMIC_FTRACE_WITH_REGS, and use them.

Signed-off-by: Torsten Duwe <duwe@suse.de>
---
 arch/powerpc/Kconfig  | 2 ++
 arch/powerpc/Makefile | 7 +++++++
 kernel/trace/Kconfig  | 5 +++++
 3 files changed, 14 insertions(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 9a7057e..0e6011c 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -97,8 +97,10 @@ config PPC
 	select OF_RESERVED_MEM
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_DYNAMIC_FTRACE
+	select HAVE_DYNAMIC_FTRACE_WITH_REGS
 	select HAVE_FUNCTION_TRACER
 	select HAVE_FUNCTION_GRAPH_TRACER
+	select HAVE_MPROFILE_KERNEL
 	select SYSCTL_EXCEPTION_TRACE
 	select ARCH_WANT_OPTIONAL_GPIOLIB
 	select VIRT_TO_BUS if !PPC64
diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index b9b4af2..25d0034 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -133,6 +133,13 @@ else
 CFLAGS-$(CONFIG_GENERIC_CPU) += -mcpu=powerpc64
 endif
 
+ifeq ($(CONFIG_PPC64),y)
+ifdef CONFIG_HAVE_MPROFILE_KERNEL
+CC_FLAGS_FTRACE	:= -pg $(call cc-option,-mprofile-kernel)
+KBUILD_CPPFLAGS	+= -DCC_USING_MPROFILE_KERNEL
+endif
+endif
+
 CFLAGS-$(CONFIG_CELL_CPU) += $(call cc-option,-mcpu=cell)
 CFLAGS-$(CONFIG_POWER4_CPU) += $(call cc-option,-mcpu=power4)
 CFLAGS-$(CONFIG_POWER5_CPU) += $(call cc-option,-mcpu=power5)
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index 1153c43..dbcb635 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -52,6 +52,11 @@ config HAVE_FENTRY
 	help
 	  Arch supports the gcc options -pg with -mfentry
 
+config HAVE_MPROFILE_KERNEL
+	bool
+	help
+	  Arch supports the gcc options -pg with -mprofile-kernel
+
 config HAVE_C_RECORDMCOUNT
 	bool
 	help
-- 
1.8.5.6



* [PATCH v3 4/8] ppc64 ftrace_with_regs: spare early boot and low level
  2015-10-26 17:49 [PATCH v3 0/8] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
  2015-10-26 17:56 ` [PATCH v3 2/8] ppc use ftrace_modify_all_code default Torsten Duwe
  2015-10-26 17:57 ` [PATCH v3 0/8] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
@ 2015-10-26 17:58 ` Torsten Duwe
  2015-10-26 17:59 ` [PATCH v3 5/8] ppc64 ftrace: disable profiling for some functions Torsten Duwe
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Torsten Duwe @ 2015-10-26 17:58 UTC (permalink / raw)
  To: Steven Rostedt, Michael Ellerman
  Cc: Jiri Kosina, linuxppc-dev, linux-kernel, live-patching

Using -mprofile-kernel on early boot code not only confuses the
checker but is also useless, as the infrastructure is not yet in
place. Proceed as with -pg (remove it from CFLAGS); do the same for
time.o and the ftrace code itself.

  * arch/powerpc/kernel/Makefile:
    - remove -mprofile-kernel from low level and boot code objects'
      CFLAGS for FUNCTION_TRACER configurations.

Signed-off-by: Torsten Duwe <duwe@suse.de>
---
 arch/powerpc/kernel/Makefile | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index ba33693..0f417d5 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -16,14 +16,14 @@ endif
 
 ifdef CONFIG_FUNCTION_TRACER
 # Do not trace early boot code
-CFLAGS_REMOVE_cputable.o = -pg -mno-sched-epilog
-CFLAGS_REMOVE_prom_init.o = -pg -mno-sched-epilog
-CFLAGS_REMOVE_btext.o = -pg -mno-sched-epilog
-CFLAGS_REMOVE_prom.o = -pg -mno-sched-epilog
+CFLAGS_REMOVE_cputable.o = -pg -mno-sched-epilog -mprofile-kernel
+CFLAGS_REMOVE_prom_init.o = -pg -mno-sched-epilog -mprofile-kernel
+CFLAGS_REMOVE_btext.o = -pg -mno-sched-epilog -mprofile-kernel
+CFLAGS_REMOVE_prom.o = -pg -mno-sched-epilog -mprofile-kernel
 # do not trace tracer code
-CFLAGS_REMOVE_ftrace.o = -pg -mno-sched-epilog
+CFLAGS_REMOVE_ftrace.o = -pg -mno-sched-epilog -mprofile-kernel
 # timers used by tracing
-CFLAGS_REMOVE_time.o = -pg -mno-sched-epilog
+CFLAGS_REMOVE_time.o = -pg -mno-sched-epilog -mprofile-kernel
 endif
 
 obj-y				:= cputable.o ptrace.o syscalls.o \
-- 
1.8.5.6



* [PATCH v3 5/8] ppc64 ftrace: disable profiling for some functions
  2015-10-26 17:49 [PATCH v3 0/8] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
                   ` (2 preceding siblings ...)
  2015-10-26 17:58 ` [PATCH v3 4/8] ppc64 ftrace_with_regs: spare early boot and low level Torsten Duwe
@ 2015-10-26 17:59 ` Torsten Duwe
  2015-10-26 18:01 ` [PATCH v3 6/8] ppc64 ftrace: disable profiling for some files Torsten Duwe
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Torsten Duwe @ 2015-10-26 17:59 UTC (permalink / raw)
  To: Steven Rostedt, Michael Ellerman
  Cc: Jiri Kosina, linuxppc-dev, linux-kernel, live-patching

At least POWER7/8 have MMUs that don't completely autoload;
a normal, recoverable memory fault might pass through these functions.
If a dynamic tracer's handler triggers such a fault while one of these
functions is itself traced with -mprofile-kernel, the fault path calls
back into the tracer, which may fault again -- an endless recursion.
Mark them notrace to keep the tracer out of its own fault path.

Signed-off-by: Torsten Duwe <duwe@suse.de>
---
 arch/powerpc/kernel/process.c        |  2 +-
 arch/powerpc/mm/fault.c              |  2 +-
 arch/powerpc/mm/hash_utils_64.c      | 18 +++++++++---------
 arch/powerpc/mm/hugetlbpage-hash64.c |  2 +-
 arch/powerpc/mm/hugetlbpage.c        |  4 ++--
 arch/powerpc/mm/mem.c                |  2 +-
 arch/powerpc/mm/pgtable_64.c         |  2 +-
 arch/powerpc/mm/slb.c                |  6 +++---
 arch/powerpc/mm/slice.c              |  8 ++++----
 9 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 75b6676..c2900b9 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -715,7 +715,7 @@ static inline void __switch_to_tm(struct task_struct *prev)
  * don't know which of the checkpointed state and the transactional
  * state to use.
  */
-void restore_tm_state(struct pt_regs *regs)
+notrace void restore_tm_state(struct pt_regs *regs)
 {
 	unsigned long msr_diff;
 
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index a67c6d7..125be37 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -205,7 +205,7 @@ static int mm_fault_error(struct pt_regs *regs, unsigned long addr, int fault)
  * The return value is 0 if the fault was handled, or the signal
  * number if this is a kernel fault that can't be handled here.
  */
-int __kprobes do_page_fault(struct pt_regs *regs, unsigned long address,
+notrace int __kprobes do_page_fault(struct pt_regs *regs, unsigned long address,
 			    unsigned long error_code)
 {
 	enum ctx_state prev_state = exception_enter();
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index aee7017..90e89e7 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -849,7 +849,7 @@ void early_init_mmu_secondary(void)
 /*
  * Called by asm hashtable.S for doing lazy icache flush
  */
-unsigned int hash_page_do_lazy_icache(unsigned int pp, pte_t pte, int trap)
+notrace unsigned int hash_page_do_lazy_icache(unsigned int pp, pte_t pte, int trap)
 {
 	struct page *page;
 
@@ -870,7 +870,7 @@ unsigned int hash_page_do_lazy_icache(unsigned int pp, pte_t pte, int trap)
 }
 
 #ifdef CONFIG_PPC_MM_SLICES
-static unsigned int get_paca_psize(unsigned long addr)
+static notrace unsigned int get_paca_psize(unsigned long addr)
 {
 	u64 lpsizes;
 	unsigned char *hpsizes;
@@ -899,7 +899,7 @@ unsigned int get_paca_psize(unsigned long addr)
  * For now this makes the whole process use 4k pages.
  */
 #ifdef CONFIG_PPC_64K_PAGES
-void demote_segment_4k(struct mm_struct *mm, unsigned long addr)
+notrace void demote_segment_4k(struct mm_struct *mm, unsigned long addr)
 {
 	if (get_slice_psize(mm, addr) == MMU_PAGE_4K)
 		return;
@@ -920,7 +920,7 @@ void demote_segment_4k(struct mm_struct *mm, unsigned long addr)
  * Result is 0: full permissions, _PAGE_RW: read-only,
  * _PAGE_USER or _PAGE_USER|_PAGE_RW: no access.
  */
-static int subpage_protection(struct mm_struct *mm, unsigned long ea)
+static notrace int subpage_protection(struct mm_struct *mm, unsigned long ea)
 {
 	struct subpage_prot_table *spt = &mm->context.spt;
 	u32 spp = 0;
@@ -968,7 +968,7 @@ void hash_failure_debug(unsigned long ea, unsigned long access,
 		trap, vsid, ssize, psize, lpsize, pte);
 }
 
-static void check_paca_psize(unsigned long ea, struct mm_struct *mm,
+static notrace void check_paca_psize(unsigned long ea, struct mm_struct *mm,
 			     int psize, bool user_region)
 {
 	if (user_region) {
@@ -990,7 +990,7 @@ static void check_paca_psize(unsigned long ea, struct mm_struct *mm,
  * -1 - critical hash insertion error
  * -2 - access not permitted by subpage protection mechanism
  */
-int hash_page_mm(struct mm_struct *mm, unsigned long ea,
+notrace int hash_page_mm(struct mm_struct *mm, unsigned long ea,
 		 unsigned long access, unsigned long trap,
 		 unsigned long flags)
 {
@@ -1186,7 +1186,7 @@ bail:
 }
 EXPORT_SYMBOL_GPL(hash_page_mm);
 
-int hash_page(unsigned long ea, unsigned long access, unsigned long trap,
+notrace int hash_page(unsigned long ea, unsigned long access, unsigned long trap,
 	      unsigned long dsisr)
 {
 	unsigned long flags = 0;
@@ -1288,7 +1288,7 @@ out_exit:
 /* WARNING: This is called from hash_low_64.S, if you change this prototype,
  *          do not forget to update the assembly call site !
  */
-void flush_hash_page(unsigned long vpn, real_pte_t pte, int psize, int ssize,
+notrace void flush_hash_page(unsigned long vpn, real_pte_t pte, int psize, int ssize,
 		     unsigned long flags)
 {
 	unsigned long hash, index, shift, hidx, slot;
@@ -1436,7 +1436,7 @@ void low_hash_fault(struct pt_regs *regs, unsigned long address, int rc)
 	exception_exit(prev_state);
 }
 
-long hpte_insert_repeating(unsigned long hash, unsigned long vpn,
+notrace long hpte_insert_repeating(unsigned long hash, unsigned long vpn,
 			   unsigned long pa, unsigned long rflags,
 			   unsigned long vflags, int psize, int ssize)
 {
diff --git a/arch/powerpc/mm/hugetlbpage-hash64.c b/arch/powerpc/mm/hugetlbpage-hash64.c
index d94b1af..50b8c6f 100644
--- a/arch/powerpc/mm/hugetlbpage-hash64.c
+++ b/arch/powerpc/mm/hugetlbpage-hash64.c
@@ -18,7 +18,7 @@ extern long hpte_insert_repeating(unsigned long hash, unsigned long vpn,
 				  unsigned long pa, unsigned long rlags,
 				  unsigned long vflags, int psize, int ssize);
 
-int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
+notrace int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
 		     pte_t *ptep, unsigned long trap, unsigned long flags,
 		     int ssize, unsigned int shift, unsigned int mmu_psize)
 {
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 06c1452..bc2f459 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -922,7 +922,7 @@ static int __init hugetlbpage_init(void)
 #endif
 arch_initcall(hugetlbpage_init);
 
-void flush_dcache_icache_hugepage(struct page *page)
+notrace void flush_dcache_icache_hugepage(struct page *page)
 {
 	int i;
 	void *start;
@@ -955,7 +955,7 @@ void flush_dcache_icache_hugepage(struct page *page)
  * when we have MSR[EE] = 0 but the paca->soft_enabled = 1
  */
 
-pte_t *__find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
+notrace pte_t *__find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
 				   unsigned *shift)
 {
 	pgd_t pgd, *pgdp;
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 22d94c3..f690e8a 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -406,7 +406,7 @@ void flush_dcache_page(struct page *page)
 }
 EXPORT_SYMBOL(flush_dcache_page);
 
-void flush_dcache_icache_page(struct page *page)
+notrace void flush_dcache_icache_page(struct page *page)
 {
 #ifdef CONFIG_HUGETLB_PAGE
 	if (PageCompound(page)) {
diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
index e92cb21..c74050b 100644
--- a/arch/powerpc/mm/pgtable_64.c
+++ b/arch/powerpc/mm/pgtable_64.c
@@ -442,7 +442,7 @@ static void page_table_free_rcu(void *table)
 	}
 }
 
-void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift)
+notrace void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift)
 {
 	unsigned long pgf = (unsigned long)table;
 
diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index 8a32a2b..5b05754 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -91,7 +91,7 @@ static inline void create_shadowed_slbe(unsigned long ea, int ssize,
 		     : "memory" );
 }
 
-static void __slb_flush_and_rebolt(void)
+static notrace void __slb_flush_and_rebolt(void)
 {
 	/* If you change this make sure you change SLB_NUM_BOLTED
 	 * and PR KVM appropriately too. */
@@ -131,7 +131,7 @@ static void __slb_flush_and_rebolt(void)
 		     : "memory");
 }
 
-void slb_flush_and_rebolt(void)
+notrace void slb_flush_and_rebolt(void)
 {
 
 	WARN_ON(!irqs_disabled());
@@ -146,7 +146,7 @@ void slb_flush_and_rebolt(void)
 	get_paca()->slb_cache_ptr = 0;
 }
 
-void slb_vmalloc_update(void)
+notrace void slb_vmalloc_update(void)
 {
 	unsigned long vflags;
 
diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index 0f432a7..f92f0f0 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -76,8 +76,8 @@ static void slice_print_mask(const char *label, struct slice_mask mask) {}
 
 #endif
 
-static struct slice_mask slice_range_to_mask(unsigned long start,
-					     unsigned long len)
+static notrace struct slice_mask slice_range_to_mask(unsigned long start,
+						     unsigned long len)
 {
 	unsigned long end = start + len - 1;
 	struct slice_mask ret = { 0, 0 };
@@ -564,7 +564,7 @@ unsigned long arch_get_unmapped_area_topdown(struct file *filp,
 				       current->mm->context.user_psize, 1);
 }
 
-unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr)
+notrace unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr)
 {
 	unsigned char *hpsizes;
 	int index, mask_index;
@@ -645,7 +645,7 @@ void slice_set_user_psize(struct mm_struct *mm, unsigned int psize)
 	spin_unlock_irqrestore(&slice_convert_lock, flags);
 }
 
-void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
+notrace void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
 			   unsigned long len, unsigned int psize)
 {
 	struct slice_mask mask = slice_range_to_mask(start, len);
-- 
1.8.5.6



* [PATCH v3 6/8] ppc64 ftrace: disable profiling for some files
  2015-10-26 17:49 [PATCH v3 0/8] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
                   ` (3 preceding siblings ...)
  2015-10-26 17:59 ` [PATCH v3 5/8] ppc64 ftrace: disable profiling for some functions Torsten Duwe
@ 2015-10-26 18:01 ` Torsten Duwe
  2015-10-26 18:02 ` [PATCH v3 7/8] Implement kernel live patching for ppc64le (ABIv2) Torsten Duwe
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Torsten Duwe @ 2015-10-26 18:01 UTC (permalink / raw)
  To: Steven Rostedt, Michael Ellerman
  Cc: Jiri Kosina, linuxppc-dev, linux-kernel, live-patching

This adds -mprofile-kernel to the compiler flags that are stripped from
the command line for code-patching.o and feature-fixups.o, in addition
to "-pg".

Signed-off-by: Torsten Duwe <duwe@suse.de>
---
 arch/powerpc/lib/Makefile | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
index a47e142..98e22b2 100644
--- a/arch/powerpc/lib/Makefile
+++ b/arch/powerpc/lib/Makefile
@@ -6,8 +6,8 @@ subdir-ccflags-$(CONFIG_PPC_WERROR) := -Werror
 
 ccflags-$(CONFIG_PPC64)	:= $(NO_MINIMAL_TOC)
 
-CFLAGS_REMOVE_code-patching.o = -pg
-CFLAGS_REMOVE_feature-fixups.o = -pg
+CFLAGS_REMOVE_code-patching.o = -pg -mprofile-kernel
+CFLAGS_REMOVE_feature-fixups.o = -pg -mprofile-kernel
 
 obj-y += string.o alloc.o crtsavres.o ppc_ksyms.o code-patching.o \
 	 feature-fixups.o
-- 
1.8.5.6



* [PATCH v3 7/8] Implement kernel live patching for ppc64le (ABIv2)
  2015-10-26 17:49 [PATCH v3 0/8] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
                   ` (4 preceding siblings ...)
  2015-10-26 18:01 ` [PATCH v3 6/8] ppc64 ftrace: disable profiling for some files Torsten Duwe
@ 2015-10-26 18:02 ` Torsten Duwe
  2015-10-26 18:02 ` [PATCH v3 8/8] Enable LIVEPATCH to be configured on ppc64le and add livepatch.o if it is selected Torsten Duwe
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Torsten Duwe @ 2015-10-26 18:02 UTC (permalink / raw)
  To: Steven Rostedt, Michael Ellerman
  Cc: Jiri Kosina, linuxppc-dev, linux-kernel, live-patching

  * create the appropriate files+functions
    arch/powerpc/include/asm/livepatch.h
        klp_check_compiler_support,
        klp_arch_set_pc
    arch/powerpc/kernel/livepatch.c with a stub for
        klp_write_module_reloc
    The reloc handling is still work in progress on the
    architecture-independent side.
  * introduce a fixup in arch/powerpc/kernel/entry_64.S
    for local calls that become global due to live patching.
    And of course do the main KLP thing: return to an address that
    may have been altered by the live patching ftrace op
    (a minimal consumer is sketched below).
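
For context, a minimal livepatch module on top of this would look
roughly like the generic livepatch sample of that time (a sketch only,
not part of this series; the patched function and its replacement are
illustrative):

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/livepatch.h>
#include <linux/seq_file.h>

/* Illustrative replacement for a vmlinux function; klp's ftrace
 * handler uses klp_arch_set_pc() -- regs->nip on ppc64 -- to divert
 * execution here.
 */
static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
{
	seq_printf(m, "%s\n", "this has been live patched");
	return 0;
}

static struct klp_func funcs[] = {
	{
		.old_name = "cmdline_proc_show",
		.new_func = livepatch_cmdline_proc_show,
	}, { }
};

static struct klp_object objs[] = {
	{
		/* name == NULL means vmlinux */
		.funcs = funcs,
	}, { }
};

static struct klp_patch patch = {
	.mod = THIS_MODULE,
	.objs = objs,
};

static int __init livepatch_init(void)
{
	int ret;

	ret = klp_register_patch(&patch);
	if (ret)
		return ret;
	ret = klp_enable_patch(&patch);
	if (ret) {
		WARN_ON(klp_unregister_patch(&patch));
		return ret;
	}
	return 0;
}

static void __exit livepatch_exit(void)
{
	WARN_ON(klp_disable_patch(&patch));
	WARN_ON(klp_unregister_patch(&patch));
}

module_init(livepatch_init);
module_exit(livepatch_exit);
MODULE_LICENSE("GPL");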

Signed-off-by: Torsten Duwe <duwe@suse.de>
---
 arch/powerpc/include/asm/livepatch.h | 27 ++++++++++++++++++++
 arch/powerpc/kernel/entry_64.S       | 48 +++++++++++++++++++++++++++++++++---
 arch/powerpc/kernel/livepatch.c      | 20 +++++++++++++++
 3 files changed, 91 insertions(+), 4 deletions(-)
 create mode 100644 arch/powerpc/include/asm/livepatch.h
 create mode 100644 arch/powerpc/kernel/livepatch.c

diff --git a/arch/powerpc/include/asm/livepatch.h b/arch/powerpc/include/asm/livepatch.h
new file mode 100644
index 0000000..334eb55
--- /dev/null
+++ b/arch/powerpc/include/asm/livepatch.h
@@ -0,0 +1,27 @@
+#ifndef _ASM_POWERPC64_LIVEPATCH_H
+#define _ASM_POWERPC64_LIVEPATCH_H
+
+#include <linux/module.h>
+#include <linux/ftrace.h>
+
+#ifdef CONFIG_LIVEPATCH
+static inline int klp_check_compiler_support(void)
+{
+#if !defined(_CALL_ELF) || _CALL_ELF != 2
+	return 1;
+#endif
+	return 0;
+}
+
+extern int klp_write_module_reloc(struct module *mod, unsigned long type,
+				   unsigned long loc, unsigned long value);
+
+static inline void klp_arch_set_pc(struct pt_regs *regs, unsigned long ip)
+{
+	regs->nip = ip;
+}
+#else
+#error Live patching support is disabled; check CONFIG_LIVEPATCH
+#endif
+
+#endif /* _ASM_POWERPC64_LIVEPATCH_H */
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index b0dfbfe..2681601 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -1264,6 +1264,9 @@ _GLOBAL(ftrace_caller)
 	mflr    r3
 	std     r3, _NIP(r1)
 	std	r3, 16(r1)
+#ifdef CONFIG_LIVEPATCH
+	mr	r14,r3		// remember "old" NIP
+#endif
 	subi    r3, r3, MCOUNT_INSN_SIZE
 	mfmsr   r4
 	std     r4, _MSR(r1)
@@ -1280,7 +1283,10 @@ ftrace_call:
 	nop
 
 	ld	r3, _NIP(r1)
-	mtlr	r3
+	mtctr	r3		// prepare to jump there
+#ifdef CONFIG_LIVEPATCH
+	cmpd	r14,r3		// has NIP been altered?
+#endif
 
 	REST_8GPRS(0,r1)
 	REST_8GPRS(8,r1)
@@ -1293,6 +1299,24 @@ ftrace_call:
 	mtlr	r12
 	mr	r2,r0		// restore callee's TOC
 
+#ifdef CONFIG_LIVEPATCH
+	beq+	4f		// likely(old_NIP == new_NIP)
+
+	// For a local call, restore this TOC after calling the patch function.
+	// For a global call, it does not matter what we restore here,
+	// since the global caller does its own restore right afterwards,
+	// anyway.
+	// Just insert a KLP_return_helper frame in any case,
+	// so a patch function can always count on the changed stack offsets.
+	stdu	r1,-32(r1)	// open new mini stack frame
+	std	r0,24(r1)	// save TOC now, unconditionally.
+	LOAD_REG_IMMEDIATE(r12,KLP_return_helper)
+	std	r12,LRSAVE(r1)
+	mtlr	r12
+	bctr
+4:
+#endif
+
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	stdu	r1, -112(r1)
 .globl ftrace_graph_call
@@ -1302,15 +1326,31 @@ _GLOBAL(ftrace_graph_stub)
 	addi	r1, r1, 112
 #endif
 
-	mflr	r0		// move this LR to CTR
-	mtctr	r0
-
 	ld	r0,LRSAVE(r1)	// restore callee's lr at _mcount site
 	mtlr	r0
 	bctr			// jump after _mcount site
 #endif /* CC_USING_MPROFILE_KERNEL */
 _GLOBAL(ftrace_stub)
 	blr
+
+#ifdef CONFIG_LIVEPATCH
+/* Helper function for local calls that are becoming global
+   due to live patching.
+   We can't simply patch the NOP after the original call,
+   because, depending on the consistency model, some kernel
+   threads may still have called the original, local function
+   *without* saving their TOC in the respective stack frame slot,
+   so the decision is made per-thread during function return by
+   maybe inserting a KLP_return_helper frame or not.
+*/
+KLP_return_helper:
+	ld	r2,24(r1)	// restore TOC (saved by ftrace_caller)
+	addi r1, r1, 32		// destroy mini stack frame
+	ld	r0,LRSAVE(r1)	// get the real return address
+	mtlr	r0
+	blr
+#endif
+
 #else
 _GLOBAL_TOC(_mcount)
 	/* Taken from output of objdump from lib64/glibc */
diff --git a/arch/powerpc/kernel/livepatch.c b/arch/powerpc/kernel/livepatch.c
new file mode 100644
index 0000000..9dace38
--- /dev/null
+++ b/arch/powerpc/kernel/livepatch.c
@@ -0,0 +1,20 @@
+#include <linux/module.h>
+#include <asm/livepatch.h>
+
+/**
+ * klp_write_module_reloc() - write a relocation in a module
+ * @mod:       module in which the section to be modified is found
+ * @type:      ELF relocation type (see asm/elf.h)
+ * @loc:       address that the relocation should be written to
+ * @value:     relocation value (sym address + addend)
+ *
+ * This function writes a relocation to the specified location for
+ * a particular module.
+ */
+int klp_write_module_reloc(struct module *mod, unsigned long type,
+			    unsigned long loc, unsigned long value)
+{
+	/* This requires infrastructure changes; we need the loadinfos. */
+	pr_err("lpc_write_module_reloc not yet supported\n");
+	return -ENOSYS;
+}
-- 
1.8.5.6



* [PATCH v3 8/8] Enable LIVEPATCH to be configured on ppc64le and add livepatch.o if it is selected.
  2015-10-26 17:49 [PATCH v3 0/8] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
                   ` (5 preceding siblings ...)
  2015-10-26 18:02 ` [PATCH v3 7/8] Implement kernel live patching for ppc64le (ABIv2) Torsten Duwe
@ 2015-10-26 18:02 ` Torsten Duwe
  2015-10-26 18:03 ` [PATCH v3 3/8] ppc64 ftrace_with_regs configuration variables Torsten Duwe
  2015-10-26 18:04 ` [PATCH v3 1/8] ppc64le FTRACE_WITH_REGS implementation Torsten Duwe
  8 siblings, 0 replies; 11+ messages in thread
From: Torsten Duwe @ 2015-10-26 18:02 UTC (permalink / raw)
  To: Steven Rostedt, Michael Ellerman
  Cc: Jiri Kosina, linuxppc-dev, linux-kernel, live-patching

Signed-off-by: Torsten Duwe <duwe@suse.de>
---
 arch/powerpc/Kconfig         | 5 +++++
 arch/powerpc/kernel/Makefile | 1 +
 2 files changed, 6 insertions(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 0e6011c..341ebe9 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -163,6 +163,9 @@ config PPC
 	select ARCH_HAS_DMA_SET_COHERENT_MASK
 	select HAVE_ARCH_SECCOMP_FILTER
 
+config HAVE_LIVEPATCH
+       def_bool PPC64 && CPU_LITTLE_ENDIAN
+
 config GENERIC_CSUM
 	def_bool CPU_LITTLE_ENDIAN
 
@@ -1095,3 +1098,5 @@ config PPC_LIB_RHEAP
 	bool
 
 source "arch/powerpc/kvm/Kconfig"
+
+source "kernel/livepatch/Kconfig"
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 0f417d5..f9a2925 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -119,6 +119,7 @@ obj-$(CONFIG_DYNAMIC_FTRACE)	+= ftrace.o
 obj-$(CONFIG_FUNCTION_GRAPH_TRACER)	+= ftrace.o
 obj-$(CONFIG_FTRACE_SYSCALLS)	+= ftrace.o
 obj-$(CONFIG_TRACING)		+= trace_clock.o
+obj-$(CONFIG_LIVEPATCH)		+= livepatch.o
 
 ifneq ($(CONFIG_PPC_INDIRECT_PIO),y)
 obj-y				+= iomap.o
-- 
1.8.5.6



* [PATCH v3 3/8] ppc64 ftrace_with_regs configuration variables
  2015-10-26 17:49 [PATCH v3 0/8] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
                   ` (6 preceding siblings ...)
  2015-10-26 18:02 ` [PATCH v3 8/8] Enable LIVEPATCH to be configured on ppc64le and add livepatch.o if it is selected Torsten Duwe
@ 2015-10-26 18:03 ` Torsten Duwe
  2015-10-26 18:04 ` [PATCH v3 1/8] ppc64le FTRACE_WITH_REGS implementation Torsten Duwe
  8 siblings, 0 replies; 11+ messages in thread
From: Torsten Duwe @ 2015-10-26 18:03 UTC (permalink / raw)
  To: Steven Rostedt, Michael Ellerman
  Cc: Jiri Kosina, linuxppc-dev, linux-kernel, live-patching

  * Makefile:
    - globally use -mprofile-kernel if it is configured.
  * arch/powerpc/Kconfig / kernel/trace/Kconfig:
    - declare that ppc64 has HAVE_MPROFILE_KERNEL and
      HAVE_DYNAMIC_FTRACE_WITH_REGS, and use them.

Signed-off-by: Torsten Duwe <duwe@suse.de>
---
 arch/powerpc/Kconfig  | 2 ++
 arch/powerpc/Makefile | 7 +++++++
 kernel/trace/Kconfig  | 5 +++++
 3 files changed, 14 insertions(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 9a7057e..0e6011c 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -97,8 +97,10 @@ config PPC
 	select OF_RESERVED_MEM
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_DYNAMIC_FTRACE
+	select HAVE_DYNAMIC_FTRACE_WITH_REGS
 	select HAVE_FUNCTION_TRACER
 	select HAVE_FUNCTION_GRAPH_TRACER
+	select HAVE_MPROFILE_KERNEL
 	select SYSCTL_EXCEPTION_TRACE
 	select ARCH_WANT_OPTIONAL_GPIOLIB
 	select VIRT_TO_BUS if !PPC64
diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index b9b4af2..25d0034 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -133,6 +133,13 @@ else
 CFLAGS-$(CONFIG_GENERIC_CPU) += -mcpu=powerpc64
 endif
 
+ifeq ($(CONFIG_PPC64),y)
+ifdef CONFIG_HAVE_MPROFILE_KERNEL
+CC_FLAGS_FTRACE	:= -pg $(call cc-option,-mprofile-kernel)
+KBUILD_CPPFLAGS	+= -DCC_USING_MPROFILE_KERNEL
+endif
+endif
+
 CFLAGS-$(CONFIG_CELL_CPU) += $(call cc-option,-mcpu=cell)
 CFLAGS-$(CONFIG_POWER4_CPU) += $(call cc-option,-mcpu=power4)
 CFLAGS-$(CONFIG_POWER5_CPU) += $(call cc-option,-mcpu=power5)
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index 1153c43..dbcb635 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -52,6 +52,11 @@ config HAVE_FENTRY
 	help
 	  Arch supports the gcc options -pg with -mfentry
 
+config HAVE_MPROFILE_KERNEL
+	bool
+	help
+	  Arch supports the gcc options -pg with -mprofile-kernel
+
 config HAVE_C_RECORDMCOUNT
 	bool
 	help
-- 
1.8.5.6



* [PATCH v3 1/8] ppc64le FTRACE_WITH_REGS implementation
  2015-10-26 17:49 [PATCH v3 0/8] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
                   ` (7 preceding siblings ...)
  2015-10-26 18:03 ` [PATCH v3 3/8] ppc64 ftrace_with_regs configuration variables Torsten Duwe
@ 2015-10-26 18:04 ` Torsten Duwe
  8 siblings, 0 replies; 11+ messages in thread
From: Torsten Duwe @ 2015-10-26 18:04 UTC (permalink / raw)
  To: Steven Rostedt, Michael Ellerman
  Cc: Jiri Kosina, linuxppc-dev, linux-kernel, live-patching

Implement FTRACE_WITH_REGS for powerpc64, on ELF ABI v2.
Initial work started by Vojtech Pavlik, used with permission.

  * arch/powerpc/kernel/entry_64.S:
    - enhance _mcount with a stub to support call sites
      generated by -mprofile-kernel. This is backward-compatible.
    - Implement an effective ftrace_caller that works from
      within the kernel binary as well as from modules.
  * arch/powerpc/kernel/ftrace.c:
    - be prepared to deal with ppc64 ELF ABI v2, especially
      calls to _mcount that result from gcc -mprofile-kernel
    - a little more error verbosity
  * arch/powerpc/kernel/module_64.c:
    - do not save the TOC pointer on the trampoline when the
      destination is ftrace_caller. This trampoline jump happens from
      a function prologue before a new stack frame is set up, so the
      TOC save slot in the caller's frame still belongs to the
      original caller and must not be clobbered.
    - relax is_module_trampoline() to recognise the modified
      trampoline.
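
For context, this is roughly what a consumer of DYNAMIC_FTRACE_WITH_REGS
looks like; a sketch only, not part of this patch. The traced symbol and
the address lookup are illustrative -- real users such as livepatch
resolve the proper ftrace call site themselves:

#include <linux/ftrace.h>
#include <linux/kallsyms.h>
#include <linux/module.h>

static unsigned long traced_ip;

/* With FTRACE_OPS_FL_SAVE_REGS the handler gets the pt_regs snapshot
 * that ftrace_caller saved; it may rewrite regs->nip so the traced
 * function "returns" elsewhere, which is what livepatch does via
 * klp_arch_set_pc().
 */
static void my_handler(unsigned long ip, unsigned long parent_ip,
		       struct ftrace_ops *op, struct pt_regs *regs)
{
	/* e.g.: regs->nip = (unsigned long)my_replacement; */
}

static struct ftrace_ops my_ops = {
	.func	= my_handler,
	.flags	= FTRACE_OPS_FL_SAVE_REGS,
};

static int __init my_init(void)
{
	int ret;

	/* Illustrative target; finding the exact ftrace location of a
	 * -mprofile-kernel call site is glossed over here.
	 */
	traced_ip = kallsyms_lookup_name("cmdline_proc_show");
	if (!traced_ip)
		return -ENOENT;

	ret = ftrace_set_filter_ip(&my_ops, traced_ip, 0, 0);
	if (ret)
		return ret;

	return register_ftrace_function(&my_ops);
}

static void __exit my_exit(void)
{
	unregister_ftrace_function(&my_ops);
	ftrace_set_filter_ip(&my_ops, traced_ip, 1, 0);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");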

Signed-off-by: Torsten Duwe <duwe@suse.de>
---
 arch/powerpc/include/asm/ftrace.h |   5 ++
 arch/powerpc/kernel/entry_64.S    | 113 +++++++++++++++++++++++++++++++++++++-
 arch/powerpc/kernel/ftrace.c      |  72 +++++++++++++++++++++---
 arch/powerpc/kernel/module_64.c   |  39 ++++++++++++-
 4 files changed, 217 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/include/asm/ftrace.h b/arch/powerpc/include/asm/ftrace.h
index ef89b14..6eb9fbc 100644
--- a/arch/powerpc/include/asm/ftrace.h
+++ b/arch/powerpc/include/asm/ftrace.h
@@ -46,6 +46,8 @@
 extern void _mcount(void);
 
 #ifdef CONFIG_DYNAMIC_FTRACE
+# define FTRACE_ADDR ((unsigned long)ftrace_caller)
+# define FTRACE_REGS_ADDR FTRACE_ADDR
 static inline unsigned long ftrace_call_adjust(unsigned long addr)
 {
        /* reloction of mcount call site is the same as the address */
@@ -58,6 +60,9 @@ struct dyn_arch_ftrace {
 #endif /*  CONFIG_DYNAMIC_FTRACE */
 #endif /* __ASSEMBLY__ */
 
+#ifdef CONFIG_DYNAMIC_FTRACE
+#define ARCH_SUPPORTS_FTRACE_OPS 1
+#endif
 #endif
 
 #if defined(CONFIG_FTRACE_SYSCALLS) && defined(CONFIG_PPC64) && !defined(__ASSEMBLY__)
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index a94f155..b0dfbfe 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -1206,8 +1206,13 @@ _GLOBAL(enter_prom)
 #ifdef CONFIG_DYNAMIC_FTRACE
 _GLOBAL(mcount)
 _GLOBAL(_mcount)
-	blr
+	mflr	r0
+	mtctr	r0
+	ld	r0,LRSAVE(r1)
+	mtlr	r0
+	bctr
 
+#ifndef CC_USING_MPROFILE_KERNEL
 _GLOBAL_TOC(ftrace_caller)
 	/* Taken from output of objdump from lib64/glibc */
 	mflr	r3
@@ -1229,6 +1234,81 @@ _GLOBAL(ftrace_graph_stub)
 	ld	r0, 128(r1)
 	mtlr	r0
 	addi	r1, r1, 112
+#else
+_GLOBAL(ftrace_caller)
+#if defined(_CALL_ELF) && _CALL_ELF == 2
+	mflr	r0
+	bl	2f
+2:	mflr	r12
+	mtlr	r0
+	mr      r0,r2   // save callee's TOC
+	addis	r2,r12,(.TOC.-ftrace_caller-8)@ha
+	addi    r2,r2,(.TOC.-ftrace_caller-8)@l
+#else
+	mr	r0,r2
+#endif
+	ld	r12,LRSAVE(r1)	// get caller's address
+
+	stdu	r1,-SWITCH_FRAME_SIZE(r1)
+
+	std     r12, _LINK(r1)
+	SAVE_8GPRS(0,r1)
+	std	r0, 24(r1)	// save TOC
+	SAVE_8GPRS(8,r1)
+	SAVE_8GPRS(16,r1)
+	SAVE_8GPRS(24,r1)
+
+	LOAD_REG_IMMEDIATE(r3,function_trace_op)
+	ld	r5,0(r3)
+
+	mflr    r3
+	std     r3, _NIP(r1)
+	std	r3, 16(r1)
+	subi    r3, r3, MCOUNT_INSN_SIZE
+	mfmsr   r4
+	std     r4, _MSR(r1)
+	mfctr   r4
+	std     r4, _CTR(r1)
+	mfxer   r4
+	std     r4, _XER(r1)
+	mr	r4, r12
+	addi    r6, r1 ,STACK_FRAME_OVERHEAD
+
+.globl ftrace_call
+ftrace_call:
+	bl	ftrace_stub
+	nop
+
+	ld	r3, _NIP(r1)
+	mtlr	r3
+
+	REST_8GPRS(0,r1)
+	REST_8GPRS(8,r1)
+	REST_8GPRS(16,r1)
+	REST_8GPRS(24,r1)
+
+	addi r1, r1, SWITCH_FRAME_SIZE
+
+	ld	r12, LRSAVE(r1)  // get caller's address
+	mtlr	r12
+	mr	r2,r0		// restore callee's TOC
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+	stdu	r1, -112(r1)
+.globl ftrace_graph_call
+ftrace_graph_call:
+	b	ftrace_graph_stub
+_GLOBAL(ftrace_graph_stub)
+	addi	r1, r1, 112
+#endif
+
+	mflr	r0		// move this LR to CTR
+	mtctr	r0
+
+	ld	r0,LRSAVE(r1)	// restore callee's lr at _mcount site
+	mtlr	r0
+	bctr			// jump after _mcount site
+#endif /* CC_USING_MPROFILE_KERNEL */
 _GLOBAL(ftrace_stub)
 	blr
 #else
@@ -1262,6 +1342,19 @@ _GLOBAL(ftrace_stub)
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 _GLOBAL(ftrace_graph_caller)
+#ifdef CC_USING_MPROFILE_KERNEL
+	// with -mprofile-kernel, parameter regs are still alive at _mcount
+	std	r10, 104(r1)
+	std	r9, 96(r1)
+	std	r8, 88(r1)
+	std	r7, 80(r1)
+	std	r6, 72(r1)
+	std	r5, 64(r1)
+	std	r4, 56(r1)
+	std	r3, 48(r1)
+	mflr	r0
+	std	r0, 40(r1)
+#endif
 	/* load r4 with local address */
 	ld	r4, 128(r1)
 	subi	r4, r4, MCOUNT_INSN_SIZE
@@ -1280,10 +1373,28 @@ _GLOBAL(ftrace_graph_caller)
 	ld	r11, 112(r1)
 	std	r3, 16(r11)
 
+#ifdef CC_USING_MPROFILE_KERNEL
+	ld	r0, 40(r1)
+	mtctr	r0
+	ld	r10, 104(r1)
+	ld	r9, 96(r1)
+	ld	r8, 88(r1)
+	ld	r7, 80(r1)
+	ld	r6, 72(r1)
+	ld	r5, 64(r1)
+	ld	r4, 56(r1)
+	ld	r3, 48(r1)
+
+	addi	r1, r1, 112
+	ld	r0, LRSAVE(r1)
+	mtlr	r0
+	bctr
+#else
 	ld	r0, 128(r1)
 	mtlr	r0
 	addi	r1, r1, 112
 	blr
+#endif
 
 _GLOBAL(return_to_handler)
 	/* need to save return values */
diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c
index 44d4d8e..310137f 100644
--- a/arch/powerpc/kernel/ftrace.c
+++ b/arch/powerpc/kernel/ftrace.c
@@ -61,8 +61,11 @@ ftrace_modify_code(unsigned long ip, unsigned int old, unsigned int new)
 		return -EFAULT;
 
 	/* Make sure it is what we expect it to be */
-	if (replaced != old)
+	if (replaced != old) {
+		pr_err("%p: replaced (%#x) != old (%#x)",
+		(void *)ip, replaced, old);
 		return -EINVAL;
+	}
 
 	/* replace the text with the new text */
 	if (patch_instruction((unsigned int *)ip, new))
@@ -106,14 +109,16 @@ static int
 __ftrace_make_nop(struct module *mod,
 		  struct dyn_ftrace *rec, unsigned long addr)
 {
-	unsigned int op;
+	unsigned int op, op0, op1, pop;
 	unsigned long entry, ptr;
 	unsigned long ip = rec->ip;
 	void *tramp;
 
 	/* read where this goes */
-	if (probe_kernel_read(&op, (void *)ip, sizeof(int)))
+	if (probe_kernel_read(&op, (void *)ip, sizeof(int))) {
+		pr_err("Fetching opcode failed.\n");
 		return -EFAULT;
+	}
 
 	/* Make sure that that this is still a 24bit jump */
 	if (!is_bl_op(op)) {
@@ -158,10 +163,46 @@ __ftrace_make_nop(struct module *mod,
 	 *
 	 * Use a b +8 to jump over the load.
 	 */
-	op = 0x48000008;	/* b +8 */
 
-	if (patch_instruction((unsigned int *)ip, op))
+	pop = 0x48000008;	/* b +8 */
+
+	/*
+	 * Check what is in the next instruction. We can see ld r2,40(r1), but
+	 * on first pass after boot we will see mflr r0.
+	 */
+	if (probe_kernel_read(&op, (void *)(ip+4), MCOUNT_INSN_SIZE)) {
+		pr_err("Fetching op failed.\n");
+		return -EFAULT;
+	}
+
+	if (op != 0xe8410028) { /* ld r2,STACK_OFFSET(r1) */
+
+		if (probe_kernel_read(&op0, (void *)(ip-8), MCOUNT_INSN_SIZE)) {
+			pr_err("Fetching op0 failed.\n");
+			return -EFAULT;
+		}
+
+		if (probe_kernel_read(&op1, (void *)(ip-4), MCOUNT_INSN_SIZE)) {
+			pr_err("Fetching op1 failed.\n");
+			return -EFAULT;
+		}
+
+		/* mflr r0 ; std r0,LRSAVE(r1) */
+		if (op0 != 0x7c0802a6 && op1 != 0xf8010010) {
+			pr_err("Unexpected instructions around bl\n"
+				"when enabling dynamic ftrace!\t"
+				"(%08x,%08x,bl,%08x)\n", op0, op1, op);
+			return -EINVAL;
+		}
+
+		/* When using -mkernel_profile there is no load to jump over */
+		pop = PPC_INST_NOP;
+	}
+
+	if (patch_instruction((unsigned int *)ip, pop)) {
+		pr_err("Patching NOP failed.\n");
 		return -EPERM;
+	}
 
 	return 0;
 }
@@ -287,6 +328,13 @@ int ftrace_make_nop(struct module *mod,
 
 #ifdef CONFIG_MODULES
 #ifdef CONFIG_PPC64
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+			unsigned long addr)
+{
+	return ftrace_make_call(rec, addr);
+}
+#endif
 static int
 __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 {
@@ -306,11 +354,19 @@ __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 	 * The load offset is different depending on the ABI. For simplicity
 	 * just mask it out when doing the compare.
 	 */
+#ifndef CC_USING_MPROFILE_KERNEL
 	if ((op[0] != 0x48000008) || ((op[1] & 0xffff0000) != 0xe8410000)) {
-		pr_err("Unexpected call sequence: %x %x\n", op[0], op[1]);
+		pr_err("Unexpected call sequence at %p: %x %x\n",
+		ip, op[0], op[1]);
 		return -EINVAL;
 	}
-
+#else
+	/* look for patched "NOP" on ppc64 with -mprofile-kernel */
+	if (op[0] != 0x60000000) {
+		pr_err("Unexpected call at %p: %x\n", ip, op[0]);
+		return -EINVAL;
+	}
+#endif
 	/* If we never set up a trampoline to ftrace_caller, then bail */
 	if (!rec->arch.mod->arch.tramp) {
 		pr_err("No ftrace trampoline\n");
@@ -330,7 +386,7 @@ __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 
 	return 0;
 }
-#else
+#else  /* !CONFIG_PPC64: */
 static int
 __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 {
diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
index 6838451..e62c41f 100644
--- a/arch/powerpc/kernel/module_64.c
+++ b/arch/powerpc/kernel/module_64.c
@@ -138,12 +138,21 @@ static u32 ppc64_stub_insns[] = {
 	0x4e800420			/* bctr */
 };
 
+/* In case of _mcount calls or dynamic ftracing, Do not save the
+ * current callee's TOC (in R2) again into the original caller's stack
+ * frame during this trampoline hop. The stack frame already holds
+ * that of the original caller.  _mcount and ftrace_caller will take
+ * care of this TOC value themselves.
+ */
+#define SQUASH_TOC_SAVE_INSN(trampoline_addr) \
+	(((struct ppc64_stub_entry *)(trampoline_addr))->jump[2] = PPC_INST_NOP)
+
 #ifdef CONFIG_DYNAMIC_FTRACE
 
 static u32 ppc64_stub_mask[] = {
 	0xffff0000,
 	0xffff0000,
-	0xffffffff,
+	0x00000000,
 	0xffffffff,
 #if !defined(_CALL_ELF) || _CALL_ELF != 2
 	0xffffffff,
@@ -170,6 +179,9 @@ bool is_module_trampoline(u32 *p)
 		if ((insna & mask) != (insnb & mask))
 			return false;
 	}
+	if (insns[2] != ppc64_stub_insns[2] &&
+	    insns[2] != PPC_INST_NOP)
+		return false;
 
 	return true;
 }
@@ -475,6 +487,19 @@ static unsigned long stub_for_addr(Elf64_Shdr *sechdrs,
 static int restore_r2(u32 *instruction, struct module *me)
 {
 	if (*instruction != PPC_INST_NOP) {
+
+		/* -mprofile_kernel sequence starting with
+		 * mflr r0; std r0, LRSAVE(r1)
+		 */
+		if (instruction[-3] == 0x7c0802a6 &&
+		    instruction[-2] == 0xf8010010) {
+			/* Nothing to be done here, it's an _mcount
+			 * call location and r2 will have to be
+			 * restored in the _mcount function.
+			 */
+			return 2;
+		};
+
 		pr_err("%s: Expect noop after relocate, got %08x\n",
 		       me->name, *instruction);
 		return 0;
@@ -490,7 +515,7 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
 		       unsigned int relsec,
 		       struct module *me)
 {
-	unsigned int i;
+	unsigned int i, r2;
 	Elf64_Rela *rela = (void *)sechdrs[relsec].sh_addr;
 	Elf64_Sym *sym;
 	unsigned long *location;
@@ -603,8 +628,12 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
 				value = stub_for_addr(sechdrs, value, me);
 				if (!value)
 					return -ENOENT;
-				if (!restore_r2((u32 *)location + 1, me))
+				r2 = restore_r2((u32 *)location + 1, me);
+				if (!r2)
 					return -ENOEXEC;
+				/* Squash the TOC saver for profiler calls */
+				if (!strcmp("_mcount", strtab+sym->st_name))
+					SQUASH_TOC_SAVE_INSN(value);
 			} else
 				value += local_entry_offset(sym);
 
@@ -665,6 +694,10 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
 	me->arch.tramp = stub_for_addr(sechdrs,
 				       (unsigned long)ftrace_caller,
 				       me);
+	/* ftrace_caller will take care of the TOC;
+	 * do not clobber original caller's value.
+	 */
+	SQUASH_TOC_SAVE_INSN(me->arch.tramp);
 #endif
 
 	return 0;
-- 
1.8.5.6



* Re: Re: [PATCH v3 0/8] ftrace with regs + live patching for ppc64 LE (ABI v2)
  2015-10-26 17:57 ` [PATCH v3 0/8] ftrace with regs + live patching for ppc64 LE (ABI v2) Torsten Duwe
@ 2015-10-27  6:20   ` kbuild test robot
  0 siblings, 0 replies; 11+ messages in thread
From: kbuild test robot @ 2015-10-27  6:20 UTC (permalink / raw)
  To: Torsten Duwe
  Cc: kbuild-all, Steven Rostedt, Michael Ellerman, Jiri Kosina,
	linuxppc-dev, linux-kernel, live-patching

[-- Attachment #1: Type: text/plain, Size: 5285 bytes --]

Hi Torsten,

[auto build test ERROR on powerpc/next -- if it's inappropriate base, please suggest rules for selecting the more suitable base]

url:    https://github.com/0day-ci/linux/commits/Torsten-Duwe/Re-PATCH-v3-0-8-ftrace-with-regs-live-patching-for-ppc64-LE-ABI-v2/20151027-020058
config: powerpc-ppc6xx_defconfig (attached as .config)
reproduce:
        wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=powerpc 

All errors (new ones prefixed by >>):

   kernel/built-in.o: In function `ftrace_get_addr_curr':
>> kernel/trace/ftrace.c:2286: undefined reference to `ftrace_regs_caller'
>> kernel/trace/ftrace.c:2286: undefined reference to `ftrace_regs_caller'
   kernel/built-in.o: In function `ftrace_get_addr_new':
   kernel/trace/ftrace.c:2254: undefined reference to `ftrace_regs_caller'
   kernel/trace/ftrace.c:2254: undefined reference to `ftrace_regs_caller'
   kernel/built-in.o: In function `__ftrace_replace_code':
>> kernel/trace/ftrace.c:2316: undefined reference to `ftrace_modify_call'
   kernel/built-in.o: In function `ftrace_get_addr_curr':
>> kernel/trace/ftrace.c:2286: undefined reference to `ftrace_regs_caller'
>> kernel/trace/ftrace.c:2286: undefined reference to `ftrace_regs_caller'
   kernel/built-in.o: In function `ftrace_get_addr_new':
   kernel/trace/ftrace.c:2254: undefined reference to `ftrace_regs_caller'
   kernel/trace/ftrace.c:2254: undefined reference to `ftrace_regs_caller'
   kernel/built-in.o: In function `ftrace_get_addr_curr':
>> kernel/trace/ftrace.c:2286: undefined reference to `ftrace_regs_caller'
   kernel/built-in.o:kernel/trace/ftrace.c:2286: more undefined references to `ftrace_regs_caller' follow

vim +2286 kernel/trace/ftrace.c

79922b80 Steven Rostedt (Red Hat  2014-05-06  2280) 			return (unsigned long)FTRACE_ADDR;
79922b80 Steven Rostedt (Red Hat  2014-05-06  2281) 		}
79922b80 Steven Rostedt (Red Hat  2014-05-06  2282) 		return ops->trampoline;
79922b80 Steven Rostedt (Red Hat  2014-05-06  2283) 	}
79922b80 Steven Rostedt (Red Hat  2014-05-06  2284) 
7413af1f Steven Rostedt (Red Hat  2014-05-06  2285) 	if (rec->flags & FTRACE_FL_REGS_EN)
7413af1f Steven Rostedt (Red Hat  2014-05-06 @2286) 		return (unsigned long)FTRACE_REGS_ADDR;
7413af1f Steven Rostedt (Red Hat  2014-05-06  2287) 	else
7413af1f Steven Rostedt (Red Hat  2014-05-06  2288) 		return (unsigned long)FTRACE_ADDR;
7413af1f Steven Rostedt (Red Hat  2014-05-06  2289) }
7413af1f Steven Rostedt (Red Hat  2014-05-06  2290) 
c88fd863 Steven Rostedt           2011-08-16  2291  static int
c88fd863 Steven Rostedt           2011-08-16  2292  __ftrace_replace_code(struct dyn_ftrace *rec, int enable)
c88fd863 Steven Rostedt           2011-08-16  2293  {
08f6fba5 Steven Rostedt           2012-04-30  2294  	unsigned long ftrace_old_addr;
c88fd863 Steven Rostedt           2011-08-16  2295  	unsigned long ftrace_addr;
c88fd863 Steven Rostedt           2011-08-16  2296  	int ret;
c88fd863 Steven Rostedt           2011-08-16  2297  
7c0868e0 Steven Rostedt (Red Hat  2014-05-08  2298) 	ftrace_addr = ftrace_get_addr_new(rec);
c88fd863 Steven Rostedt           2011-08-16  2299  
7c0868e0 Steven Rostedt (Red Hat  2014-05-08  2300) 	/* This needs to be done before we call ftrace_update_record */
7c0868e0 Steven Rostedt (Red Hat  2014-05-08  2301) 	ftrace_old_addr = ftrace_get_addr_curr(rec);
7c0868e0 Steven Rostedt (Red Hat  2014-05-08  2302) 
7c0868e0 Steven Rostedt (Red Hat  2014-05-08  2303) 	ret = ftrace_update_record(rec, enable);
08f6fba5 Steven Rostedt           2012-04-30  2304  
c88fd863 Steven Rostedt           2011-08-16  2305  	switch (ret) {
c88fd863 Steven Rostedt           2011-08-16  2306  	case FTRACE_UPDATE_IGNORE:
c88fd863 Steven Rostedt           2011-08-16  2307  		return 0;
c88fd863 Steven Rostedt           2011-08-16  2308  
c88fd863 Steven Rostedt           2011-08-16  2309  	case FTRACE_UPDATE_MAKE_CALL:
c88fd863 Steven Rostedt           2011-08-16  2310  		return ftrace_make_call(rec, ftrace_addr);
c88fd863 Steven Rostedt           2011-08-16  2311  
c88fd863 Steven Rostedt           2011-08-16  2312  	case FTRACE_UPDATE_MAKE_NOP:
39b5552c Steven Rostedt (Red Hat  2014-08-17  2313) 		return ftrace_make_nop(NULL, rec, ftrace_old_addr);
08f6fba5 Steven Rostedt           2012-04-30  2314  
08f6fba5 Steven Rostedt           2012-04-30  2315  	case FTRACE_UPDATE_MODIFY_CALL:
08f6fba5 Steven Rostedt           2012-04-30 @2316  		return ftrace_modify_call(rec, ftrace_old_addr, ftrace_addr);
5072c59f Steven Rostedt           2008-05-12  2317  	}
5072c59f Steven Rostedt           2008-05-12  2318  
c88fd863 Steven Rostedt           2011-08-16  2319  	return -1; /* unknow ftrace bug */

:::::: The code at line 2286 was first introduced by commit
:::::: 7413af1fb70e7efa6dbc7f27663e7a5126b3aa33 ftrace: Make get_ftrace_addr() and get_ftrace_addr_old() global

:::::: TO: Steven Rostedt (Red Hat) <rostedt@goodmis.org>
:::::: CC: Steven Rostedt <rostedt@goodmis.org>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/octet-stream, Size: 27240 bytes --]

