* [PATCH v4 0/3] ARM: Initial support for Cortex-M3
@ 2012-04-04 21:08 Uwe Kleine-König
  2012-04-04 21:10 ` [PATCH v4 1/3] ARM: make cr_alignment read-only #ifndef CONFIG_CPU_CP15 Uwe Kleine-König
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Uwe Kleine-König @ 2012-04-04 21:08 UTC (permalink / raw)
  To: linux-arm-kernel

Hello,

I rebased the series onto 3.4-rc1. Apart from the asm/system.h
disintegration this was trivial, and git did it just right for me.

I dropped patch 3 (ARM: force branch instructions to use long distance
encoding) as I don't need it anymore, for reasons I didn't try to track
down. With the rework of patch 1 (ARM: protect usage of cr_alignment by
#ifdef CONFIG_CPU_CP15 -> ARM: make cr_alignment read-only #ifndef
CONFIG_CPU_CP15) patch 2 isn't needed anymore either.

The patches are available via git at

	git://git.pengutronix.de/git/ukl/linux.git cortexm3
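
For example, assuming an existing kernel clone, the branch can be
fetched with something like (the remote name "ukl" is chosen
arbitrarily):

	git remote add ukl git://git.pengutronix.de/git/ukl/linux.git
	git fetch ukl
	git checkout -b cortexm3 ukl/cortexm3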

I also updated the efm32 branch in the same repository, which additionally
contains support for Energy Micro's EFM32 processors. The patches in this
series were tested on such a machine.

As before, interrupt support and the Makefile/Kconfig glue are missing. The
former needs some more love, as I think it should be possible to
support the Cortex-M3's NVIC using common/gic.c. I plan to address this
once this series has made it into Linus' tree.

See the mails with the respective patches for some more details about
what changed compared to v3.

Best regards
Uwe

-- 
Pengutronix e.K.                           | Uwe Kleine-König            |
Industrial Linux Solutions                 | http://www.pengutronix.de/  |


* [PATCH v4 1/3] ARM: make cr_alignment read-only #ifndef CONFIG_CPU_CP15
  2012-04-04 21:08 [PATCH v4 0/3] ARM: Initial support for Cortex-M3 Uwe Kleine-König
@ 2012-04-04 21:10 ` Uwe Kleine-König
  2012-04-04 21:10 ` [PATCH v4 2/3] Cortex-M3: Add base support for Cortex-M3 Uwe Kleine-König
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 7+ messages in thread
From: Uwe Kleine-König @ 2012-04-04 21:10 UTC (permalink / raw)
  To: linux-arm-kernel

When CONFIG_CPU_CP15 isn't defined, this makes cr_alignment a constant 0
in order to break code that tries to modify the value, as such code is
likely built on wrong assumptions. For code that only reads the value, 0
is a more or less fine value to report.
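
For illustration (not part of the patch itself), any remaining write to
cr_alignment then fails to build, because the macro expands to a
constant rather than an lvalue:

	cr_alignment &= ~CR_A;	/* expands to 0UL &= ~CR_A -> compile error */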

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
---
Hello,

in the v3 series this patch was titled "ARM: protect usage of
cr_alignment by #ifdef CONFIG_CPU_CP15". In response to Nicolas' review
cr_alignment was made a #define and only the code modifying it was
patched.

Moreover, I patched the cachepolicy and noalign kernel parameter handling
to just print a warning that these are not implemented without cp15.

Best regards
Uwe

 arch/arm/include/asm/cp15.h   |   11 ++++++++++-
 arch/arm/kernel/head-common.S |    9 +++++++--
 arch/arm/mm/alignment.c       |    2 ++
 arch/arm/mm/mmu.c             |   17 +++++++++++++++++
 4 files changed, 36 insertions(+), 3 deletions(-)

diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
index 5ef4d80..d814435 100644
--- a/arch/arm/include/asm/cp15.h
+++ b/arch/arm/include/asm/cp15.h
@@ -42,6 +42,8 @@
 #define vectors_high()	(0)
 #endif
 
+#ifdef CONFIG_CPU_CP15
+
 extern unsigned long cr_no_alignment;	/* defined in entry-armv.S */
 extern unsigned long cr_alignment;	/* defined in entry-armv.S */
 
@@ -82,6 +84,13 @@ static inline void set_copro_access(unsigned int val)
 	isb();
 }
 
-#endif
+#else /* ifdef CONFIG_CPU_CP15 */
+
+#define cr_no_alignment	UL(0)
+#define cr_alignment	UL(0)
+
+#endif /* ifdef CONFIG_CPU_CP15 / else */
+
+#endif /* ifndef __ASSEMBLY__ */
 
 #endif
diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
index 854bd22..2f560c5 100644
--- a/arch/arm/kernel/head-common.S
+++ b/arch/arm/kernel/head-common.S
@@ -98,8 +98,9 @@ __mmap_switched:
 	str	r9, [r4]			@ Save processor ID
 	str	r1, [r5]			@ Save machine type
 	str	r2, [r6]			@ Save atags pointer
-	bic	r4, r0, #CR_A			@ Clear 'A' bit
-	stmia	r7, {r0, r4}			@ Save control register values
+	cmp	r7, #0
+	bicne	r4, r0, #CR_A			@ Clear 'A' bit
+	stmneia	r7, {r0, r4}			@ Save control register values
 	b	start_kernel
 ENDPROC(__mmap_switched)
 
@@ -113,7 +114,11 @@ __mmap_switched_data:
 	.long	processor_id			@ r4
 	.long	__machine_arch_type		@ r5
 	.long	__atags_pointer			@ r6
+#ifdef CONFIG_CPU_CP15
 	.long	cr_alignment			@ r7
+#else
+	.long	0
+#endif
 	.long	init_thread_union + THREAD_START_SP @ sp
 	.size	__mmap_switched_data, . - __mmap_switched_data
 
diff --git a/arch/arm/mm/alignment.c b/arch/arm/mm/alignment.c
index 9107231..7df07d1 100644
--- a/arch/arm/mm/alignment.c
+++ b/arch/arm/mm/alignment.c
@@ -962,12 +962,14 @@ static int __init alignment_init(void)
 		return -ENOMEM;
 #endif
 
+#ifdef CONFIG_CPU_CP15
 	if (cpu_is_v6_unaligned()) {
 		cr_alignment &= ~CR_A;
 		cr_no_alignment &= ~CR_A;
 		set_cr(cr_alignment);
 		ai_usermode = safe_usermode(ai_usermode, false);
 	}
+#endif
 
 	hook_fault_code(FAULT_CODE_ALIGNMENT, do_alignment, SIGBUS, BUS_ADRALN,
 			"alignment exception");
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index b86f893..b78d47a 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -96,6 +96,7 @@ static struct cachepolicy cache_policies[] __initdata = {
 	}
 };
 
+#ifdef CONFIG_CPU_CP15
 /*
  * These are useful for identifying cache coherency
  * problems by allowing the cache or the cache and
@@ -194,6 +195,22 @@ void adjust_cr(unsigned long mask, unsigned long set)
 }
 #endif
 
+#else
+
+static int __init early_cachepolicy(char *p)
+{
+	pr_warning("cachepolicy kernel parameter not supported without cp15\n");
+}
+early_param("cachepolicy", early_cachepolicy);
+
+static int __init noalign_setup(char *__unused)
+{
+	pr_warning("noalign kernel parameter not supported without cp15\n");
+}
+__setup("noalign", noalign_setup);
+
+#endif
+
 #define PROT_PTE_DEVICE		L_PTE_PRESENT|L_PTE_YOUNG|L_PTE_DIRTY|L_PTE_XN
 #define PROT_SECT_DEVICE	PMD_TYPE_SECT|PMD_SECT_AP_WRITE
 
-- 
1.7.9.5


* [PATCH v4 2/3] Cortex-M3: Add base support for Cortex-M3
  2012-04-04 21:08 [PATCH v4 0/3] ARM: Initial support for Cortex-M3 Uwe Kleine-König
  2012-04-04 21:10 ` [PATCH v4 1/3] ARM: make cr_alignment read-only #ifndef CONFIG_CPU_CP15 Uwe Kleine-König
@ 2012-04-04 21:10 ` Uwe Kleine-König
  2012-04-23  9:46   ` Russell King - ARM Linux
  2012-04-04 21:10 ` [PATCH v4 3/3] Cortex-M3: Add support for exception handling Uwe Kleine-König
  2012-04-16  8:27 ` [PATCH v4 0/3] ARM: Initial support for Cortex-M3 Uwe Kleine-König
  3 siblings, 1 reply; 7+ messages in thread
From: Uwe Kleine-König @ 2012-04-04 21:10 UTC (permalink / raw)
  To: linux-arm-kernel

From: Catalin Marinas <catalin.marinas@arm.com>

This patch adds the base support for the Cortex-M3 processor (ARMv7-M
architecture). It consists of the corresponding arch/arm/mm/ files and
various #ifdef's around the kernel. Exception handling is implemented by
a subsequent patch.

[ukleinek: squash in some changes originating from commit

b5717ba (Cortex-M3: Add support for the Microcontroller Prototyping System)

from the v2.6.33-arm1 patch stack, port to post 3.2, drop zImage
support, drop reorganisation of pt_regs, and a few cosmetic changes]

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
---
Hello,

this patch was only rebased onto v3.4-rc1 and so needed some minor
adaptations, e.g. for the asm/system.h disintegration.

Best regards
Uwe

 arch/arm/include/asm/assembler.h   |   13 ++-
 arch/arm/include/asm/cputype.h     |    3 +
 arch/arm/include/asm/glue-cache.h  |   24 ++++++
 arch/arm/include/asm/glue-df.h     |    8 ++
 arch/arm/include/asm/glue-proc.h   |    9 ++
 arch/arm/include/asm/irqflags.h    |   51 +++++++++++-
 arch/arm/include/asm/processor.h   |    7 ++
 arch/arm/include/asm/ptrace.h      |   20 ++++-
 arch/arm/include/asm/system_info.h |    1 +
 arch/arm/kernel/asm-offsets.c      |    3 +
 arch/arm/kernel/head-nommu.S       |    9 +-
 arch/arm/kernel/setup.c            |   13 ++-
 arch/arm/kernel/traps.c            |    2 +
 arch/arm/mm/nommu.c                |    2 +
 arch/arm/mm/proc-v7m.S             |  161 ++++++++++++++++++++++++++++++++++++
 15 files changed, 318 insertions(+), 8 deletions(-)
 create mode 100644 arch/arm/mm/proc-v7m.S

diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index 03fb936..705d108 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -135,7 +135,11 @@
  * assumes FIQs are enabled, and that the processor is in SVC mode.
  */
 	.macro	save_and_disable_irqs, oldcpsr
+#ifdef CONFIG_CPU_V7M
+	mrs	\oldcpsr, primask
+#else
 	mrs	\oldcpsr, cpsr
+#endif
 	disable_irq
 	.endm
 
@@ -149,7 +153,11 @@
  * guarantee that this will preserve the flags.
  */
 	.macro	restore_irqs_notrace, oldcpsr
+#ifdef CONFIG_CPU_V7M
+	msr	primask, \oldcpsr
+#else
 	msr	cpsr_c, \oldcpsr
+#endif
 	.endm
 
 	.macro restore_irqs, oldcpsr
@@ -228,7 +236,10 @@
 #endif
 	.endm
 
-#ifdef CONFIG_THUMB2_KERNEL
+#if defined(CONFIG_CPU_V7M)
+	.macro	setmode, mode, reg
+	.endm
+#elif defined(CONFIG_THUMB2_KERNEL)
 	.macro	setmode, mode, reg
 	mov	\reg, #\mode
 	msr	cpsr_c, \reg
diff --git a/arch/arm/include/asm/cputype.h b/arch/arm/include/asm/cputype.h
index cb47d28..5bd8cb6 100644
--- a/arch/arm/include/asm/cputype.h
+++ b/arch/arm/include/asm/cputype.h
@@ -46,6 +46,9 @@ extern unsigned int processor_id;
 		    : "cc");						\
 		__val;							\
 	})
+#elif defined(CONFIG_CPU_V7M)
+#define read_cpuid(reg) (*(unsigned int *)0xe000ed00)
+#define read_cpuid_ext(reg) 0
 #else
 #define read_cpuid(reg) (processor_id)
 #define read_cpuid_ext(reg) 0
diff --git a/arch/arm/include/asm/glue-cache.h b/arch/arm/include/asm/glue-cache.h
index 7e30874..6a5fd0b 100644
--- a/arch/arm/include/asm/glue-cache.h
+++ b/arch/arm/include/asm/glue-cache.h
@@ -125,10 +125,34 @@
 //# endif
 #endif
 
+#if defined(CONFIG_CPU_V7M)
+# ifdef _CACHE
+#  error "Multi-cache not supported on ARMv7-M"
+# else
+#  define _CACHE nop
+# endif
+#endif
+
 #if !defined(_CACHE) && !defined(MULTI_CACHE)
 #error Unknown cache maintenance model
 #endif
 
+#ifndef __ASSEMBLER__
+static inline void nop_flush_icache_all(void) { }
+static inline void nop_flush_kern_cache_all(void) { }
+static inline void nop_flush_user_cache_all(void) { }
+static inline void nop_flush_user_cache_range(unsigned long a, unsigned long b, unsigned int c) { }
+
+static inline void nop_coherent_kern_range(unsigned long a, unsigned long b) { }
+static inline void nop_coherent_user_range(unsigned long a, unsigned long b) { }
+static inline void nop_flush_kern_dcache_area(void *a, size_t s) { }
+
+static inline void nop_dma_flush_range(const void *a, const void *b) { }
+
+static inline void nop_dma_map_area(const void *s, size_t l, int f) { }
+static inline void nop_dma_unmap_area(const void *s, size_t l, int f) { }
+#endif
+
 #ifndef MULTI_CACHE
 #define __cpuc_flush_icache_all		__glue(_CACHE,_flush_icache_all)
 #define __cpuc_flush_kern_all		__glue(_CACHE,_flush_kern_cache_all)
diff --git a/arch/arm/include/asm/glue-df.h b/arch/arm/include/asm/glue-df.h
index 354d571..26be71c 100644
--- a/arch/arm/include/asm/glue-df.h
+++ b/arch/arm/include/asm/glue-df.h
@@ -103,6 +103,14 @@
 # endif
 #endif
 
+#ifdef CONFIG_CPU_ABRT_NOMMU
+# ifdef CPU_DABORT_HANDLER
+#  define MULTI_DABORT 1
+# else
+#  define CPU_DABORT_HANDLER nommu_early_abort
+# endif
+#endif
+
 #ifndef CPU_DABORT_HANDLER
 #error Unknown data abort handler type
 #endif
diff --git a/arch/arm/include/asm/glue-proc.h b/arch/arm/include/asm/glue-proc.h
index e2be7f1..8f9991b 100644
--- a/arch/arm/include/asm/glue-proc.h
+++ b/arch/arm/include/asm/glue-proc.h
@@ -248,6 +248,15 @@
 # endif
 #endif
 
+#ifdef CONFIG_CPU_V7M
+# ifdef CPU_NAME
+#  undef  MULTI_CPU
+#  define MULTI_CPU
+# else
+#  define CPU_NAME cpu_v7m
+# endif
+#endif
+
 #ifndef MULTI_CPU
 #define cpu_proc_init			__glue(CPU_NAME,_proc_init)
 #define cpu_proc_fin			__glue(CPU_NAME,_proc_fin)
diff --git a/arch/arm/include/asm/irqflags.h b/arch/arm/include/asm/irqflags.h
index 1e6cca5..ac4d548 100644
--- a/arch/arm/include/asm/irqflags.h
+++ b/arch/arm/include/asm/irqflags.h
@@ -10,6 +10,18 @@
  */
 #if __LINUX_ARM_ARCH__ >= 6
 
+#ifdef CONFIG_CPU_V7M
+static inline unsigned long arch_local_irq_save(void)
+{
+	unsigned long flags;
+
+	asm volatile(
+		"	mrs	%0, primask	@ arch_local_irq_save\n"
+		"	cpsid	i"
+		: "=r" (flags) : : "memory", "cc");
+	return flags;
+}
+#else
 static inline unsigned long arch_local_irq_save(void)
 {
 	unsigned long flags;
@@ -20,6 +32,7 @@ static inline unsigned long arch_local_irq_save(void)
 		: "=r" (flags) : : "memory", "cc");
 	return flags;
 }
+#endif
 
 static inline void arch_local_irq_enable(void)
 {
@@ -122,6 +135,38 @@ static inline void arch_local_irq_disable(void)
 
 #endif
 
+#ifdef CONFIG_CPU_V7M
+/*
+ * Save the current interrupt enable state.
+ */
+static inline unsigned long arch_local_save_flags(void)
+{
+	unsigned long flags;
+	asm volatile(
+		"	mrs	%0, primask	@ local_save_flags"
+		: "=r" (flags) : : "memory", "cc");
+	return flags;
+}
+
+/*
+ * restore saved IRQ & FIQ state
+ */
+static inline void arch_local_irq_restore(unsigned long flags)
+{
+	asm volatile(
+		"	msr	primask, %0	@ local_irq_restore"
+		:
+		: "r" (flags)
+		: "memory", "cc");
+}
+
+static inline int arch_irqs_disabled_flags(unsigned long flags)
+{
+	return flags & 1;
+}
+
+#else /* ifdef CONFIG_CPU_V7M */
+
 /*
  * Save the current interrupt enable state.
  */
@@ -151,5 +196,7 @@ static inline int arch_irqs_disabled_flags(unsigned long flags)
 	return flags & PSR_I_BIT;
 }
 
-#endif
-#endif
+#endif /* ifdef CONFIG_CPU_V7M / else */
+
+#endif /* ifdef __KERNEL__ */
+#endif /* ifndef __ASM_ARM_IRQFLAGS_H */
diff --git a/arch/arm/include/asm/processor.h b/arch/arm/include/asm/processor.h
index 5ac8d3d..850710e 100644
--- a/arch/arm/include/asm/processor.h
+++ b/arch/arm/include/asm/processor.h
@@ -49,7 +49,14 @@ struct thread_struct {
 #ifdef CONFIG_MMU
 #define nommu_start_thread(regs) do { } while (0)
 #else
+#ifndef CONFIG_CPU_V7M
 #define nommu_start_thread(regs) regs->ARM_r10 = current->mm->start_data
+#else
+#define nommu_start_thread(regs) do {					\
+	regs->ARM_r10 = current->mm->start_data;			\
+	regs->ARM_EXC_RET = 0xfffffffdL;				\
+} while (0)
+#endif
 #endif
 
 #define start_thread(regs,pc,sp)					\
diff --git a/arch/arm/include/asm/ptrace.h b/arch/arm/include/asm/ptrace.h
index 451808b..696bbf4 100644
--- a/arch/arm/include/asm/ptrace.h
+++ b/arch/arm/include/asm/ptrace.h
@@ -39,16 +39,25 @@
 #define FIQ26_MODE	0x00000001
 #define IRQ26_MODE	0x00000002
 #define SVC26_MODE	0x00000003
+#ifndef CONFIG_CPU_V7M
 #define USR_MODE	0x00000010
+#define SVC_MODE	0x00000013
+#else
+#define USR_MODE	0x00000000
+#define SVC_MODE	0x00000000
+#endif
 #define FIQ_MODE	0x00000011
 #define IRQ_MODE	0x00000012
-#define SVC_MODE	0x00000013
 #define ABT_MODE	0x00000017
 #define UND_MODE	0x0000001b
 #define SYSTEM_MODE	0x0000001f
 #define MODE32_BIT	0x00000010
 #define MODE_MASK	0x0000001f
+#ifndef CONFIG_CPU_V7M
 #define PSR_T_BIT	0x00000020
+#else
+#define PSR_T_BIT	0x01000000
+#endif
 #define PSR_F_BIT	0x00000040
 #define PSR_I_BIT	0x00000080
 #define PSR_A_BIT	0x00000100
@@ -106,7 +115,11 @@ struct pt_regs {
 };
 #else /* __KERNEL__ */
 struct pt_regs {
+#ifdef CONFIG_CPU_V7M
+	unsigned long uregs[20];
+#else
 	unsigned long uregs[18];
+#endif
 };
 #endif /* __KERNEL__ */
 
@@ -128,6 +141,7 @@ struct pt_regs {
 #define ARM_r1		uregs[1]
 #define ARM_r0		uregs[0]
 #define ARM_ORIG_r0	uregs[17]
+#define ARM_EXC_RET	uregs[18]
 
 /*
  * The size of the user-visible VFP state as seen by PTRACE_GET/SETVFPREGS
@@ -165,6 +179,7 @@ struct pt_regs {
  */
 static inline int valid_user_regs(struct pt_regs *regs)
 {
+#ifndef CONFIG_CPU_V7M
 	unsigned long mode = regs->ARM_cpsr & MODE_MASK;
 
 	/*
@@ -187,6 +202,9 @@ static inline int valid_user_regs(struct pt_regs *regs)
 		regs->ARM_cpsr |= USR_MODE;
 
 	return 0;
+#else /* ifndef CONFIG_CPU_V7M */
+	return 1;
+#endif
 }
 
 static inline long regs_return_value(struct pt_regs *regs)
diff --git a/arch/arm/include/asm/system_info.h b/arch/arm/include/asm/system_info.h
index dfd386d..720ea03 100644
--- a/arch/arm/include/asm/system_info.h
+++ b/arch/arm/include/asm/system_info.h
@@ -11,6 +11,7 @@
 #define CPU_ARCH_ARMv5TEJ	7
 #define CPU_ARCH_ARMv6		8
 #define CPU_ARCH_ARMv7		9
+#define CPU_ARCH_ARMv7M		10
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
index 1429d89..6f6b5b6 100644
--- a/arch/arm/kernel/asm-offsets.c
+++ b/arch/arm/kernel/asm-offsets.c
@@ -91,6 +91,9 @@ int main(void)
   DEFINE(S_PC,			offsetof(struct pt_regs, ARM_pc));
   DEFINE(S_PSR,			offsetof(struct pt_regs, ARM_cpsr));
   DEFINE(S_OLD_R0,		offsetof(struct pt_regs, ARM_ORIG_r0));
+#ifdef CONFIG_CPU_V7M
+  DEFINE(S_EXC_RET,		offsetof(struct pt_regs, ARM_EXC_RET));
+#endif
   DEFINE(S_FRAME_SIZE,		sizeof(struct pt_regs));
   BLANK();
 #ifdef CONFIG_CACHE_L2X0
diff --git a/arch/arm/kernel/head-nommu.S b/arch/arm/kernel/head-nommu.S
index 278cfc1..c391c05 100644
--- a/arch/arm/kernel/head-nommu.S
+++ b/arch/arm/kernel/head-nommu.S
@@ -44,10 +44,13 @@ ENTRY(stext)
 
 	setmode	PSR_F_BIT | PSR_I_BIT | SVC_MODE, r9 @ ensure svc mode
 						@ and irqs disabled
-#ifndef CONFIG_CPU_CP15
-	ldr	r9, =CONFIG_PROCESSOR_ID
-#else
+#if defined(CONFIG_CPU_CP15)
 	mrc	p15, 0, r9, c0, c0		@ get processor id
+#elif defined(CONFIG_CPU_V7M)
+	ldr	r9, =0xe000ed00			@ CPUID register address
+	ldr	r9, [r9]
+#else
+	ldr	r9, =CONFIG_PROCESSOR_ID
 #endif
 	bl	__lookup_processor_type		@ r5=procinfo r9=cpuid
 	movs	r10, r5				@ invalid processor (r5=0)?
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index b914113..7eb44c9 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -134,7 +134,9 @@ struct stack {
 	u32 und[3];
 } ____cacheline_aligned;
 
+#ifndef CONFIG_CPU_V7M
 static struct stack stacks[NR_CPUS];
+#endif
 
 char elf_platform[ELF_PLATFORM_SIZE];
 EXPORT_SYMBOL(elf_platform);
@@ -214,7 +216,7 @@ static const char *proc_arch[] = {
 	"5TEJ",
 	"6TEJ",
 	"7",
-	"?(11)",
+	"7M",
 	"?(12)",
 	"?(13)",
 	"?(14)",
@@ -223,6 +225,12 @@ static const char *proc_arch[] = {
 	"?(17)",
 };
 
+#ifdef CONFIG_CPU_V7M
+static int __get_cpu_architecture(void)
+{
+	return CPU_ARCH_ARMv7M;
+}
+#else
 static int __get_cpu_architecture(void)
 {
 	int cpu_arch;
@@ -255,6 +263,7 @@ static int __get_cpu_architecture(void)
 
 	return cpu_arch;
 }
+#endif
 
 int __pure cpu_architecture(void)
 {
@@ -382,6 +391,7 @@ static void __init feat_v6_fixup(void)
  */
 void cpu_init(void)
 {
+#ifndef CONFIG_CPU_V7M
 	unsigned int cpu = smp_processor_id();
 	struct stack *stk = &stacks[cpu];
 
@@ -426,6 +436,7 @@ void cpu_init(void)
 	      "I" (offsetof(struct stack, und[0])),
 	      PLC (PSR_F_BIT | PSR_I_BIT | SVC_MODE)
 	    : "r14");
+#endif
 }
 
 int __cpu_logical_map[NR_CPUS];
diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c
index 7784547..d66da90 100644
--- a/arch/arm/kernel/traps.c
+++ b/arch/arm/kernel/traps.c
@@ -791,6 +791,7 @@ static void __init kuser_get_tls_init(unsigned long vectors)
 
 void __init early_trap_init(void *vectors_base)
 {
+#ifndef CONFIG_CPU_V7M
 	unsigned long vectors = (unsigned long)vectors_base;
 	extern char __stubs_start[], __stubs_end[];
 	extern char __vectors_start[], __vectors_end[];
@@ -824,4 +825,5 @@ void __init early_trap_init(void *vectors_base)
 
 	flush_icache_range(vectors, vectors + PAGE_SIZE);
 	modify_domain(DOMAIN_USER, DOMAIN_CLIENT);
+#endif
 }
diff --git a/arch/arm/mm/nommu.c b/arch/arm/mm/nommu.c
index 6486d2f..5960a32 100644
--- a/arch/arm/mm/nommu.c
+++ b/arch/arm/mm/nommu.c
@@ -19,12 +19,14 @@
 
 void __init arm_mm_memblock_reserve(void)
 {
+#ifndef CONFIG_CPU_V7M
 	/*
 	 * Register the exception vector page.
 	 * some architectures which the DRAM is the exception vector to trap,
 	 * alloc_page breaks with error, although it is not NULL, but "0."
 	 */
 	memblock_reserve(CONFIG_VECTORS_BASE, PAGE_SIZE);
+#endif
 }
 
 void __init sanity_check_meminfo(void)
diff --git a/arch/arm/mm/proc-v7m.S b/arch/arm/mm/proc-v7m.S
new file mode 100644
index 0000000..2b8eb97
--- /dev/null
+++ b/arch/arm/mm/proc-v7m.S
@@ -0,0 +1,161 @@
+/*
+ *  linux/arch/arm/mm/proc-v7m.S
+ *
+ *  Copyright (C) 2008 ARM Ltd.
+ *  Copyright (C) 2001 Deep Blue Solutions Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ *  This is the "shell" of the ARMv7-M processor support.
+ */
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+
+ENTRY(cpu_v7m_proc_init)
+	mov	pc, lr
+ENDPROC(cpu_v7m_proc_init)
+
+ENTRY(cpu_v7m_proc_fin)
+	mov	pc, lr
+ENDPROC(cpu_v7m_proc_fin)
+
+/*
+ *	cpu_v7m_reset(loc)
+ *
+ *	Perform a soft reset of the system.  Put the CPU into the
+ *	same state as it would be if it had been reset, and branch
+ *	to what would be the reset vector.
+ *
+ *	- loc   - location to jump to for soft reset
+ */
+	.align	5
+ENTRY(cpu_v7m_reset)
+	mov	pc, r0
+ENDPROC(cpu_v7m_reset)
+
+/*
+ *	cpu_v7m_do_idle()
+ *
+ *	Idle the processor (eg, wait for interrupt).
+ *
+ *	IRQs are already disabled.
+ */
+ENTRY(cpu_v7m_do_idle)
+	wfi
+	mov	pc, lr
+ENDPROC(cpu_v7m_do_idle)
+
+ENTRY(cpu_v7m_dcache_clean_area)
+	mov	pc, lr
+ENDPROC(cpu_v7m_dcache_clean_area)
+
+/*
+ * There is no MMU, so here is nothing to do.
+ */
+ENTRY(cpu_v7m_switch_mm)
+	mov	pc, lr
+ENDPROC(cpu_v7m_switch_mm)
+
+cpu_v7m_name:
+	.ascii	"ARMv7-M Processor"
+	.align
+
+	.section ".text.init", #alloc, #execinstr
+
+/*
+ *	__v7m_setup
+ *
+ *	This should be able to cover all ARMv7-M cores.
+ */
+__v7m_setup:
+	@ Configure the vector table base address
+	ldr	r0, =0xe000ed08		@ vector table base address
+	ldr	r12, =vector_table
+	str	r12, [r0]
+
+	@ Lower the priority of the SVC and PendSV exceptions
+	ldr	r0, =0xe000ed1c
+	mov	r5, #0x80000000
+	str	r5, [r0]		@ set SVC priority
+	ldr	r0, =0xe000ed20
+	mov	r5, #0x00800000
+	str	r5, [r0]		@ set PendSV priority
+
+	@ SVC to run the kernel in this mode
+	adr	r0, BSYM(1f)
+	ldr	r5, [r12, #11 * 4]	@ read the SVC vector entry
+	str	r0, [r12, #11 * 4]	@ write the temporary SVC vector entry
+	mov	r6, lr			@ save LR
+	mov	r7, sp			@ save SP
+	ldr	sp, =__v7m_setup_stack_top
+	cpsie	i
+	svc	#0
+1:	cpsid	i
+	str	r5, [r12, #11 * 4]	@ restore the original SVC vector entry
+	mov	lr, r6			@ restore LR
+	mov	sp, r7			@ restore SP
+
+	@ Special-purpose control register
+	mov	r0, #1
+	msr	control, r0		@ Thread mode has unprivileged access
+
+	@ Configure the System Control Register
+	ldr	r0, =0xe000ed14		@ system control register
+	ldr	r12, [r0]
+	orr	r12, #1 << 9		@ STKALIGN
+	str	r12, [r0]
+	mov	pc, lr
+ENDPROC(__v7m_setup)
+
+	.align	2
+	.type	v7m_processor_functions, #object
+ENTRY(v7m_processor_functions)
+	.word	nommu_early_abort
+	.word	cpu_v7m_proc_init
+	.word	cpu_v7m_proc_fin
+	.word	cpu_v7m_reset
+	.word	cpu_v7m_do_idle
+	.word	cpu_v7m_dcache_clean_area
+	.word	cpu_v7m_switch_mm
+	.word	0			@ cpu_v7m_set_pte_ext
+	.word	legacy_pabort
+	.size	v7m_processor_functions, . - v7m_processor_functions
+
+	.type	cpu_arch_name, #object
+cpu_arch_name:
+	.asciz	"armv7m"
+	.size	cpu_arch_name, . - cpu_arch_name
+
+	.type	cpu_elf_name, #object
+cpu_elf_name:
+	.asciz	"v7m"
+	.size	cpu_elf_name, . - cpu_elf_name
+	.align
+
+	.section ".proc.info.init", #alloc, #execinstr
+
+	/*
+	 * Match any ARMv7-M processor core.
+	 */
+	.type	__v7m_proc_info, #object
+__v7m_proc_info:
+	.long	0x000f0000		@ Required ID value
+	.long	0x000f0000		@ Mask for ID
+	.long   0			@ proc_info_list.__cpu_mm_mmu_flags
+	.long   0			@ proc_info_list.__cpu_io_mmu_flags
+	b	__v7m_setup		@ proc_info_list.__cpu_flush
+	.long	cpu_arch_name
+	.long	cpu_elf_name
+	.long	HWCAP_SWP|HWCAP_HALF|HWCAP_THUMB|HWCAP_FAST_MULT|HWCAP_EDSP
+	.long	cpu_v7m_name
+	.long	v7m_processor_functions	@ proc_info_list.proc
+	.long	0			@ proc_info_list.tlb
+	.long	0			@ proc_info_list.user
+	.long	0			@ proc_info_list.cache
+	.size	__v7m_proc_info, . - __v7m_proc_info
+
+__v7m_setup_stack:
+	.space	4 * 8				@ 8 registers
+__v7m_setup_stack_top:
-- 
1.7.9.5


* [PATCH v4 3/3] Cortex-M3: Add support for exception handling
  2012-04-04 21:08 [PATCH v4 0/3] ARM: Initial support for Cortex-M3 Uwe Kleine-König
  2012-04-04 21:10 ` [PATCH v4 1/3] ARM: make cr_alignment read-only #ifndef CONFIG_CPU_CP15 Uwe Kleine-König
  2012-04-04 21:10 ` [PATCH v4 2/3] Cortex-M3: Add base support for Cortex-M3 Uwe Kleine-König
@ 2012-04-04 21:10 ` Uwe Kleine-König
  2012-04-16  8:27 ` [PATCH v4 0/3] ARM: Initial support for Cortex-M3 Uwe Kleine-König
  3 siblings, 0 replies; 7+ messages in thread
From: Uwe Kleine-König @ 2012-04-04 21:10 UTC (permalink / raw)
  To: linux-arm-kernel

This patch implements the exception handling for the ARMv7-M
architecture (which is quite different from the A and R profiles).

It is based on work done earlier by Catalin for 2.6.33 but was nearly
completely rewritten to use a pt_regs layout compatible with the A
profile.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
---
Hello,

apart from rebasing on top of 3.4-rc1 this patch is unchanged compared
to v3.

Best regards
Uwe

 arch/arm/kernel/entry-common.S |    4 ++
 arch/arm/kernel/entry-header.S |  148 ++++++++++++++++++++++++++++++++++++++++
 arch/arm/kernel/entry-v7m.S    |  134 ++++++++++++++++++++++++++++++++++++
 arch/arm/kernel/process.c      |    8 +++
 arch/arm/kernel/ptrace.c       |    3 +
 arch/arm/kernel/signal.c       |   13 ++++
 arch/arm/kernel/sys_arm.c      |    1 +
 7 files changed, 311 insertions(+)
 create mode 100644 arch/arm/kernel/entry-v7m.S

diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
index 54ee265..3e7c216 100644
--- a/arch/arm/kernel/entry-common.S
+++ b/arch/arm/kernel/entry-common.S
@@ -351,6 +351,9 @@ ENDPROC(ftrace_stub)
 
 	.align	5
 ENTRY(vector_swi)
+#ifdef CONFIG_CPU_V7M
+	v7m_exception_entry
+#else
 	sub	sp, sp, #S_FRAME_SIZE
 	stmia	sp, {r0 - r12}			@ Calling r0 - r12
  ARM(	add	r8, sp, #S_PC		)
@@ -361,6 +364,7 @@ ENTRY(vector_swi)
 	str	lr, [sp, #S_PC]			@ Save calling PC
 	str	r8, [sp, #S_PSR]		@ Save CPSR
 	str	r0, [sp, #S_OLD_R0]		@ Save OLD_R0
+#endif
 	zero_fp
 
 	/*
diff --git a/arch/arm/kernel/entry-header.S b/arch/arm/kernel/entry-header.S
index 9a8531e..33d9900 100644
--- a/arch/arm/kernel/entry-header.S
+++ b/arch/arm/kernel/entry-header.S
@@ -44,6 +44,145 @@
 #endif
 	.endm
 
+#ifdef CONFIG_CPU_V7M
+/*
+ * ARMv7-M exception entry/exit macros.
+ *
+ * xPSR, ReturnAddress(), LR (R14), R12, R3, R2, R1, and R0 are
+ * automatically saved on the current stack (32 words) before
+ * switching to the exception stack (SP_main).
+ *
+ * If exception is taken while in user mode, SP_main is
+ * empty. Otherwise, SP_main is aligned to 64 bit automatically
+ * (CCR.STKALIGN set).
+ *
+ * Linux assumes that the interrupts are disabled when entering an
+ * exception handler and it may BUG if this is not the case. Interrupts
+ * are disabled during entry and reenabled in the exit macro.
+ *
+ * v7m_exception_fast_exit is used when returning from interrupts.
+ *
+ * v7m_exception_slow_exit is used when returning from SVC or PendSV.
+ * When returning to kernel mode, we don't return from exception.
+ */
+	.macro	v7m_exception_entry
+	@ determine the location of the registers saved by the core during
+	@ exception entry. Depending on the mode the cpu was in when the
+	@ exception happened, that is either on the main or the process stack.
+	@ Bit 2 of EXC_RETURN stored in the lr register specifies which stack
+	@ was used.
+	tst	lr, #0x4
+	mrsne	r12, psp
+	moveq	r12, sp
+
+	@ we cannot rely on r0-r3 and r12 matching the value saved in the
+	@ exception frame because of tail-chaining. So these have to be
+	@ reloaded.
+	ldmia	r12!, {r0-r3}
+
+	@ Linux expects to have irqs off. Do it here before taking stack space
+	cpsid	i
+
+	sub	sp, #S_FRAME_SIZE-S_IP
+	stmdb	sp!, {r0-r11}
+
+	@ load saved r12, lr, return address and xPSR.
+	@ r0-r7 are used for signals and never touched from now on. Clobbering
+	@ r8-r12 is OK.
+	mov	r9, r12
+	ldmia	r9!, {r8, r10-r12}
+
+	@ calculate the original stack pointer value.
+	@ r9 currently points to the memory location just above the auto saved
+	@ xPSR. If the FP extension is implemented and bit 4 of EXC_RETURN is 0
+	@ then space was allocated for FP state. That is space for 18 32-bit
+	@ values. (If FP extension is unimplemented, bit 4 is 1.)
+	@ Additionally the cpu might automatically 8-byte align the stack. Bit 9
+	@ of the saved xPSR specifies if stack aligning took place. In this case
+	@ another 32-bit value is included in the stack.
+
+	tst	lr, #0x10
+	addeq	r9, r9, #576
+
+	tst	r12, 0x100
+	addne	r9, r9, #4
+
+	@ store saved r12 using str to have a register to hold the base for stm
+	str	r8, [sp, #S_IP]
+	add	r8, sp, #S_SP
+	@ store r13-r15, xPSR
+	stmia	r8!, {r9-r12}
+	@ store r0 once more and EXC_RETURN
+	stmia	r8, {r0, lr}
+	.endm
+
+	.macro	v7m_exception_fast_exit
+	@ registers r0-r3 and r12 are automatically restored on exception
+	@ return. r4-r7 were not clobbered in v7m_exception_entry so for
+	@ correctness they don't need to be restored. So only r8-r11 must be
+	@ restored here. The easiest way to do so is to restore r0-r7, too.
+	ldmia	sp!, {r0-r11}
+	add	sp, #S_FRAME_SIZE-S_IP
+	cpsie	i
+	bx	lr
+	.endm
+
+	.macro	v7m_exception_slow_exit ret_r0
+	cpsid	i
+	ldr	lr, [sp, #S_EXC_RET]	@ read exception LR
+	tst     lr, #0x8
+	bne	1f			@ go to thread mode using exception return
+
+	/*
+	 * return to kernel thread
+	 * sp is already set up (and might be unset in pt_regs), so only
+	 * restore r0-r12 and pc
+	 */
+	ldmia	sp, {r0-r12}
+	ldr	lr, [sp, #S_PC]
+	add	sp, sp, #S_FRAME_SIZE
+	cpsie	i
+	bx	lr
+
+1:	/*
+	 * return to userspace
+	 */
+
+	@ read original r12, sp, lr, pc and xPSR
+	add	r12, sp, #S_IP
+	ldmia	r12, {r1-r5}
+
+	@ handle stack aligning
+	tst	r5, #0x100
+	subne	r2, r2, #4
+
+	@ skip over stack space for fp saving
+	tst	lr, #0x10
+	subeq	r2, r2, #576
+
+	@ write basic exception frame
+	stmdb	r2!, {r1, r3-r5}
+	ldmia	sp, {r1, r3-r5}
+	.if	\ret_r0
+	stmdb	r2!, {r0, r3-r5}
+	.else
+	stmdb	r2!, {r1, r3-r5}
+	.endif
+
+	@ restore process sp
+	msr	psp, r2
+
+	@ restore original r4-r11
+	ldmia	sp!, {r0-r11}
+
+	@ restore main sp
+	add	sp, sp, #S_FRAME_SIZE-S_IP
+
+	cpsie	i
+	bx	lr
+	.endm
+#endif	/* CONFIG_CPU_V7M */
+
 	@
 	@ Store/load the USER SP and LR registers by switching to the SYS
 	@ mode. Useful in Thumb-2 mode where "stm/ldm rd, {sp, lr}^" is not
@@ -131,6 +270,14 @@
 	rfeia	sp!
 	.endm
 
+#ifdef CONFIG_CPU_V7M
+	.macro	restore_user_regs, fast = 0, offset = 0
+	.if	\offset
+	add	sp, #\offset
+	.endif
+	v7m_exception_slow_exit ret_r0 = \fast
+	.endm
+#else	/* !CONFIG_CPU_V7M */
 	.macro	restore_user_regs, fast = 0, offset = 0
 	clrex					@ clear the exclusive monitor
 	mov	r2, sp
@@ -147,6 +294,7 @@
 	add	sp, sp, #S_FRAME_SIZE - S_SP
 	movs	pc, lr				@ return & move spsr_svc into cpsr
 	.endm
+#endif	/* CONFIG_CPU_V7M */
 
 	.macro	get_thread_info, rd
 	mov	\rd, sp
diff --git a/arch/arm/kernel/entry-v7m.S b/arch/arm/kernel/entry-v7m.S
new file mode 100644
index 0000000..a0991dc
--- /dev/null
+++ b/arch/arm/kernel/entry-v7m.S
@@ -0,0 +1,134 @@
+/*
+ * linux/arch/arm/kernel/entry-v7m.S
+ *
+ * Copyright (C) 2008 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Low-level vector interface routines for the ARMv7-M architecture
+ */
+#include <asm/memory.h>
+#include <asm/glue.h>
+#include <asm/thread_notify.h>
+
+#include <mach/entry-macro.S>
+
+#include "entry-header.S"
+
+#ifdef CONFIG_PREEMPT
+#error "CONFIG_PREEMPT not supported on the current ARMv7M implementation"
+#endif
+#ifdef CONFIG_TRACE_IRQFLAGS
+#error "CONFIG_TRACE_IRQFLAGS not supported on the current ARMv7M implementation"
+#endif
+
+__invalid_entry:
+	v7m_exception_entry
+	adr	r0, strerr
+	mrs	r1, ipsr
+	mov	r2, lr
+	bl	printk
+	mov	r0, sp
+	bl	show_regs
+1:	b	1b
+ENDPROC(__invalid_entry)
+
+strerr:	.asciz	"\nUnhandled exception: IPSR = %08lx LR = %08lx\n"
+
+	.align	2
+__irq_entry:
+	v7m_exception_entry
+
+	@
+	@ Invoke the IRQ handler
+	@
+	mrs	r0, ipsr
+	and	r0, #0xff
+	sub	r0, #16			@ IRQ number
+	mov	r1, sp
+	@ routine called with r0 = irq number, r1 = struct pt_regs *
+	bl	asm_do_IRQ
+
+	@
+	@ Check for any pending work if returning to user
+	@
+	ldr	lr, [sp, #S_EXC_RET]
+	tst	lr, #0x8		@ check the return stack
+	beq	2f			@ returning to handler mode
+	get_thread_info tsk
+	ldr	r1, [tsk, #TI_FLAGS]
+	tst	r1, #_TIF_WORK_MASK
+	beq	2f			@ no work pending
+	ldr	r1, =0xe000ed04		@ ICSR
+	mov	r0, #1 << 28		@ ICSR.PENDSVSET
+	str	r0, [r1]		@ raise PendSV
+
+2:
+	v7m_exception_fast_exit
+ENDPROC(__irq_entry)
+
+__pendsv_entry:
+	v7m_exception_entry
+
+	ldr	r1, =0xe000ed04		@ ICSR
+	mov	r0, #1 << 27		@ ICSR.PENDSVCLR
+	str	r0, [r1]		@ clear PendSV
+
+	@ execute the pending work, including reschedule
+	get_thread_info tsk
+	mov	why, #0
+	b	ret_to_user
+ENDPROC(__pendsv_entry)
+
+/*
+ * Register switch for ARMv7-M processors.
+ * r0 = previous task_struct, r1 = previous thread_info, r2 = next thread_info
+ * previous and next are guaranteed not to be the same.
+ */
+ENTRY(__switch_to)
+	.fnstart
+	.cantunwind
+	add	ip, r1, #TI_CPU_SAVE
+	stmia	ip!, {r4 - r11}		@ Store most regs on stack
+	str	sp, [ip], #4
+	str	lr, [ip], #4
+	mov	r5, r0
+	add	r4, r2, #TI_CPU_SAVE
+	ldr	r0, =thread_notify_head
+	mov	r1, #THREAD_NOTIFY_SWITCH
+	bl	atomic_notifier_call_chain
+	mov	ip, r4
+	mov	r0, r5
+	ldmia	ip!, {r4 - r11}		@ Load all regs saved previously
+	ldr	sp, [ip], #4
+	ldr	pc, [ip]
+	.fnend
+ENDPROC(__switch_to)
+
+	.data
+	.align	8
+/*
+ * Vector table (64 words => 256 bytes natural alignment)
+ */
+ENTRY(vector_table)
+	.long	0			@ 0 - Reset stack pointer
+	.long	__invalid_entry		@ 1 - Reset
+	.long	__invalid_entry		@ 2 - NMI
+	.long	__invalid_entry		@ 3 - HardFault
+	.long	__invalid_entry		@ 4 - MemManage
+	.long	__invalid_entry		@ 5 - BusFault
+	.long	__invalid_entry		@ 6 - UsageFault
+	.long	__invalid_entry		@ 7 - Reserved
+	.long	__invalid_entry		@ 8 - Reserved
+	.long	__invalid_entry		@ 9 - Reserved
+	.long	__invalid_entry		@ 10 - Reserved
+	.long	vector_swi		@ 11 - SVCall
+	.long	__invalid_entry		@ 12 - Debug Monitor
+	.long	__invalid_entry		@ 13 - Reserved
+	.long	__pendsv_entry		@ 14 - PendSV
+	.long	__invalid_entry		@ 15 - SysTick
+	.rept	64 - 16
+	.long	__irq_entry		@ 16..63 - External Interrupts
+	.endr
diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
index 2b7b017..9f301dc 100644
--- a/arch/arm/kernel/process.c
+++ b/arch/arm/kernel/process.c
@@ -454,7 +454,11 @@ asm(	".pushsection .text\n"
 #ifdef CONFIG_TRACE_IRQFLAGS
 "	bl	trace_hardirqs_on\n"
 #endif
+#ifdef CONFIG_CPU_V7M
+"	msr	primask, r7\n"
+#else
 "	msr	cpsr_c, r7\n"
+#endif
 "	mov	r0, r4\n"
 "	mov	lr, r6\n"
 "	mov	pc, r5\n"
@@ -493,6 +497,10 @@ pid_t kernel_thread(int (*fn)(void *), void *arg, unsigned long flags)
 	regs.ARM_r7 = SVC_MODE | PSR_ENDSTATE | PSR_ISETSTATE;
 	regs.ARM_pc = (unsigned long)kernel_thread_helper;
 	regs.ARM_cpsr = regs.ARM_r7 | PSR_I_BIT;
+#if defined CONFIG_CPU_V7M
+	/* Return to Handler mode */
+	regs.ARM_EXC_RET = 0xfffffff1L;
+#endif
 
 	return do_fork(flags|CLONE_VM|CLONE_UNTRACED, 0, &regs, 0, NULL, NULL);
 }
diff --git a/arch/arm/kernel/ptrace.c b/arch/arm/kernel/ptrace.c
index 45956c9..49383a4 100644
--- a/arch/arm/kernel/ptrace.c
+++ b/arch/arm/kernel/ptrace.c
@@ -82,6 +82,9 @@ static const struct pt_regs_offset regoffset_table[] = {
 	REG_OFFSET_NAME(pc),
 	REG_OFFSET_NAME(cpsr),
 	REG_OFFSET_NAME(ORIG_r0),
+#ifdef CONFIG_CPU_V7M
+	REG_OFFSET_NAME(EXC_RET),
+#endif
 	REG_OFFSET_END,
 };
 
diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c
index 7cb532f..be61a41 100644
--- a/arch/arm/kernel/signal.c
+++ b/arch/arm/kernel/signal.c
@@ -278,6 +278,8 @@ static int restore_sigframe(struct pt_regs *regs, struct sigframe __user *sf)
 	sigset_t set;
 	int err;
 
+	pr_info("%s(regs=%p)\n", __func__, regs);
+
 	err = __copy_from_user(&set, &sf->uc.uc_sigmask, sizeof(set));
 	if (err == 0) {
 		sigdelsetmask(&set, ~_BLOCKABLE);
@@ -325,6 +327,7 @@ asmlinkage int sys_sigreturn(struct pt_regs *regs)
 {
 	struct sigframe __user *frame;
 
+	pr_info("%s(regs=%p)\n", __func__, regs);
 	/* Always make any pending restarted system calls return -EINTR */
 	current_thread_info()->restart_block.fn = do_no_restart_syscall;
 
@@ -355,6 +358,7 @@ asmlinkage int sys_rt_sigreturn(struct pt_regs *regs)
 {
 	struct rt_sigframe __user *frame;
 
+	pr_info("%s(regs=%p)\n", __func__, regs);
 	/* Always make any pending restarted system calls return -EINTR */
 	current_thread_info()->restart_block.fn = do_no_restart_syscall;
 
@@ -439,6 +443,7 @@ get_sigframe(struct k_sigaction *ka, struct pt_regs *regs, int framesize)
 	unsigned long sp = regs->ARM_sp;
 	void __user *frame;
 
+	pr_info("%s(regs=%p)\n", __func__, regs);
 	/*
 	 * This is the X/Open sanctioned signal stack switching.
 	 */
@@ -468,6 +473,8 @@ setup_return(struct pt_regs *regs, struct k_sigaction *ka,
 	int thumb = 0;
 	unsigned long cpsr = regs->ARM_cpsr & ~(PSR_f | PSR_E_BIT);
 
+	pr_info("%s(regs=%p)\n", __func__, regs);
+
 	cpsr |= PSR_ENDSTATE;
 
 	/*
@@ -540,6 +547,8 @@ setup_frame(int usig, struct k_sigaction *ka, sigset_t *set, struct pt_regs *reg
 	struct sigframe __user *frame = get_sigframe(ka, regs, sizeof(*frame));
 	int err = 0;
 
+	pr_info("%s(regs=%p)\n", __func__, regs);
+
 	if (!frame)
 		return 1;
 
@@ -563,6 +572,8 @@ setup_rt_frame(int usig, struct k_sigaction *ka, siginfo_t *info,
 	stack_t stack;
 	int err = 0;
 
+	pr_info("%s(regs=%p)\n", __func__, regs);
+
 	if (!frame)
 		return 1;
 
@@ -607,6 +618,7 @@ handle_signal(unsigned long sig, struct k_sigaction *ka,
 	int usig = sig;
 	int ret;
 
+	pr_info("%s(regs=%p)\n", __func__, regs);
 	/*
 	 * translate the signal
 	 */
@@ -776,6 +788,7 @@ static void do_signal(struct pt_regs *regs, int syscall)
 asmlinkage void
 do_notify_resume(struct pt_regs *regs, unsigned int thread_flags, int syscall)
 {
+	pr_info("%s(regs=%p)\n", __func__, regs);
 	if (thread_flags & _TIF_SIGPENDING)
 		do_signal(regs, syscall);
 
diff --git a/arch/arm/kernel/sys_arm.c b/arch/arm/kernel/sys_arm.c
index d2b1779..4d3caca4d 100644
--- a/arch/arm/kernel/sys_arm.c
+++ b/arch/arm/kernel/sys_arm.c
@@ -102,6 +102,7 @@ int kernel_execve(const char *filename,
 	 * We were successful.  We won't be returning to our caller, but
 	 * instead to user space by manipulating the kernel stack.
 	 */
+	pr_info("we were successful\n");
 	asm(	"add	r0, %0, %1\n\t"
 		"mov	r1, %2\n\t"
 		"mov	r2, %3\n\t"
-- 
1.7.9.5


* [PATCH v4 0/3] ARM: Initial support for Cortex-M3
  2012-04-04 21:08 [PATCH v4 0/3] ARM: Initial support for Cortex-M3 Uwe Kleine-König
                   ` (2 preceding siblings ...)
  2012-04-04 21:10 ` [PATCH v4 3/3] Cortex-M3: Add support for exception handling Uwe Kleine-König
@ 2012-04-16  8:27 ` Uwe Kleine-König
  2012-04-23  9:27   ` Uwe Kleine-König
  3 siblings, 1 reply; 7+ messages in thread
From: Uwe Kleine-König @ 2012-04-16  8:27 UTC (permalink / raw)
  To: linux-arm-kernel

Hello,

in reply to the last post, Arnd suggested taking this series into -next
early after the 3.4 merge window.

Now that 3.4-rc3 is out, I wonder if you (Russell) already found
the time to look at the series?

Best regards
Uwe

-- 
Pengutronix e.K.                           | Uwe Kleine-König            |
Industrial Linux Solutions                 | http://www.pengutronix.de/  |


* [PATCH v4 0/3] ARM: Initial support for Cortex-M3
  2012-04-16  8:27 ` [PATCH v4 0/3] ARM: Initial support for Cortex-M3 Uwe Kleine-König
@ 2012-04-23  9:27   ` Uwe Kleine-König
  0 siblings, 0 replies; 7+ messages in thread
From: Uwe Kleine-König @ 2012-04-23  9:27 UTC (permalink / raw)
  To: linux-arm-kernel

Hello Russell,

On Mon, Apr 16, 2012 at 10:27:18AM +0200, Uwe Kleine-König wrote:
> in reply to the last post, Arnd suggested taking this series into -next
> early after the 3.4 merge window.
> 
> Now that 3.4-rc3 is out, I wonder if you (Russell) already found
> the time to look at the series?
would it be ok with you if I put the series into your patch system, just
so that it doesn't get forgotten?

Best regards
Uwe

-- 
Pengutronix e.K.                           | Uwe Kleine-König            |
Industrial Linux Solutions                 | http://www.pengutronix.de/  |


* [PATCH v4 2/3] Cortex-M3: Add base support for Cortex-M3
  2012-04-04 21:10 ` [PATCH v4 2/3] Cortex-M3: Add base support for Cortex-M3 Uwe Kleine-König
@ 2012-04-23  9:46   ` Russell King - ARM Linux
  0 siblings, 0 replies; 7+ messages in thread
From: Russell King - ARM Linux @ 2012-04-23  9:46 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Apr 04, 2012 at 11:10:06PM +0200, Uwe Kleine-König wrote:
> diff --git a/arch/arm/include/asm/irqflags.h b/arch/arm/include/asm/irqflags.h
> index 1e6cca5..ac4d548 100644
> --- a/arch/arm/include/asm/irqflags.h
> +++ b/arch/arm/include/asm/irqflags.h
> @@ -10,6 +10,18 @@
>   */
>  #if __LINUX_ARM_ARCH__ >= 6
>  
> +#ifdef CONFIG_CPU_V7M
> +static inline unsigned long arch_local_irq_save(void)
> +{
> +	unsigned long flags;
> +
> +	asm volatile(
> +		"	mrs	%0, primask	@ arch_local_irq_save\n"
> +		"	cpsid	i"
> +		: "=r" (flags) : : "memory", "cc");

Wouldn't it produce a smaller patch to define something for "primask"
vs "cpsr", rather than having to re-implement the entire function ?
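
A rough sketch of that idea (the macro name is made up here and not part
of the series) could keep a single implementation and only select the
register name per configuration:

	/* sketch only: IRQMASK_REG_NAME_R is a hypothetical macro */
	#ifdef CONFIG_CPU_V7M
	#define IRQMASK_REG_NAME_R "primask"
	#else
	#define IRQMASK_REG_NAME_R "cpsr"
	#endif

	static inline unsigned long arch_local_irq_save(void)
	{
		unsigned long flags;

		asm volatile(
			"	mrs	%0, " IRQMASK_REG_NAME_R "	@ arch_local_irq_save\n"
			"	cpsid	i"
			: "=r" (flags) : : "memory", "cc");
		return flags;
	}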

> diff --git a/arch/arm/include/asm/ptrace.h b/arch/arm/include/asm/ptrace.h
> index 451808b..696bbf4 100644
> --- a/arch/arm/include/asm/ptrace.h
> +++ b/arch/arm/include/asm/ptrace.h
> @@ -39,16 +39,25 @@
>  #define FIQ26_MODE	0x00000001
>  #define IRQ26_MODE	0x00000002
>  #define SVC26_MODE	0x00000003
> +#ifndef CONFIG_CPU_V7M
>  #define USR_MODE	0x00000010
> +#define SVC_MODE	0x00000013
> +#else
> +#define USR_MODE	0x00000000
> +#define SVC_MODE	0x00000000
> +#endif

Note that these definitions are exported to userspace.  Do we want to expose
this kind of change there?

>  #define FIQ_MODE	0x00000011
>  #define IRQ_MODE	0x00000012
> -#define SVC_MODE	0x00000013
>  #define ABT_MODE	0x00000017
>  #define UND_MODE	0x0000001b
>  #define SYSTEM_MODE	0x0000001f
>  #define MODE32_BIT	0x00000010
>  #define MODE_MASK	0x0000001f
> +#ifndef CONFIG_CPU_V7M
>  #define PSR_T_BIT	0x00000020
> +#else
> +#define PSR_T_BIT	0x01000000
> +#endif

Ditto.  Note also that these bits changing will be visible via the pt_regs
to userspace too.  Maybe overloading these constants like this and making
them depend on the kernel configuration is not a good idea for userspace.

Maybe giving the V7M definitions a new prefix would be a better idea?
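
For instance (names invented purely for illustration, not taken from
this series), the M-profile bits could get their own prefix while the
existing A-profile PSR_* values stay unchanged:

	/* sketch only, hypothetical names */
	#define V7M_xPSR_T_BIT		0x01000000
	#define V7M_xPSR_EXCEPTION_MASK	0x000001ff

That way the values exported to userspace would not depend on the kernel
configuration.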

