* [PATCH v2 00/28] ARM: KVM: Rewrite the world switch in C (mostly)
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: linux-arm-kernel, kvm, kvmarm
Now that the arm64 rewrite is in mainline, I've taken a stab at fixing
the 32bit code the same way. This is fairly straightforward (once
you've been through it once...), with a few patches that adapt the
code to be similar to the 64bit version.
Note that the timer and GIC code should be made common between the two
architectures, as this is literally the exact same code (I posted
a proof of concept for that a while ago; see
http://www.spinics.net/lists/kvm/msg126775.html).
This has been tested on a dual Cortex-A7, and the code is pushed on a branch:
git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git kvm-arm/wsinc
M.
* From v1:
- Rebased on -rc2
- Moved VTCR setup out of the init sequence (and into C code)
- Now depends on the first patch of the VHE series
Marc Zyngier (28):
ARM: KVM: Move the HYP code to its own section
ARM: KVM: Remove __kvm_hyp_code_start/__kvm_hyp_code_end
ARM: KVM: Move VFP registers to a CPU context structure
ARM: KVM: Move CP15 array into the CPU context structure
ARM: KVM: Move GP registers into the CPU context structure
ARM: KVM: Add a HYP-specific header file
ARM: KVM: Add system register accessor macros
ARM: KVM: Add TLB invalidation code
ARM: KVM: Add CP15 save/restore code
ARM: KVM: Add timer save/restore
ARM: KVM: Add vgic v2 save/restore
ARM: KVM: Add VFP save/restore
ARM: KVM: Add banked registers save/restore
ARM: KVM: Add guest entry code
ARM: KVM: Add VFP lazy save/restore handler
ARM: KVM: Add the new world switch implementation
ARM: KVM: Add populating of fault data structure
ARM: KVM: Add HYP mode entry code
ARM: KVM: Add panic handling code
ARM: KVM: Change kvm_call_hyp return type to unsigned long
ARM: KVM: Remove the old world switch
ARM: KVM: Switch to C-based stage2 init
ARM: KVM: Remove __weak attributes
ARM: KVM: Turn CP15 defines to an enum
ARM: KVM: Cleanup asm-offsets.c
ARM: KVM: Remove unused hyp_pc field
ARM: KVM: Remove handling of ARM_EXCEPTION_DATA/PREF_ABORT
ARM: KVM: Remove __kvm_hyp_exit/__kvm_hyp_exit_end
arch/arm/include/asm/kvm_asm.h | 41 +--
arch/arm/include/asm/kvm_emulate.h | 15 +-
arch/arm/include/asm/kvm_host.h | 61 +++-
arch/arm/include/asm/kvm_mmu.h | 2 +-
arch/arm/include/asm/virt.h | 4 +
arch/arm/kernel/asm-offsets.c | 40 +--
arch/arm/kernel/vmlinux.lds.S | 6 +
arch/arm/kvm/Makefile | 1 +
arch/arm/kvm/arm.c | 2 +-
arch/arm/kvm/coproc.c | 52 +--
arch/arm/kvm/coproc.h | 16 +-
arch/arm/kvm/emulate.c | 34 +-
arch/arm/kvm/guest.c | 5 +-
arch/arm/kvm/handle_exit.c | 7 -
arch/arm/kvm/hyp/Makefile | 14 +
arch/arm/kvm/hyp/banked-sr.c | 77 +++++
arch/arm/kvm/hyp/cp15-sr.c | 84 +++++
arch/arm/kvm/hyp/entry.S | 101 ++++++
arch/arm/kvm/hyp/hyp-entry.S | 169 ++++++++++
arch/arm/kvm/hyp/hyp.h | 130 ++++++++
arch/arm/kvm/hyp/s2-setup.c | 34 ++
arch/arm/kvm/hyp/switch.c | 228 +++++++++++++
arch/arm/kvm/hyp/timer-sr.c | 71 ++++
arch/arm/kvm/hyp/tlb.c | 71 ++++
arch/arm/kvm/hyp/vfp.S | 68 ++++
arch/arm/kvm/hyp/vgic-v2-sr.c | 84 +++++
arch/arm/kvm/init.S | 8 -
arch/arm/kvm/interrupts.S | 480 +--------------------------
arch/arm/kvm/interrupts_head.S | 648 -------------------------------------
arch/arm/kvm/reset.c | 2 +-
arch/arm64/include/asm/kvm_asm.h | 3 -
31 files changed, 1265 insertions(+), 1293 deletions(-)
create mode 100644 arch/arm/kvm/hyp/Makefile
create mode 100644 arch/arm/kvm/hyp/banked-sr.c
create mode 100644 arch/arm/kvm/hyp/cp15-sr.c
create mode 100644 arch/arm/kvm/hyp/entry.S
create mode 100644 arch/arm/kvm/hyp/hyp-entry.S
create mode 100644 arch/arm/kvm/hyp/hyp.h
create mode 100644 arch/arm/kvm/hyp/s2-setup.c
create mode 100644 arch/arm/kvm/hyp/switch.c
create mode 100644 arch/arm/kvm/hyp/timer-sr.c
create mode 100644 arch/arm/kvm/hyp/tlb.c
create mode 100644 arch/arm/kvm/hyp/vfp.S
create mode 100644 arch/arm/kvm/hyp/vgic-v2-sr.c
delete mode 100644 arch/arm/kvm/interrupts_head.S
--
2.1.4
* [PATCH v2 01/28] ARM: KVM: Move the HYP code to its own section
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: linux-arm-kernel, kvm, kvmarm
In order to be able to spread the HYP code into multiple compilation
units, adopt a layout similar to that of arm64:
- the HYP text is emitted in its own section (.hyp.text)
- two linker-generated symbols are used to identify the boundaries
of that section
No functional change.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/include/asm/kvm_asm.h | 6 ++++--
arch/arm/include/asm/virt.h | 4 ++++
arch/arm/kernel/vmlinux.lds.S | 6 ++++++
arch/arm/kvm/interrupts.S | 13 +++++--------
4 files changed, 19 insertions(+), 10 deletions(-)
diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 194c91b..fa2fd25 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -19,6 +19,8 @@
#ifndef __ARM_KVM_ASM_H__
#define __ARM_KVM_ASM_H__
+#include <asm/virt.h>
+
/* 0 is reserved as an invalid value. */
#define c0_MPIDR 1 /* MultiProcessor ID Register */
#define c0_CSSELR 2 /* Cache Size Selection Register */
@@ -91,8 +93,8 @@ extern char __kvm_hyp_exit_end[];
extern char __kvm_hyp_vector[];
-extern char __kvm_hyp_code_start[];
-extern char __kvm_hyp_code_end[];
+#define __kvm_hyp_code_start __hyp_text_start
+#define __kvm_hyp_code_end __hyp_text_end
extern void __kvm_flush_vm_context(void);
extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
diff --git a/arch/arm/include/asm/virt.h b/arch/arm/include/asm/virt.h
index 4371f45..5fdbfea 100644
--- a/arch/arm/include/asm/virt.h
+++ b/arch/arm/include/asm/virt.h
@@ -74,6 +74,10 @@ static inline bool is_hyp_mode_mismatched(void)
{
return !!(__boot_cpu_mode & BOOT_CPU_MODE_MISMATCH);
}
+
+/* The section containing the hypervisor text */
+extern char __hyp_text_start[];
+extern char __hyp_text_end[];
#endif
#endif /* __ASSEMBLY__ */
diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index 8b60fde..b4139cb 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -18,6 +18,11 @@
*(.proc.info.init) \
VMLINUX_SYMBOL(__proc_info_end) = .;
+#define HYPERVISOR_TEXT \
+ VMLINUX_SYMBOL(__hyp_text_start) = .; \
+ *(.hyp.text) \
+ VMLINUX_SYMBOL(__hyp_text_end) = .;
+
#define IDMAP_TEXT \
ALIGN_FUNCTION(); \
VMLINUX_SYMBOL(__idmap_text_start) = .; \
@@ -108,6 +113,7 @@ SECTIONS
TEXT_TEXT
SCHED_TEXT
LOCK_TEXT
+ HYPERVISOR_TEXT
KPROBES_TEXT
*(.gnu.warning)
*(.glue_7)
diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
index 900ef6d..9d9cb71 100644
--- a/arch/arm/kvm/interrupts.S
+++ b/arch/arm/kvm/interrupts.S
@@ -28,9 +28,7 @@
#include "interrupts_head.S"
.text
-
-__kvm_hyp_code_start:
- .globl __kvm_hyp_code_start
+ .pushsection .hyp.text, "ax"
/********************************************************************
* Flush per-VMID TLBs
@@ -314,8 +312,6 @@ THUMB( orr r2, r2, #PSR_T_BIT )
eret
.endm
- .text
-
.align 5
__kvm_hyp_vector:
.globl __kvm_hyp_vector
@@ -511,10 +507,9 @@ hyp_fiq:
.ltorg
-__kvm_hyp_code_end:
- .globl __kvm_hyp_code_end
+ .popsection
- .section ".rodata"
+ .pushsection ".rodata"
und_die_str:
.ascii "unexpected undefined exception in Hyp mode at: %#08x\n"
@@ -524,3 +519,5 @@ dabt_die_str:
.ascii "unexpected data abort in Hyp mode at: %#08x\n"
svc_die_str:
.ascii "unexpected HVC/SVC trap in Hyp mode at: %#08x\n"
+
+ .popsection
--
2.1.4
* [PATCH v2 02/28] ARM: KVM: Remove __kvm_hyp_code_start/__kvm_hyp_code_end
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
Now that we've unified the way we refer to the HYP text between
arm and arm64, drop __kvm_hyp_code_start/end, and just use the
__hyp_text_start/end symbols.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/include/asm/kvm_asm.h | 3 ---
arch/arm/kvm/arm.c | 2 +-
arch/arm64/include/asm/kvm_asm.h | 3 ---
3 files changed, 1 insertion(+), 7 deletions(-)
diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index fa2fd25..4841225 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -93,9 +93,6 @@ extern char __kvm_hyp_exit_end[];
extern char __kvm_hyp_vector[];
-#define __kvm_hyp_code_start __hyp_text_start
-#define __kvm_hyp_code_end __hyp_text_end
-
extern void __kvm_flush_vm_context(void);
extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 6b76e01..fcf6c13 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -1075,7 +1075,7 @@ static int init_hyp_mode(void)
/*
* Map the Hyp-code called directly from the host
*/
- err = create_hyp_mappings(__kvm_hyp_code_start, __kvm_hyp_code_end);
+ err = create_hyp_mappings(__hyp_text_start, __hyp_text_end);
if (err) {
kvm_err("Cannot map world-switch code\n");
goto out_free_mappings;
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 52b777b..2ad8930 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -35,9 +35,6 @@ extern char __kvm_hyp_init_end[];
extern char __kvm_hyp_vector[];
-#define __kvm_hyp_code_start __hyp_text_start
-#define __kvm_hyp_code_end __hyp_text_end
-
extern void __kvm_flush_vm_context(void);
extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
--
2.1.4
* [PATCH v2 03/28] ARM: KVM: Move VFP registers to a CPU context structure
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
In order to turn the WS code into something that looks a bit
more like the arm64 version, move the VFP registers into a
CPU context container for both the host and the guest.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/include/asm/kvm_host.h | 11 +++++++----
arch/arm/kernel/asm-offsets.c | 5 +++--
arch/arm/kvm/coproc.c | 20 ++++++++++----------
arch/arm/kvm/interrupts.S | 10 ++++++----
4 files changed, 26 insertions(+), 20 deletions(-)
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index f1e86f1..b64ac8e 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -88,9 +88,15 @@ struct kvm_vcpu_fault_info {
u32 hyp_pc; /* PC when exception was taken from Hyp mode */
};
-typedef struct vfp_hard_struct kvm_cpu_context_t;
+struct kvm_cpu_context {
+ struct vfp_hard_struct vfp;
+};
+
+typedef struct kvm_cpu_context kvm_cpu_context_t;
struct kvm_vcpu_arch {
+ struct kvm_cpu_context ctxt;
+
struct kvm_regs regs;
int target; /* Processor target */
@@ -111,9 +117,6 @@ struct kvm_vcpu_arch {
/* Exception Information */
struct kvm_vcpu_fault_info fault;
- /* Floating point registers (VFP and Advanced SIMD/NEON) */
- struct vfp_hard_struct vfp_guest;
-
/* Host FP context */
kvm_cpu_context_t *host_cpu_context;
diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
index 871b826..346bfca 100644
--- a/arch/arm/kernel/asm-offsets.c
+++ b/arch/arm/kernel/asm-offsets.c
@@ -173,8 +173,9 @@ int main(void)
DEFINE(VCPU_KVM, offsetof(struct kvm_vcpu, kvm));
DEFINE(VCPU_MIDR, offsetof(struct kvm_vcpu, arch.midr));
DEFINE(VCPU_CP15, offsetof(struct kvm_vcpu, arch.cp15));
- DEFINE(VCPU_VFP_GUEST, offsetof(struct kvm_vcpu, arch.vfp_guest));
- DEFINE(VCPU_VFP_HOST, offsetof(struct kvm_vcpu, arch.host_cpu_context));
+ DEFINE(VCPU_GUEST_CTXT, offsetof(struct kvm_vcpu, arch.ctxt));
+ DEFINE(VCPU_HOST_CTXT, offsetof(struct kvm_vcpu, arch.host_cpu_context));
+ DEFINE(CPU_CTXT_VFP, offsetof(struct kvm_cpu_context, vfp));
DEFINE(VCPU_REGS, offsetof(struct kvm_vcpu, arch.regs));
DEFINE(VCPU_USR_REGS, offsetof(struct kvm_vcpu, arch.regs.usr_regs));
DEFINE(VCPU_SVC_REGS, offsetof(struct kvm_vcpu, arch.regs.svc_regs));
diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index f3d88dc..1a643f3 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -901,7 +901,7 @@ static int vfp_get_reg(const struct kvm_vcpu *vcpu, u64 id, void __user *uaddr)
if (vfpid < num_fp_regs()) {
if (KVM_REG_SIZE(id) != 8)
return -ENOENT;
- return reg_to_user(uaddr, &vcpu->arch.vfp_guest.fpregs[vfpid],
+ return reg_to_user(uaddr, &vcpu->arch.ctxt.vfp.fpregs[vfpid],
id);
}
@@ -911,13 +911,13 @@ static int vfp_get_reg(const struct kvm_vcpu *vcpu, u64 id, void __user *uaddr)
switch (vfpid) {
case KVM_REG_ARM_VFP_FPEXC:
- return reg_to_user(uaddr, &vcpu->arch.vfp_guest.fpexc, id);
+ return reg_to_user(uaddr, &vcpu->arch.ctxt.vfp.fpexc, id);
case KVM_REG_ARM_VFP_FPSCR:
- return reg_to_user(uaddr, &vcpu->arch.vfp_guest.fpscr, id);
+ return reg_to_user(uaddr, &vcpu->arch.ctxt.vfp.fpscr, id);
case KVM_REG_ARM_VFP_FPINST:
- return reg_to_user(uaddr, &vcpu->arch.vfp_guest.fpinst, id);
+ return reg_to_user(uaddr, &vcpu->arch.ctxt.vfp.fpinst, id);
case KVM_REG_ARM_VFP_FPINST2:
- return reg_to_user(uaddr, &vcpu->arch.vfp_guest.fpinst2, id);
+ return reg_to_user(uaddr, &vcpu->arch.ctxt.vfp.fpinst2, id);
case KVM_REG_ARM_VFP_MVFR0:
val = fmrx(MVFR0);
return reg_to_user(uaddr, &val, id);
@@ -945,7 +945,7 @@ static int vfp_set_reg(struct kvm_vcpu *vcpu, u64 id, const void __user *uaddr)
if (vfpid < num_fp_regs()) {
if (KVM_REG_SIZE(id) != 8)
return -ENOENT;
- return reg_from_user(&vcpu->arch.vfp_guest.fpregs[vfpid],
+ return reg_from_user(&vcpu->arch.ctxt.vfp.fpregs[vfpid],
uaddr, id);
}
@@ -955,13 +955,13 @@ static int vfp_set_reg(struct kvm_vcpu *vcpu, u64 id, const void __user *uaddr)
switch (vfpid) {
case KVM_REG_ARM_VFP_FPEXC:
- return reg_from_user(&vcpu->arch.vfp_guest.fpexc, uaddr, id);
+ return reg_from_user(&vcpu->arch.ctxt.vfp.fpexc, uaddr, id);
case KVM_REG_ARM_VFP_FPSCR:
- return reg_from_user(&vcpu->arch.vfp_guest.fpscr, uaddr, id);
+ return reg_from_user(&vcpu->arch.ctxt.vfp.fpscr, uaddr, id);
case KVM_REG_ARM_VFP_FPINST:
- return reg_from_user(&vcpu->arch.vfp_guest.fpinst, uaddr, id);
+ return reg_from_user(&vcpu->arch.ctxt.vfp.fpinst, uaddr, id);
case KVM_REG_ARM_VFP_FPINST2:
- return reg_from_user(&vcpu->arch.vfp_guest.fpinst2, uaddr, id);
+ return reg_from_user(&vcpu->arch.ctxt.vfp.fpinst2, uaddr, id);
/* These are invariant. */
case KVM_REG_ARM_VFP_MVFR0:
if (reg_from_user(&val, uaddr, id))
diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
index 9d9cb71..7bfb289 100644
--- a/arch/arm/kvm/interrupts.S
+++ b/arch/arm/kvm/interrupts.S
@@ -172,10 +172,11 @@ __kvm_vcpu_return:
#ifdef CONFIG_VFPv3
@ Switch VFP/NEON hardware state to the host's
- add r7, vcpu, #VCPU_VFP_GUEST
+ add r7, vcpu, #(VCPU_GUEST_CTXT + CPU_CTXT_VFP)
store_vfp_state r7
- add r7, vcpu, #VCPU_VFP_HOST
+ add r7, vcpu, #VCPU_HOST_CTXT
ldr r7, [r7]
+ add r7, r7, #CPU_CTXT_VFP
restore_vfp_state r7
after_vfp_restore:
@@ -482,10 +483,11 @@ switch_to_guest_vfp:
set_hcptr vmtrap, (HCPTR_TCP(10) | HCPTR_TCP(11))
@ Switch VFP/NEON hardware state to the guest's
- add r7, r0, #VCPU_VFP_HOST
+ add r7, r0, #VCPU_HOST_CTXT
ldr r7, [r7]
+ add r7, r7, #CPU_CTXT_VFP
store_vfp_state r7
- add r7, r0, #VCPU_VFP_GUEST
+ add r7, r0, #(VCPU_GUEST_CTXT + CPU_CTXT_VFP)
restore_vfp_state r7
pop {r3-r7}
--
2.1.4
* [PATCH v2 04/28] ARM: KVM: Move CP15 array into the CPU context structure
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
Continuing our rework of the CPU context, we now move the CP15
array into the CPU context structure. As this causes quite a bit
of churn, we introduce the vcpu_cp15() macro, which abstracts the
location of the actual array. This will probably help next time
we have to revisit that code.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/include/asm/kvm_emulate.h | 2 +-
arch/arm/include/asm/kvm_host.h | 6 +++---
arch/arm/include/asm/kvm_mmu.h | 2 +-
arch/arm/kernel/asm-offsets.c | 2 +-
arch/arm/kvm/coproc.c | 32 ++++++++++++++++----------------
arch/arm/kvm/coproc.h | 16 ++++++++--------
arch/arm/kvm/emulate.c | 22 +++++++++++-----------
arch/arm/kvm/interrupts_head.S | 3 ++-
8 files changed, 43 insertions(+), 42 deletions(-)
diff --git a/arch/arm/include/asm/kvm_emulate.h b/arch/arm/include/asm/kvm_emulate.h
index 3095df0..32bb52a 100644
--- a/arch/arm/include/asm/kvm_emulate.h
+++ b/arch/arm/include/asm/kvm_emulate.h
@@ -192,7 +192,7 @@ static inline u32 kvm_vcpu_hvc_get_imm(struct kvm_vcpu *vcpu)
static inline unsigned long kvm_vcpu_get_mpidr_aff(struct kvm_vcpu *vcpu)
{
- return vcpu->arch.cp15[c0_MPIDR] & MPIDR_HWID_BITMASK;
+ return vcpu_cp15(vcpu, c0_MPIDR) & MPIDR_HWID_BITMASK;
}
static inline void kvm_vcpu_set_be(struct kvm_vcpu *vcpu)
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index b64ac8e..4203701 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -90,6 +90,7 @@ struct kvm_vcpu_fault_info {
struct kvm_cpu_context {
struct vfp_hard_struct vfp;
+ u32 cp15[NR_CP15_REGS];
};
typedef struct kvm_cpu_context kvm_cpu_context_t;
@@ -102,9 +103,6 @@ struct kvm_vcpu_arch {
int target; /* Processor target */
DECLARE_BITMAP(features, KVM_VCPU_MAX_FEATURES);
- /* System control coprocessor (cp15) */
- u32 cp15[NR_CP15_REGS];
-
/* The CPU type we expose to the VM */
u32 midr;
@@ -161,6 +159,8 @@ struct kvm_vcpu_stat {
u64 exits;
};
+#define vcpu_cp15(v,r) (v)->arch.ctxt.cp15[r]
+
int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init);
unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index a520b79..da44be9 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -179,7 +179,7 @@ struct kvm;
static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
{
- return (vcpu->arch.cp15[c1_SCTLR] & 0b101) == 0b101;
+ return (vcpu_cp15(vcpu, c1_SCTLR) & 0b101) == 0b101;
}
static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu,
diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
index 346bfca..43f8b01 100644
--- a/arch/arm/kernel/asm-offsets.c
+++ b/arch/arm/kernel/asm-offsets.c
@@ -172,10 +172,10 @@ int main(void)
#ifdef CONFIG_KVM_ARM_HOST
DEFINE(VCPU_KVM, offsetof(struct kvm_vcpu, kvm));
DEFINE(VCPU_MIDR, offsetof(struct kvm_vcpu, arch.midr));
- DEFINE(VCPU_CP15, offsetof(struct kvm_vcpu, arch.cp15));
DEFINE(VCPU_GUEST_CTXT, offsetof(struct kvm_vcpu, arch.ctxt));
DEFINE(VCPU_HOST_CTXT, offsetof(struct kvm_vcpu, arch.host_cpu_context));
DEFINE(CPU_CTXT_VFP, offsetof(struct kvm_cpu_context, vfp));
+ DEFINE(CPU_CTXT_CP15, offsetof(struct kvm_cpu_context, cp15));
DEFINE(VCPU_REGS, offsetof(struct kvm_vcpu, arch.regs));
DEFINE(VCPU_USR_REGS, offsetof(struct kvm_vcpu, arch.regs.usr_regs));
DEFINE(VCPU_SVC_REGS, offsetof(struct kvm_vcpu, arch.regs.svc_regs));
diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index 1a643f3..e3e86c4 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -54,8 +54,8 @@ static inline void vcpu_cp15_reg64_set(struct kvm_vcpu *vcpu,
const struct coproc_reg *r,
u64 val)
{
- vcpu->arch.cp15[r->reg] = val & 0xffffffff;
- vcpu->arch.cp15[r->reg + 1] = val >> 32;
+ vcpu_cp15(vcpu, r->reg) = val & 0xffffffff;
+ vcpu_cp15(vcpu, r->reg + 1) = val >> 32;
}
static inline u64 vcpu_cp15_reg64_get(struct kvm_vcpu *vcpu,
@@ -63,9 +63,9 @@ static inline u64 vcpu_cp15_reg64_get(struct kvm_vcpu *vcpu,
{
u64 val;
- val = vcpu->arch.cp15[r->reg + 1];
+ val = vcpu_cp15(vcpu, r->reg + 1);
val = val << 32;
- val = val | vcpu->arch.cp15[r->reg];
+ val = val | vcpu_cp15(vcpu, r->reg);
return val;
}
@@ -104,7 +104,7 @@ static void reset_mpidr(struct kvm_vcpu *vcpu, const struct coproc_reg *r)
* vcpu_id, but we read the 'U' bit from the underlying
* hardware directly.
*/
- vcpu->arch.cp15[c0_MPIDR] = ((read_cpuid_mpidr() & MPIDR_SMP_BITMASK) |
+ vcpu_cp15(vcpu, c0_MPIDR) = ((read_cpuid_mpidr() & MPIDR_SMP_BITMASK) |
((vcpu->vcpu_id >> 2) << MPIDR_LEVEL_BITS) |
(vcpu->vcpu_id & 3));
}
@@ -117,7 +117,7 @@ static bool access_actlr(struct kvm_vcpu *vcpu,
if (p->is_write)
return ignore_write(vcpu, p);
- *vcpu_reg(vcpu, p->Rt1) = vcpu->arch.cp15[c1_ACTLR];
+ *vcpu_reg(vcpu, p->Rt1) = vcpu_cp15(vcpu, c1_ACTLR);
return true;
}
@@ -139,7 +139,7 @@ static bool access_l2ctlr(struct kvm_vcpu *vcpu,
if (p->is_write)
return ignore_write(vcpu, p);
- *vcpu_reg(vcpu, p->Rt1) = vcpu->arch.cp15[c9_L2CTLR];
+ *vcpu_reg(vcpu, p->Rt1) = vcpu_cp15(vcpu, c9_L2CTLR);
return true;
}
@@ -156,7 +156,7 @@ static void reset_l2ctlr(struct kvm_vcpu *vcpu, const struct coproc_reg *r)
ncores = min(ncores, 3U);
l2ctlr |= (ncores & 3) << 24;
- vcpu->arch.cp15[c9_L2CTLR] = l2ctlr;
+ vcpu_cp15(vcpu, c9_L2CTLR) = l2ctlr;
}
static void reset_actlr(struct kvm_vcpu *vcpu, const struct coproc_reg *r)
@@ -171,7 +171,7 @@ static void reset_actlr(struct kvm_vcpu *vcpu, const struct coproc_reg *r)
else
actlr &= ~(1U << 6);
- vcpu->arch.cp15[c1_ACTLR] = actlr;
+ vcpu_cp15(vcpu, c1_ACTLR) = actlr;
}
/*
@@ -218,9 +218,9 @@ bool access_vm_reg(struct kvm_vcpu *vcpu,
BUG_ON(!p->is_write);
- vcpu->arch.cp15[r->reg] = *vcpu_reg(vcpu, p->Rt1);
+ vcpu_cp15(vcpu, r->reg) = *vcpu_reg(vcpu, p->Rt1);
if (p->is_64bit)
- vcpu->arch.cp15[r->reg + 1] = *vcpu_reg(vcpu, p->Rt2);
+ vcpu_cp15(vcpu, r->reg + 1) = *vcpu_reg(vcpu, p->Rt2);
kvm_toggle_cache(vcpu, was_enabled);
return true;
@@ -1030,7 +1030,7 @@ int kvm_arm_coproc_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
val = vcpu_cp15_reg64_get(vcpu, r);
ret = reg_to_user(uaddr, &val, reg->id);
} else if (KVM_REG_SIZE(reg->id) == 4) {
- ret = reg_to_user(uaddr, &vcpu->arch.cp15[r->reg], reg->id);
+ ret = reg_to_user(uaddr, &vcpu_cp15(vcpu, r->reg), reg->id);
}
return ret;
@@ -1060,7 +1060,7 @@ int kvm_arm_coproc_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
if (!ret)
vcpu_cp15_reg64_set(vcpu, r, val);
} else if (KVM_REG_SIZE(reg->id) == 4) {
- ret = reg_from_user(&vcpu->arch.cp15[r->reg], uaddr, reg->id);
+ ret = reg_from_user(&vcpu_cp15(vcpu, r->reg), uaddr, reg->id);
}
return ret;
@@ -1248,7 +1248,7 @@ void kvm_reset_coprocs(struct kvm_vcpu *vcpu)
const struct coproc_reg *table;
/* Catch someone adding a register without putting in reset entry. */
- memset(vcpu->arch.cp15, 0x42, sizeof(vcpu->arch.cp15));
+ memset(vcpu->arch.ctxt.cp15, 0x42, sizeof(vcpu->arch.ctxt.cp15));
/* Generic chip reset first (so target could override). */
reset_coproc_regs(vcpu, cp15_regs, ARRAY_SIZE(cp15_regs));
@@ -1257,6 +1257,6 @@ void kvm_reset_coprocs(struct kvm_vcpu *vcpu)
reset_coproc_regs(vcpu, table, num);
for (num = 1; num < NR_CP15_REGS; num++)
- if (vcpu->arch.cp15[num] == 0x42424242)
- panic("Didn't reset vcpu->arch.cp15[%zi]", num);
+ if (vcpu_cp15(vcpu, num) == 0x42424242)
+ panic("Didn't reset vcpu_cp15(vcpu, %zi)", num);
}
diff --git a/arch/arm/kvm/coproc.h b/arch/arm/kvm/coproc.h
index 88d24a3..2735132 100644
--- a/arch/arm/kvm/coproc.h
+++ b/arch/arm/kvm/coproc.h
@@ -47,7 +47,7 @@ struct coproc_reg {
/* Initialization for vcpu. */
void (*reset)(struct kvm_vcpu *, const struct coproc_reg *);
- /* Index into vcpu->arch.cp15[], or 0 if we don't need to save it. */
+ /* Index into vcpu_cp15(vcpu, ...), or 0 if we don't need to save it. */
unsigned long reg;
/* Value (usually reset value) */
@@ -104,25 +104,25 @@ static inline void reset_unknown(struct kvm_vcpu *vcpu,
const struct coproc_reg *r)
{
BUG_ON(!r->reg);
- BUG_ON(r->reg >= ARRAY_SIZE(vcpu->arch.cp15));
- vcpu->arch.cp15[r->reg] = 0xdecafbad;
+ BUG_ON(r->reg >= ARRAY_SIZE(vcpu->arch.ctxt.cp15));
+ vcpu_cp15(vcpu, r->reg) = 0xdecafbad;
}
static inline void reset_val(struct kvm_vcpu *vcpu, const struct coproc_reg *r)
{
BUG_ON(!r->reg);
- BUG_ON(r->reg >= ARRAY_SIZE(vcpu->arch.cp15));
- vcpu->arch.cp15[r->reg] = r->val;
+ BUG_ON(r->reg >= ARRAY_SIZE(vcpu->arch.ctxt.cp15));
+ vcpu_cp15(vcpu, r->reg) = r->val;
}
static inline void reset_unknown64(struct kvm_vcpu *vcpu,
const struct coproc_reg *r)
{
BUG_ON(!r->reg);
- BUG_ON(r->reg + 1 >= ARRAY_SIZE(vcpu->arch.cp15));
+ BUG_ON(r->reg + 1 >= ARRAY_SIZE(vcpu->arch.ctxt.cp15));
- vcpu->arch.cp15[r->reg] = 0xdecafbad;
- vcpu->arch.cp15[r->reg+1] = 0xd0c0ffee;
+ vcpu_cp15(vcpu, r->reg) = 0xdecafbad;
+ vcpu_cp15(vcpu, r->reg+1) = 0xd0c0ffee;
}
static inline int cmp_reg(const struct coproc_reg *i1,
diff --git a/arch/arm/kvm/emulate.c b/arch/arm/kvm/emulate.c
index dc99159..ee161b1 100644
--- a/arch/arm/kvm/emulate.c
+++ b/arch/arm/kvm/emulate.c
@@ -266,8 +266,8 @@ void kvm_skip_instr(struct kvm_vcpu *vcpu, bool is_wide_instr)
static u32 exc_vector_base(struct kvm_vcpu *vcpu)
{
- u32 sctlr = vcpu->arch.cp15[c1_SCTLR];
- u32 vbar = vcpu->arch.cp15[c12_VBAR];
+ u32 sctlr = vcpu_cp15(vcpu, c1_SCTLR);
+ u32 vbar = vcpu_cp15(vcpu, c12_VBAR);
if (sctlr & SCTLR_V)
return 0xffff0000;
@@ -282,7 +282,7 @@ static u32 exc_vector_base(struct kvm_vcpu *vcpu)
static void kvm_update_psr(struct kvm_vcpu *vcpu, unsigned long mode)
{
unsigned long cpsr = *vcpu_cpsr(vcpu);
- u32 sctlr = vcpu->arch.cp15[c1_SCTLR];
+ u32 sctlr = vcpu_cp15(vcpu, c1_SCTLR);
*vcpu_cpsr(vcpu) = (cpsr & ~MODE_MASK) | mode;
@@ -357,22 +357,22 @@ static void inject_abt(struct kvm_vcpu *vcpu, bool is_pabt, unsigned long addr)
if (is_pabt) {
/* Set IFAR and IFSR */
- vcpu->arch.cp15[c6_IFAR] = addr;
- is_lpae = (vcpu->arch.cp15[c2_TTBCR] >> 31);
+ vcpu_cp15(vcpu, c6_IFAR) = addr;
+ is_lpae = (vcpu_cp15(vcpu, c2_TTBCR) >> 31);
/* Always give debug fault for now - should give guest a clue */
if (is_lpae)
- vcpu->arch.cp15[c5_IFSR] = 1 << 9 | 0x22;
+ vcpu_cp15(vcpu, c5_IFSR) = 1 << 9 | 0x22;
else
- vcpu->arch.cp15[c5_IFSR] = 2;
+ vcpu_cp15(vcpu, c5_IFSR) = 2;
} else { /* !iabt */
/* Set DFAR and DFSR */
- vcpu->arch.cp15[c6_DFAR] = addr;
- is_lpae = (vcpu->arch.cp15[c2_TTBCR] >> 31);
+ vcpu_cp15(vcpu, c6_DFAR) = addr;
+ is_lpae = (vcpu_cp15(vcpu, c2_TTBCR) >> 31);
/* Always give debug fault for now - should give guest a clue */
if (is_lpae)
- vcpu->arch.cp15[c5_DFSR] = 1 << 9 | 0x22;
+ vcpu_cp15(vcpu, c5_DFSR) = 1 << 9 | 0x22;
else
- vcpu->arch.cp15[c5_DFSR] = 2;
+ vcpu_cp15(vcpu, c5_DFSR) = 2;
}
}
diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
index 51a5950..b9d9531 100644
--- a/arch/arm/kvm/interrupts_head.S
+++ b/arch/arm/kvm/interrupts_head.S
@@ -4,7 +4,8 @@
#define VCPU_USR_REG(_reg_nr) (VCPU_USR_REGS + (_reg_nr * 4))
#define VCPU_USR_SP (VCPU_USR_REG(13))
#define VCPU_USR_LR (VCPU_USR_REG(14))
-#define CP15_OFFSET(_cp15_reg_idx) (VCPU_CP15 + (_cp15_reg_idx * 4))
+#define VCPU_CP15_BASE (VCPU_GUEST_CTXT + CPU_CTXT_CP15)
+#define CP15_OFFSET(_cp15_reg_idx) (VCPU_CP15_BASE + (_cp15_reg_idx * 4))
/*
* Many of these macros need to access the VCPU structure, which is always
--
2.1.4
* [PATCH v2 05/28] ARM: KVM: Move GP registers into the CPU context structure
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-04 11:00 ` Marc Zyngier
-1 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
Continuing our rework of the CPU context, we now move the GP
registers into the CPU context structure.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/include/asm/kvm_emulate.h | 8 ++++----
arch/arm/include/asm/kvm_host.h | 3 +--
arch/arm/kernel/asm-offsets.c | 18 +++++++++---------
arch/arm/kvm/emulate.c | 12 ++++++------
arch/arm/kvm/guest.c | 4 ++--
arch/arm/kvm/interrupts_head.S | 11 +++++++++++
arch/arm/kvm/reset.c | 2 +-
7 files changed, 34 insertions(+), 24 deletions(-)
diff --git a/arch/arm/include/asm/kvm_emulate.h b/arch/arm/include/asm/kvm_emulate.h
index 32bb52a..f710616 100644
--- a/arch/arm/include/asm/kvm_emulate.h
+++ b/arch/arm/include/asm/kvm_emulate.h
@@ -68,12 +68,12 @@ static inline bool vcpu_mode_is_32bit(struct kvm_vcpu *vcpu)
static inline unsigned long *vcpu_pc(struct kvm_vcpu *vcpu)
{
- return &vcpu->arch.regs.usr_regs.ARM_pc;
+ return &vcpu->arch.ctxt.gp_regs.usr_regs.ARM_pc;
}
static inline unsigned long *vcpu_cpsr(struct kvm_vcpu *vcpu)
{
- return &vcpu->arch.regs.usr_regs.ARM_cpsr;
+ return &vcpu->arch.ctxt.gp_regs.usr_regs.ARM_cpsr;
}
static inline void vcpu_set_thumb(struct kvm_vcpu *vcpu)
@@ -83,13 +83,13 @@ static inline void vcpu_set_thumb(struct kvm_vcpu *vcpu)
static inline bool mode_has_spsr(struct kvm_vcpu *vcpu)
{
- unsigned long cpsr_mode = vcpu->arch.regs.usr_regs.ARM_cpsr & MODE_MASK;
+ unsigned long cpsr_mode = vcpu->arch.ctxt.gp_regs.usr_regs.ARM_cpsr & MODE_MASK;
return (cpsr_mode > USR_MODE && cpsr_mode < SYSTEM_MODE);
}
static inline bool vcpu_mode_priv(struct kvm_vcpu *vcpu)
{
- unsigned long cpsr_mode = vcpu->arch.regs.usr_regs.ARM_cpsr & MODE_MASK;
+ unsigned long cpsr_mode = vcpu->arch.ctxt.gp_regs.usr_regs.ARM_cpsr & MODE_MASK;
return cpsr_mode > USR_MODE;;
}
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 4203701..02932ba 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -89,6 +89,7 @@ struct kvm_vcpu_fault_info {
};
struct kvm_cpu_context {
+ struct kvm_regs gp_regs;
struct vfp_hard_struct vfp;
u32 cp15[NR_CP15_REGS];
};
@@ -98,8 +99,6 @@ typedef struct kvm_cpu_context kvm_cpu_context_t;
struct kvm_vcpu_arch {
struct kvm_cpu_context ctxt;
- struct kvm_regs regs;
-
int target; /* Processor target */
DECLARE_BITMAP(features, KVM_VCPU_MAX_FEATURES);
diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
index 43f8b01..2f3e0b0 100644
--- a/arch/arm/kernel/asm-offsets.c
+++ b/arch/arm/kernel/asm-offsets.c
@@ -176,15 +176,15 @@ int main(void)
DEFINE(VCPU_HOST_CTXT, offsetof(struct kvm_vcpu, arch.host_cpu_context));
DEFINE(CPU_CTXT_VFP, offsetof(struct kvm_cpu_context, vfp));
DEFINE(CPU_CTXT_CP15, offsetof(struct kvm_cpu_context, cp15));
- DEFINE(VCPU_REGS, offsetof(struct kvm_vcpu, arch.regs));
- DEFINE(VCPU_USR_REGS, offsetof(struct kvm_vcpu, arch.regs.usr_regs));
- DEFINE(VCPU_SVC_REGS, offsetof(struct kvm_vcpu, arch.regs.svc_regs));
- DEFINE(VCPU_ABT_REGS, offsetof(struct kvm_vcpu, arch.regs.abt_regs));
- DEFINE(VCPU_UND_REGS, offsetof(struct kvm_vcpu, arch.regs.und_regs));
- DEFINE(VCPU_IRQ_REGS, offsetof(struct kvm_vcpu, arch.regs.irq_regs));
- DEFINE(VCPU_FIQ_REGS, offsetof(struct kvm_vcpu, arch.regs.fiq_regs));
- DEFINE(VCPU_PC, offsetof(struct kvm_vcpu, arch.regs.usr_regs.ARM_pc));
- DEFINE(VCPU_CPSR, offsetof(struct kvm_vcpu, arch.regs.usr_regs.ARM_cpsr));
+ DEFINE(CPU_CTXT_GP_REGS, offsetof(struct kvm_cpu_context, gp_regs));
+ DEFINE(GP_REGS_USR, offsetof(struct kvm_regs, usr_regs));
+ DEFINE(GP_REGS_SVC, offsetof(struct kvm_regs, svc_regs));
+ DEFINE(GP_REGS_ABT, offsetof(struct kvm_regs, abt_regs));
+ DEFINE(GP_REGS_UND, offsetof(struct kvm_regs, und_regs));
+ DEFINE(GP_REGS_IRQ, offsetof(struct kvm_regs, irq_regs));
+ DEFINE(GP_REGS_FIQ, offsetof(struct kvm_regs, fiq_regs));
+ DEFINE(GP_REGS_PC, offsetof(struct kvm_regs, usr_regs.ARM_pc));
+ DEFINE(GP_REGS_CPSR, offsetof(struct kvm_regs, usr_regs.ARM_cpsr));
DEFINE(VCPU_HCR, offsetof(struct kvm_vcpu, arch.hcr));
DEFINE(VCPU_IRQ_LINES, offsetof(struct kvm_vcpu, arch.irq_lines));
DEFINE(VCPU_HSR, offsetof(struct kvm_vcpu, arch.fault.hsr));
diff --git a/arch/arm/kvm/emulate.c b/arch/arm/kvm/emulate.c
index ee161b1..a494def 100644
--- a/arch/arm/kvm/emulate.c
+++ b/arch/arm/kvm/emulate.c
@@ -112,7 +112,7 @@ static const unsigned long vcpu_reg_offsets[VCPU_NR_MODES][15] = {
*/
unsigned long *vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num)
{
- unsigned long *reg_array = (unsigned long *)&vcpu->arch.regs;
+ unsigned long *reg_array = (unsigned long *)&vcpu->arch.ctxt.gp_regs;
unsigned long mode = *vcpu_cpsr(vcpu) & MODE_MASK;
switch (mode) {
@@ -147,15 +147,15 @@ unsigned long *vcpu_spsr(struct kvm_vcpu *vcpu)
unsigned long mode = *vcpu_cpsr(vcpu) & MODE_MASK;
switch (mode) {
case SVC_MODE:
- return &vcpu->arch.regs.KVM_ARM_SVC_spsr;
+ return &vcpu->arch.ctxt.gp_regs.KVM_ARM_SVC_spsr;
case ABT_MODE:
- return &vcpu->arch.regs.KVM_ARM_ABT_spsr;
+ return &vcpu->arch.ctxt.gp_regs.KVM_ARM_ABT_spsr;
case UND_MODE:
- return &vcpu->arch.regs.KVM_ARM_UND_spsr;
+ return &vcpu->arch.ctxt.gp_regs.KVM_ARM_UND_spsr;
case IRQ_MODE:
- return &vcpu->arch.regs.KVM_ARM_IRQ_spsr;
+ return &vcpu->arch.ctxt.gp_regs.KVM_ARM_IRQ_spsr;
case FIQ_MODE:
- return &vcpu->arch.regs.KVM_ARM_FIQ_spsr;
+ return &vcpu->arch.ctxt.gp_regs.KVM_ARM_FIQ_spsr;
default:
BUG();
}
diff --git a/arch/arm/kvm/guest.c b/arch/arm/kvm/guest.c
index 5fa69d7..86e26fb 100644
--- a/arch/arm/kvm/guest.c
+++ b/arch/arm/kvm/guest.c
@@ -55,7 +55,7 @@ static u64 core_reg_offset_from_id(u64 id)
static int get_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
{
u32 __user *uaddr = (u32 __user *)(long)reg->addr;
- struct kvm_regs *regs = &vcpu->arch.regs;
+ struct kvm_regs *regs = &vcpu->arch.ctxt.gp_regs;
u64 off;
if (KVM_REG_SIZE(reg->id) != 4)
@@ -72,7 +72,7 @@ static int get_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
{
u32 __user *uaddr = (u32 __user *)(long)reg->addr;
- struct kvm_regs *regs = &vcpu->arch.regs;
+ struct kvm_regs *regs = &vcpu->arch.ctxt.gp_regs;
u64 off, val;
if (KVM_REG_SIZE(reg->id) != 4)
diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
index b9d9531..e0943cb8 100644
--- a/arch/arm/kvm/interrupts_head.S
+++ b/arch/arm/kvm/interrupts_head.S
@@ -1,6 +1,17 @@
#include <linux/irqchip/arm-gic.h>
#include <asm/assembler.h>
+/* Compat macros, until we get rid of this file entirely */
+#define VCPU_GP_REGS (VCPU_GUEST_CTXT + CPU_CTXT_GP_REGS)
+#define VCPU_USR_REGS (VCPU_GP_REGS + GP_REGS_USR)
+#define VCPU_SVC_REGS (VCPU_GP_REGS + GP_REGS_SVC)
+#define VCPU_ABT_REGS (VCPU_GP_REGS + GP_REGS_ABT)
+#define VCPU_UND_REGS (VCPU_GP_REGS + GP_REGS_UND)
+#define VCPU_IRQ_REGS (VCPU_GP_REGS + GP_REGS_IRQ)
+#define VCPU_FIQ_REGS (VCPU_GP_REGS + GP_REGS_FIQ)
+#define VCPU_PC (VCPU_GP_REGS + GP_REGS_PC)
+#define VCPU_CPSR (VCPU_GP_REGS + GP_REGS_CPSR)
+
#define VCPU_USR_REG(_reg_nr) (VCPU_USR_REGS + (_reg_nr * 4))
#define VCPU_USR_SP (VCPU_USR_REG(13))
#define VCPU_USR_LR (VCPU_USR_REG(14))
diff --git a/arch/arm/kvm/reset.c b/arch/arm/kvm/reset.c
index eeb85858..0048b5a 100644
--- a/arch/arm/kvm/reset.c
+++ b/arch/arm/kvm/reset.c
@@ -71,7 +71,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
}
/* Reset core registers */
- memcpy(&vcpu->arch.regs, reset_regs, sizeof(vcpu->arch.regs));
+ memcpy(&vcpu->arch.ctxt.gp_regs, reset_regs, sizeof(vcpu->arch.ctxt.gp_regs));
/* Reset CP15 registers */
kvm_reset_coprocs(vcpu);
--
2.1.4
* [PATCH v2 06/28] ARM: KVM: Add a HYP-specific header file
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-04 11:00 ` Marc Zyngier
1 sibling, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
In order to expose the various HYP services that are private to
the hypervisor, add a new hyp.h file.
So far, it only contains mundane things such as section annotation
and VA manipulation.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/hyp/hyp.h | 30 ++++++++++++++++++++++++++++++
1 file changed, 30 insertions(+)
create mode 100644 arch/arm/kvm/hyp/hyp.h
diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
new file mode 100644
index 0000000..c723870
--- /dev/null
+++ b/arch/arm/kvm/hyp/hyp.h
@@ -0,0 +1,30 @@
+/*
+ * Copyright (C) 2015 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ARM_KVM_HYP_H__
+#define __ARM_KVM_HYP_H__
+
+#include <linux/compiler.h>
+#include <linux/kvm_host.h>
+#include <asm/kvm_mmu.h>
+
+#define __hyp_text __section(.hyp.text) notrace
+
+#define kern_hyp_va(v) (v)
+#define hyp_kern_va(v) (v)
+
+#endif /* __ARM_KVM_HYP_H__ */
--
2.1.4
^ permalink raw reply related [flat|nested] 138+ messages in thread
* [PATCH v2 07/28] ARM: KVM: Add system register accessor macros
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-04 11:00 ` Marc Zyngier
1 sibling, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
In order to move system register (CP15, mostly) access to C code,
add a few macros to facilitate this, and minimize the difference
between 32 and 64bit CP15 registers.
This will get heavily used in the following patches.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/hyp/hyp.h | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
index c723870..727089f 100644
--- a/arch/arm/kvm/hyp/hyp.h
+++ b/arch/arm/kvm/hyp/hyp.h
@@ -27,4 +27,19 @@
#define kern_hyp_va(v) (v)
#define hyp_kern_va(v) (v)
+#define __ACCESS_CP15(CRn, Op1, CRm, Op2) \
+ "mrc", "mcr", __stringify(p15, Op1, %0, CRn, CRm, Op2), u32
+#define __ACCESS_CP15_64(Op1, CRm) \
+ "mrrc", "mcrr", __stringify(p15, Op1, %Q0, %R0, CRm), u64
+
+#define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
+#define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
+
+#define __read_sysreg(r, w, c, t) ({ \
+ t __val; \
+ asm volatile(r " " c : "=r" (__val)); \
+ __val; \
+})
+#define read_sysreg(...) __read_sysreg(__VA_ARGS__)
+
#endif /* __ARM_KVM_HYP_H__ */
--
2.1.4
^ permalink raw reply related [flat|nested] 138+ messages in thread
* [PATCH v2 08/28] ARM: KVM: Add TLB invalidation code
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-04 11:00 ` Marc Zyngier
1 sibling, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
Convert the TLB invalidation code to C, hooking it into the
build system whilst we're at it.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/Makefile | 1 +
arch/arm/kvm/hyp/Makefile | 5 ++++
arch/arm/kvm/hyp/hyp.h | 5 ++++
arch/arm/kvm/hyp/tlb.c | 71 +++++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 82 insertions(+)
create mode 100644 arch/arm/kvm/hyp/Makefile
create mode 100644 arch/arm/kvm/hyp/tlb.c
diff --git a/arch/arm/kvm/Makefile b/arch/arm/kvm/Makefile
index c5eef02c..eb1bf43 100644
--- a/arch/arm/kvm/Makefile
+++ b/arch/arm/kvm/Makefile
@@ -17,6 +17,7 @@ AFLAGS_interrupts.o := -Wa,-march=armv7-a$(plus_virt)
KVM := ../../../virt/kvm
kvm-arm-y = $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o $(KVM)/vfio.o
+obj-$(CONFIG_KVM_ARM_HOST) += hyp/
obj-y += kvm-arm.o init.o interrupts.o
obj-y += arm.o handle_exit.o guest.o mmu.o emulate.o reset.o
obj-y += coproc.o coproc_a15.o coproc_a7.o mmio.o psci.o perf.o
diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
new file mode 100644
index 0000000..36c760d
--- /dev/null
+++ b/arch/arm/kvm/hyp/Makefile
@@ -0,0 +1,5 @@
+#
+# Makefile for Kernel-based Virtual Machine module, HYP part
+#
+
+obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
index 727089f..5808bbd 100644
--- a/arch/arm/kvm/hyp/hyp.h
+++ b/arch/arm/kvm/hyp/hyp.h
@@ -42,4 +42,9 @@
})
#define read_sysreg(...) __read_sysreg(__VA_ARGS__)
+#define VTTBR __ACCESS_CP15_64(6, c2)
+#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
+#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
+#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
+
#endif /* __ARM_KVM_HYP_H__ */
diff --git a/arch/arm/kvm/hyp/tlb.c b/arch/arm/kvm/hyp/tlb.c
new file mode 100644
index 0000000..993fe89
--- /dev/null
+++ b/arch/arm/kvm/hyp/tlb.c
@@ -0,0 +1,71 @@
+/*
+ * Original code:
+ * Copyright (C) 2012 - Virtual Open Systems and Columbia University
+ * Author: Christoffer Dall <c.dall@virtualopensystems.com>
+ *
+ * Mostly rewritten in C by Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "hyp.h"
+
+/**
+ * Flush per-VMID TLBs
+ *
+ * __kvm_tlb_flush_vmid(struct kvm *kvm);
+ *
+ * We rely on the hardware to broadcast the TLB invalidation to all CPUs
+ * inside the inner-shareable domain (which is the case for all v7
+ * implementations). If we come across a non-IS SMP implementation, we'll
+ * have to use an IPI based mechanism. Until then, we stick to the simple
+ * hardware assisted version.
+ *
+ * As v7 does not support flushing per IPA, just nuke the whole TLB
+ * instead, ignoring the ipa value.
+ */
+static void __hyp_text __tlb_flush_vmid(struct kvm *kvm)
+{
+ dsb(ishst);
+
+ /* Switch to requested VMID */
+ kvm = kern_hyp_va(kvm);
+ write_sysreg(kvm->arch.vttbr, VTTBR);
+ isb();
+
+ write_sysreg(0, TLBIALLIS);
+ dsb(ish);
+ isb();
+
+ write_sysreg(0, VTTBR);
+}
+
+__alias(__tlb_flush_vmid) void __weak __kvm_tlb_flush_vmid(struct kvm *kvm);
+
+static void __hyp_text __tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
+{
+ __tlb_flush_vmid(kvm);
+}
+
+__alias(__tlb_flush_vmid_ipa) void __weak __kvm_tlb_flush_vmid_ipa(struct kvm *kvm,
+ phys_addr_t ipa);
+
+static void __hyp_text __tlb_flush_vm_context(void)
+{
+ dsb(ishst);
+ write_sysreg(0, TLBIALLNSNHIS);
+ write_sysreg(0, ICIALLUIS);
+ dsb(ish);
+}
+
+__alias(__tlb_flush_vm_context) void __weak __kvm_flush_vm_context(void);
--
2.1.4
^ permalink raw reply related [flat|nested] 138+ messages in thread
* [PATCH v2 09/28] ARM: KVM: Add CP15 save/restore code
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-04 11:00 ` Marc Zyngier
1 sibling, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
Convert the CP15 save/restore code to C.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/hyp/Makefile | 1 +
arch/arm/kvm/hyp/cp15-sr.c | 84 ++++++++++++++++++++++++++++++++++++++++++++++
arch/arm/kvm/hyp/hyp.h | 28 ++++++++++++++++
3 files changed, 113 insertions(+)
create mode 100644 arch/arm/kvm/hyp/cp15-sr.c
diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
index 36c760d..9f96fcb 100644
--- a/arch/arm/kvm/hyp/Makefile
+++ b/arch/arm/kvm/hyp/Makefile
@@ -3,3 +3,4 @@
#
obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
+obj-$(CONFIG_KVM_ARM_HOST) += cp15-sr.o
diff --git a/arch/arm/kvm/hyp/cp15-sr.c b/arch/arm/kvm/hyp/cp15-sr.c
new file mode 100644
index 0000000..732abbc
--- /dev/null
+++ b/arch/arm/kvm/hyp/cp15-sr.c
@@ -0,0 +1,84 @@
+/*
+ * Original code:
+ * Copyright (C) 2012 - Virtual Open Systems and Columbia University
+ * Author: Christoffer Dall <c.dall@virtualopensystems.com>
+ *
+ * Mostly rewritten in C by Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "hyp.h"
+
+static u64 *cp15_64(struct kvm_cpu_context *ctxt, int idx)
+{
+ return (u64 *)(ctxt->cp15 + idx);
+}
+
+void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
+{
+ ctxt->cp15[c0_MPIDR] = read_sysreg(VMPIDR);
+ ctxt->cp15[c0_CSSELR] = read_sysreg(CSSELR);
+ ctxt->cp15[c1_SCTLR] = read_sysreg(SCTLR);
+ ctxt->cp15[c1_CPACR] = read_sysreg(CPACR);
+ *cp15_64(ctxt, c2_TTBR0) = read_sysreg(TTBR0);
+ *cp15_64(ctxt, c2_TTBR1) = read_sysreg(TTBR1);
+ ctxt->cp15[c2_TTBCR] = read_sysreg(TTBCR);
+ ctxt->cp15[c3_DACR] = read_sysreg(DACR);
+ ctxt->cp15[c5_DFSR] = read_sysreg(DFSR);
+ ctxt->cp15[c5_IFSR] = read_sysreg(IFSR);
+ ctxt->cp15[c5_ADFSR] = read_sysreg(ADFSR);
+ ctxt->cp15[c5_AIFSR] = read_sysreg(AIFSR);
+ ctxt->cp15[c6_DFAR] = read_sysreg(DFAR);
+ ctxt->cp15[c6_IFAR] = read_sysreg(IFAR);
+ *cp15_64(ctxt, c7_PAR) = read_sysreg(PAR);
+ ctxt->cp15[c10_PRRR] = read_sysreg(PRRR);
+ ctxt->cp15[c10_NMRR] = read_sysreg(NMRR);
+ ctxt->cp15[c10_AMAIR0] = read_sysreg(AMAIR0);
+ ctxt->cp15[c10_AMAIR1] = read_sysreg(AMAIR1);
+ ctxt->cp15[c12_VBAR] = read_sysreg(VBAR);
+ ctxt->cp15[c13_CID] = read_sysreg(CID);
+ ctxt->cp15[c13_TID_URW] = read_sysreg(TID_URW);
+ ctxt->cp15[c13_TID_URO] = read_sysreg(TID_URO);
+ ctxt->cp15[c13_TID_PRIV] = read_sysreg(TID_PRIV);
+ ctxt->cp15[c14_CNTKCTL] = read_sysreg(CNTKCTL);
+}
+
+void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
+{
+ write_sysreg(ctxt->cp15[c0_MPIDR], VMPIDR);
+ write_sysreg(ctxt->cp15[c0_CSSELR], CSSELR);
+ write_sysreg(ctxt->cp15[c1_SCTLR], SCTLR);
+ write_sysreg(ctxt->cp15[c1_CPACR], CPACR);
+ write_sysreg(*cp15_64(ctxt, c2_TTBR0), TTBR0);
+ write_sysreg(*cp15_64(ctxt, c2_TTBR1), TTBR1);
+ write_sysreg(ctxt->cp15[c2_TTBCR], TTBCR);
+ write_sysreg(ctxt->cp15[c3_DACR], DACR);
+ write_sysreg(ctxt->cp15[c5_DFSR], DFSR);
+ write_sysreg(ctxt->cp15[c5_IFSR], IFSR);
+ write_sysreg(ctxt->cp15[c5_ADFSR], ADFSR);
+ write_sysreg(ctxt->cp15[c5_AIFSR], AIFSR);
+ write_sysreg(ctxt->cp15[c6_DFAR], DFAR);
+ write_sysreg(ctxt->cp15[c6_IFAR], IFAR);
+ write_sysreg(*cp15_64(ctxt, c7_PAR), PAR);
+ write_sysreg(ctxt->cp15[c10_PRRR], PRRR);
+ write_sysreg(ctxt->cp15[c10_NMRR], NMRR);
+ write_sysreg(ctxt->cp15[c10_AMAIR0], AMAIR0);
+ write_sysreg(ctxt->cp15[c10_AMAIR1], AMAIR1);
+ write_sysreg(ctxt->cp15[c12_VBAR], VBAR);
+ write_sysreg(ctxt->cp15[c13_CID], CID);
+ write_sysreg(ctxt->cp15[c13_TID_URW], TID_URW);
+ write_sysreg(ctxt->cp15[c13_TID_URO], TID_URO);
+ write_sysreg(ctxt->cp15[c13_TID_PRIV], TID_PRIV);
+ write_sysreg(ctxt->cp15[c14_CNTKCTL], CNTKCTL);
+}
diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
index 5808bbd..ab2cb82 100644
--- a/arch/arm/kvm/hyp/hyp.h
+++ b/arch/arm/kvm/hyp/hyp.h
@@ -42,9 +42,37 @@
})
#define read_sysreg(...) __read_sysreg(__VA_ARGS__)
+#define TTBR0 __ACCESS_CP15_64(0, c2)
+#define TTBR1 __ACCESS_CP15_64(1, c2)
#define VTTBR __ACCESS_CP15_64(6, c2)
+#define PAR __ACCESS_CP15_64(0, c7)
+#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
+#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
+#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
+#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
+#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
+#define DACR __ACCESS_CP15(c3, 0, c0, 0)
+#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
+#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
+#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
+#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
+#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
+#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
+#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
+#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
+#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
+#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
+#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
+#define CID __ACCESS_CP15(c13, 0, c0, 1)
+#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
+#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
+#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
+#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
+
+void __sysreg_save_state(struct kvm_cpu_context *ctxt);
+void __sysreg_restore_state(struct kvm_cpu_context *ctxt);
#endif /* __ARM_KVM_HYP_H__ */
--
2.1.4
^ permalink raw reply related [flat|nested] 138+ messages in thread
+#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
+#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
+
+void __sysreg_save_state(struct kvm_cpu_context *ctxt);
+void __sysreg_restore_state(struct kvm_cpu_context *ctxt);
#endif /* __ARM_KVM_HYP_H__ */
--
2.1.4
* [PATCH v2 10/28] ARM: KVM: Add timer save/restore
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-04 11:00 ` Marc Zyngier
0 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
This patch shouldn't exist, as we should be able to reuse the
arm64 version for free. I'll get there eventually, but in the
meantime I need a timer ticking.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/hyp/Makefile | 1 +
arch/arm/kvm/hyp/hyp.h | 8 +++++
arch/arm/kvm/hyp/timer-sr.c | 71 +++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 80 insertions(+)
create mode 100644 arch/arm/kvm/hyp/timer-sr.c
diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
index 9f96fcb..9241ae8 100644
--- a/arch/arm/kvm/hyp/Makefile
+++ b/arch/arm/kvm/hyp/Makefile
@@ -4,3 +4,4 @@
obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
obj-$(CONFIG_KVM_ARM_HOST) += cp15-sr.o
+obj-$(CONFIG_KVM_ARM_HOST) += timer-sr.o
diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
index ab2cb82..4924418 100644
--- a/arch/arm/kvm/hyp/hyp.h
+++ b/arch/arm/kvm/hyp/hyp.h
@@ -46,6 +46,9 @@
#define TTBR1 __ACCESS_CP15_64(1, c2)
#define VTTBR __ACCESS_CP15_64(6, c2)
#define PAR __ACCESS_CP15_64(0, c7)
+#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
+#define CNTVOFF __ACCESS_CP15_64(4, c14)
+
#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
@@ -71,6 +74,11 @@
#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
+#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
+#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
+
+void __timer_save_state(struct kvm_vcpu *vcpu);
+void __timer_restore_state(struct kvm_vcpu *vcpu);
void __sysreg_save_state(struct kvm_cpu_context *ctxt);
void __sysreg_restore_state(struct kvm_cpu_context *ctxt);
diff --git a/arch/arm/kvm/hyp/timer-sr.c b/arch/arm/kvm/hyp/timer-sr.c
new file mode 100644
index 0000000..d7535fd
--- /dev/null
+++ b/arch/arm/kvm/hyp/timer-sr.c
@@ -0,0 +1,71 @@
+/*
+ * Copyright (C) 2012-2015 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <clocksource/arm_arch_timer.h>
+#include <linux/compiler.h>
+#include <linux/kvm_host.h>
+
+#include <asm/kvm_mmu.h>
+
+#include "hyp.h"
+
+/* vcpu is already in the HYP VA space */
+void __hyp_text __timer_save_state(struct kvm_vcpu *vcpu)
+{
+ struct kvm *kvm = kern_hyp_va(vcpu->kvm);
+ struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+ u64 val;
+
+ if (kvm->arch.timer.enabled) {
+ timer->cntv_ctl = read_sysreg(CNTV_CTL);
+ timer->cntv_cval = read_sysreg(CNTV_CVAL);
+ }
+
+ /* Disable the virtual timer */
+ write_sysreg(0, CNTV_CTL);
+
+ /* Allow physical timer/counter access for the host */
+ val = read_sysreg(CNTHCTL);
+ val |= CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN;
+ write_sysreg(val, CNTHCTL);
+
+ /* Clear cntvoff for the host */
+ write_sysreg(0, CNTVOFF);
+}
+
+void __hyp_text __timer_restore_state(struct kvm_vcpu *vcpu)
+{
+ struct kvm *kvm = kern_hyp_va(vcpu->kvm);
+ struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+ u64 val;
+
+ /*
+ * Disallow physical timer access for the guest
+ * Physical counter access is allowed
+ */
+ val = read_sysreg(CNTHCTL);
+ val &= ~CNTHCTL_EL1PCEN;
+ val |= CNTHCTL_EL1PCTEN;
+ write_sysreg(val, CNTHCTL);
+
+ if (kvm->arch.timer.enabled) {
+ write_sysreg(kvm->arch.timer.cntvoff, CNTVOFF);
+ write_sysreg(timer->cntv_cval, CNTV_CVAL);
+ isb();
+ write_sysreg(timer->cntv_ctl, CNTV_CTL);
+ }
+}
--
2.1.4
* [PATCH v2 11/28] ARM: KVM: Add vgic v2 save/restore
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-04 11:00 ` Marc Zyngier
0 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
This patch shouldn't exist, as we should be able to reuse the
arm64 version for free. I'll get there eventually, but in the
meantime I need an interrupt controller.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/hyp/Makefile | 1 +
arch/arm/kvm/hyp/hyp.h | 3 ++
arch/arm/kvm/hyp/vgic-v2-sr.c | 84 +++++++++++++++++++++++++++++++++++++++++++
3 files changed, 88 insertions(+)
create mode 100644 arch/arm/kvm/hyp/vgic-v2-sr.c
diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
index 9241ae8..d8acbb6 100644
--- a/arch/arm/kvm/hyp/Makefile
+++ b/arch/arm/kvm/hyp/Makefile
@@ -5,3 +5,4 @@
obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
obj-$(CONFIG_KVM_ARM_HOST) += cp15-sr.o
obj-$(CONFIG_KVM_ARM_HOST) += timer-sr.o
+obj-$(CONFIG_KVM_ARM_HOST) += vgic-v2-sr.o
diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
index 4924418..7eb1c21 100644
--- a/arch/arm/kvm/hyp/hyp.h
+++ b/arch/arm/kvm/hyp/hyp.h
@@ -80,6 +80,9 @@
void __timer_save_state(struct kvm_vcpu *vcpu);
void __timer_restore_state(struct kvm_vcpu *vcpu);
+void __vgic_v2_save_state(struct kvm_vcpu *vcpu);
+void __vgic_v2_restore_state(struct kvm_vcpu *vcpu);
+
void __sysreg_save_state(struct kvm_cpu_context *ctxt);
void __sysreg_restore_state(struct kvm_cpu_context *ctxt);
diff --git a/arch/arm/kvm/hyp/vgic-v2-sr.c b/arch/arm/kvm/hyp/vgic-v2-sr.c
new file mode 100644
index 0000000..e717612
--- /dev/null
+++ b/arch/arm/kvm/hyp/vgic-v2-sr.c
@@ -0,0 +1,84 @@
+/*
+ * Copyright (C) 2012-2015 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/compiler.h>
+#include <linux/irqchip/arm-gic.h>
+#include <linux/kvm_host.h>
+
+#include <asm/kvm_mmu.h>
+
+#include "hyp.h"
+
+/* vcpu is already in the HYP VA space */
+void __hyp_text __vgic_v2_save_state(struct kvm_vcpu *vcpu)
+{
+ struct kvm *kvm = kern_hyp_va(vcpu->kvm);
+ struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
+ struct vgic_dist *vgic = &kvm->arch.vgic;
+ void __iomem *base = kern_hyp_va(vgic->vctrl_base);
+ u32 eisr0, eisr1, elrsr0, elrsr1;
+ int i, nr_lr;
+
+ if (!base)
+ return;
+
+ nr_lr = vcpu->arch.vgic_cpu.nr_lr;
+ cpu_if->vgic_vmcr = readl_relaxed(base + GICH_VMCR);
+ cpu_if->vgic_misr = readl_relaxed(base + GICH_MISR);
+ eisr0 = readl_relaxed(base + GICH_EISR0);
+ elrsr0 = readl_relaxed(base + GICH_ELRSR0);
+ if (unlikely(nr_lr > 32)) {
+ eisr1 = readl_relaxed(base + GICH_EISR1);
+ elrsr1 = readl_relaxed(base + GICH_ELRSR1);
+ } else {
+ eisr1 = elrsr1 = 0;
+ }
+#ifdef CONFIG_CPU_BIG_ENDIAN
+ cpu_if->vgic_eisr = ((u64)eisr0 << 32) | eisr1;
+ cpu_if->vgic_elrsr = ((u64)elrsr0 << 32) | elrsr1;
+#else
+ cpu_if->vgic_eisr = ((u64)eisr1 << 32) | eisr0;
+ cpu_if->vgic_elrsr = ((u64)elrsr1 << 32) | elrsr0;
+#endif
+ cpu_if->vgic_apr = readl_relaxed(base + GICH_APR);
+
+ writel_relaxed(0, base + GICH_HCR);
+
+ for (i = 0; i < nr_lr; i++)
+ cpu_if->vgic_lr[i] = readl_relaxed(base + GICH_LR0 + (i * 4));
+}
+
+/* vcpu is already in the HYP VA space */
+void __hyp_text __vgic_v2_restore_state(struct kvm_vcpu *vcpu)
+{
+ struct kvm *kvm = kern_hyp_va(vcpu->kvm);
+ struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
+ struct vgic_dist *vgic = &kvm->arch.vgic;
+ void __iomem *base = kern_hyp_va(vgic->vctrl_base);
+ int i, nr_lr;
+
+ if (!base)
+ return;
+
+ writel_relaxed(cpu_if->vgic_hcr, base + GICH_HCR);
+ writel_relaxed(cpu_if->vgic_vmcr, base + GICH_VMCR);
+ writel_relaxed(cpu_if->vgic_apr, base + GICH_APR);
+
+ nr_lr = vcpu->arch.vgic_cpu.nr_lr;
+ for (i = 0; i < nr_lr; i++)
+ writel_relaxed(cpu_if->vgic_lr[i], base + GICH_LR0 + (i * 4));
+}
--
2.1.4
* [PATCH v2 12/28] ARM: KVM: Add VFP save/restore
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-04 11:00 ` Marc Zyngier
0 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
This is almost a copy/paste of the existing version, with a couple
of subtle differences:
- Only write to FPEXC once on the save path
- Add an isb when enabling VFP access
The patch also defines a few sysreg accessors and a __vfp_enabled
predicate that tests the VFP trapping state.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/hyp/Makefile | 1 +
arch/arm/kvm/hyp/hyp.h | 13 +++++++++
arch/arm/kvm/hyp/vfp.S | 68 +++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 82 insertions(+)
create mode 100644 arch/arm/kvm/hyp/vfp.S
diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
index d8acbb6..5a45f4c 100644
--- a/arch/arm/kvm/hyp/Makefile
+++ b/arch/arm/kvm/hyp/Makefile
@@ -6,3 +6,4 @@ obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
obj-$(CONFIG_KVM_ARM_HOST) += cp15-sr.o
obj-$(CONFIG_KVM_ARM_HOST) += timer-sr.o
obj-$(CONFIG_KVM_ARM_HOST) += vgic-v2-sr.o
+obj-$(CONFIG_KVM_ARM_HOST) += vfp.o
diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
index 7eb1c21..dce0f73 100644
--- a/arch/arm/kvm/hyp/hyp.h
+++ b/arch/arm/kvm/hyp/hyp.h
@@ -21,6 +21,7 @@
#include <linux/compiler.h>
#include <linux/kvm_host.h>
#include <asm/kvm_mmu.h>
+#include <asm/vfp.h>
#define __hyp_text __section(.hyp.text) notrace
@@ -31,6 +32,8 @@
"mrc", "mcr", __stringify(p15, Op1, %0, CRn, CRm, Op2), u32
#define __ACCESS_CP15_64(Op1, CRm) \
"mrrc", "mcrr", __stringify(p15, Op1, %Q0, %R0, CRm), u64
+#define __ACCESS_VFP(CRn) \
+ "mrc", "mcr", __stringify(p10, 7, %0, CRn, cr0, 0), u32
#define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
#define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
@@ -53,6 +56,7 @@
#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
+#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
#define DACR __ACCESS_CP15(c3, 0, c0, 0)
#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
@@ -77,6 +81,8 @@
#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
+#define VFP_FPEXC __ACCESS_VFP(FPEXC)
+
void __timer_save_state(struct kvm_vcpu *vcpu);
void __timer_restore_state(struct kvm_vcpu *vcpu);
@@ -86,4 +92,11 @@ void __vgic_v2_restore_state(struct kvm_vcpu *vcpu);
void __sysreg_save_state(struct kvm_cpu_context *ctxt);
void __sysreg_restore_state(struct kvm_cpu_context *ctxt);
+void asmlinkage __vfp_save_state(struct vfp_hard_struct *vfp);
+void asmlinkage __vfp_restore_state(struct vfp_hard_struct *vfp);
+static inline bool __vfp_enabled(void)
+{
+ return !(read_sysreg(HCPTR) & (HCPTR_TCP(11) | HCPTR_TCP(10)));
+}
+
#endif /* __ARM_KVM_HYP_H__ */
diff --git a/arch/arm/kvm/hyp/vfp.S b/arch/arm/kvm/hyp/vfp.S
new file mode 100644
index 0000000..7c297e8
--- /dev/null
+++ b/arch/arm/kvm/hyp/vfp.S
@@ -0,0 +1,68 @@
+/*
+ * Copyright (C) 2012 - Virtual Open Systems and Columbia University
+ * Author: Christoffer Dall <c.dall@virtualopensystems.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/linkage.h>
+#include <asm/vfpmacros.h>
+
+ .text
+ .pushsection .hyp.text, "ax"
+
+/* void __vfp_save_state(struct vfp_hard_struct *vfp); */
+ENTRY(__vfp_save_state)
+ push {r4, r5}
+ VFPFMRX r1, FPEXC
+
+ @ Make sure VFP is *really* enabled so we can touch the registers.
+ orr r5, r1, #FPEXC_EN
+ tst r5, #FPEXC_EX @ Check for VFP Subarchitecture
+ bic r5, r5, #FPEXC_EX @ FPEXC_EX disable
+ VFPFMXR FPEXC, r5
+ isb
+
+ VFPFMRX r2, FPSCR
+ beq 1f
+
+ @ If FPEXC_EX is 0, then FPINST/FPINST2 reads are unpredictable, so
+ @ we only need to save them if FPEXC_EX is set.
+ VFPFMRX r3, FPINST
+ tst r5, #FPEXC_FP2V
+ VFPFMRX r4, FPINST2, ne @ vmrsne
+1:
+ VFPFSTMIA r0, r5 @ Save VFP registers
+ stm r0, {r1-r4} @ Save FPEXC, FPSCR, FPINST, FPINST2
+ pop {r4, r5}
+ bx lr
+ENDPROC(__vfp_save_state)
+
+/* void __vfp_restore_state(struct vfp_hard_struct *vfp);
+ * Assume FPEXC_EN is on and FPEXC_EX is off */
+ENTRY(__vfp_restore_state)
+ VFPFLDMIA r0, r1 @ Load VFP registers
+ ldm r0, {r0-r3} @ Load FPEXC, FPSCR, FPINST, FPINST2
+
+ VFPFMXR FPSCR, r1
+ tst r0, #FPEXC_EX @ Check for VFP Subarchitecture
+ beq 1f
+ VFPFMXR FPINST, r2
+ tst r0, #FPEXC_FP2V
+ VFPFMXR FPINST2, r3, ne
+1:
+ VFPFMXR FPEXC, r0 @ FPEXC (last, in case !EN)
+ bx lr
+ENDPROC(__vfp_restore_state)
+
+ .popsection
--
2.1.4
* [PATCH v2 13/28] ARM: KVM: Add banked registers save/restore
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-04 11:00 ` Marc Zyngier
0 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
Banked registers are one of the many perks of the 32bit architecture,
and the world switch needs to cope with it.
This requires some "special" accessors, as these are not accessed
using a standard coprocessor instruction.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/hyp/Makefile | 1 +
arch/arm/kvm/hyp/banked-sr.c | 77 ++++++++++++++++++++++++++++++++++++++++++++
arch/arm/kvm/hyp/hyp.h | 11 +++++++
3 files changed, 89 insertions(+)
create mode 100644 arch/arm/kvm/hyp/banked-sr.c
diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
index 5a45f4c..173bd1d 100644
--- a/arch/arm/kvm/hyp/Makefile
+++ b/arch/arm/kvm/hyp/Makefile
@@ -7,3 +7,4 @@ obj-$(CONFIG_KVM_ARM_HOST) += cp15-sr.o
obj-$(CONFIG_KVM_ARM_HOST) += timer-sr.o
obj-$(CONFIG_KVM_ARM_HOST) += vgic-v2-sr.o
obj-$(CONFIG_KVM_ARM_HOST) += vfp.o
+obj-$(CONFIG_KVM_ARM_HOST) += banked-sr.o
diff --git a/arch/arm/kvm/hyp/banked-sr.c b/arch/arm/kvm/hyp/banked-sr.c
new file mode 100644
index 0000000..d02dc80
--- /dev/null
+++ b/arch/arm/kvm/hyp/banked-sr.c
@@ -0,0 +1,77 @@
+/*
+ * Original code:
+ * Copyright (C) 2012 - Virtual Open Systems and Columbia University
+ * Author: Christoffer Dall <c.dall@virtualopensystems.com>
+ *
+ * Mostly rewritten in C by Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "hyp.h"
+
+__asm__(".arch_extension virt");
+
+void __hyp_text __banked_save_state(struct kvm_cpu_context *ctxt)
+{
+ ctxt->gp_regs.usr_regs.ARM_sp = read_special(SP_usr);
+ ctxt->gp_regs.usr_regs.ARM_pc = read_special(ELR_hyp);
+ ctxt->gp_regs.usr_regs.ARM_cpsr = read_special(SPSR);
+ ctxt->gp_regs.KVM_ARM_SVC_sp = read_special(SP_svc);
+ ctxt->gp_regs.KVM_ARM_SVC_lr = read_special(LR_svc);
+ ctxt->gp_regs.KVM_ARM_SVC_spsr = read_special(SPSR_svc);
+ ctxt->gp_regs.KVM_ARM_ABT_sp = read_special(SP_abt);
+ ctxt->gp_regs.KVM_ARM_ABT_lr = read_special(LR_abt);
+ ctxt->gp_regs.KVM_ARM_ABT_spsr = read_special(SPSR_abt);
+ ctxt->gp_regs.KVM_ARM_UND_sp = read_special(SP_und);
+ ctxt->gp_regs.KVM_ARM_UND_lr = read_special(LR_und);
+ ctxt->gp_regs.KVM_ARM_UND_spsr = read_special(SPSR_und);
+ ctxt->gp_regs.KVM_ARM_IRQ_sp = read_special(SP_irq);
+ ctxt->gp_regs.KVM_ARM_IRQ_lr = read_special(LR_irq);
+ ctxt->gp_regs.KVM_ARM_IRQ_spsr = read_special(SPSR_irq);
+ ctxt->gp_regs.KVM_ARM_FIQ_r8 = read_special(R8_fiq);
+ ctxt->gp_regs.KVM_ARM_FIQ_r9 = read_special(R9_fiq);
+ ctxt->gp_regs.KVM_ARM_FIQ_r10 = read_special(R10_fiq);
+ ctxt->gp_regs.KVM_ARM_FIQ_fp = read_special(R11_fiq);
+ ctxt->gp_regs.KVM_ARM_FIQ_ip = read_special(R12_fiq);
+ ctxt->gp_regs.KVM_ARM_FIQ_sp = read_special(SP_fiq);
+ ctxt->gp_regs.KVM_ARM_FIQ_lr = read_special(LR_fiq);
+ ctxt->gp_regs.KVM_ARM_FIQ_spsr = read_special(SPSR_fiq);
+}
+
+void __hyp_text __banked_restore_state(struct kvm_cpu_context *ctxt)
+{
+ write_special(ctxt->gp_regs.usr_regs.ARM_sp, SP_usr);
+ write_special(ctxt->gp_regs.usr_regs.ARM_pc, ELR_hyp);
+ write_special(ctxt->gp_regs.usr_regs.ARM_cpsr, SPSR_cxsf);
+ write_special(ctxt->gp_regs.KVM_ARM_SVC_sp, SP_svc);
+ write_special(ctxt->gp_regs.KVM_ARM_SVC_lr, LR_svc);
+ write_special(ctxt->gp_regs.KVM_ARM_SVC_spsr, SPSR_svc);
+ write_special(ctxt->gp_regs.KVM_ARM_ABT_sp, SP_abt);
+ write_special(ctxt->gp_regs.KVM_ARM_ABT_lr, LR_abt);
+ write_special(ctxt->gp_regs.KVM_ARM_ABT_spsr, SPSR_abt);
+ write_special(ctxt->gp_regs.KVM_ARM_UND_sp, SP_und);
+ write_special(ctxt->gp_regs.KVM_ARM_UND_lr, LR_und);
+ write_special(ctxt->gp_regs.KVM_ARM_UND_spsr, SPSR_und);
+ write_special(ctxt->gp_regs.KVM_ARM_IRQ_sp, SP_irq);
+ write_special(ctxt->gp_regs.KVM_ARM_IRQ_lr, LR_irq);
+ write_special(ctxt->gp_regs.KVM_ARM_IRQ_spsr, SPSR_irq);
+ write_special(ctxt->gp_regs.KVM_ARM_FIQ_r8, R8_fiq);
+ write_special(ctxt->gp_regs.KVM_ARM_FIQ_r9, R9_fiq);
+ write_special(ctxt->gp_regs.KVM_ARM_FIQ_r10, R10_fiq);
+ write_special(ctxt->gp_regs.KVM_ARM_FIQ_fp, R11_fiq);
+ write_special(ctxt->gp_regs.KVM_ARM_FIQ_ip, R12_fiq);
+ write_special(ctxt->gp_regs.KVM_ARM_FIQ_sp, SP_fiq);
+ write_special(ctxt->gp_regs.KVM_ARM_FIQ_lr, LR_fiq);
+ write_special(ctxt->gp_regs.KVM_ARM_FIQ_spsr, SPSR_fiq);
+}
diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
index dce0f73..278eb1f 100644
--- a/arch/arm/kvm/hyp/hyp.h
+++ b/arch/arm/kvm/hyp/hyp.h
@@ -45,6 +45,14 @@
})
#define read_sysreg(...) __read_sysreg(__VA_ARGS__)
+#define write_special(v, r) \
+ asm volatile("msr " __stringify(r) ", %0" : : "r" (v))
+#define read_special(r) ({ \
+ u32 __val; \
+ asm volatile("mrs %0, " __stringify(r) : "=r" (__val)); \
+ __val; \
+})
+
#define TTBR0 __ACCESS_CP15_64(0, c2)
#define TTBR1 __ACCESS_CP15_64(1, c2)
#define VTTBR __ACCESS_CP15_64(6, c2)
@@ -99,4 +107,7 @@ static inline bool __vfp_enabled(void)
return !(read_sysreg(HCPTR) & (HCPTR_TCP(11) | HCPTR_TCP(10)));
}
+void __hyp_text __banked_save_state(struct kvm_cpu_context *ctxt);
+void __hyp_text __banked_restore_state(struct kvm_cpu_context *ctxt);
+
#endif /* __ARM_KVM_HYP_H__ */
--
2.1.4
^ permalink raw reply related [flat|nested] 138+ messages in thread
* [PATCH v2 14/28] ARM: KVM: Add guest entry code
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-04 11:00 ` Marc Zyngier
-1 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
Add the very minimal piece of code that is now required to jump
into the guest (and return from it). This code is only concerned
with saving/restoring the USR registers (r0-r12+lr for the guest,
r4-r12+lr for the host), as everything else is dealt with in C
(VFP is another matter, though).
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/hyp/Makefile | 1 +
arch/arm/kvm/hyp/entry.S | 70 +++++++++++++++++++++++++++++++++++++++++++++++
arch/arm/kvm/hyp/hyp.h | 2 ++
3 files changed, 73 insertions(+)
create mode 100644 arch/arm/kvm/hyp/entry.S
diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
index 173bd1d..c779690 100644
--- a/arch/arm/kvm/hyp/Makefile
+++ b/arch/arm/kvm/hyp/Makefile
@@ -8,3 +8,4 @@ obj-$(CONFIG_KVM_ARM_HOST) += timer-sr.o
obj-$(CONFIG_KVM_ARM_HOST) += vgic-v2-sr.o
obj-$(CONFIG_KVM_ARM_HOST) += vfp.o
obj-$(CONFIG_KVM_ARM_HOST) += banked-sr.o
+obj-$(CONFIG_KVM_ARM_HOST) += entry.o
diff --git a/arch/arm/kvm/hyp/entry.S b/arch/arm/kvm/hyp/entry.S
new file mode 100644
index 0000000..32f79b0
--- /dev/null
+++ b/arch/arm/kvm/hyp/entry.S
@@ -0,0 +1,70 @@
+/*
+ * Copyright (C) 2016 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+*/
+
+#include <linux/linkage.h>
+#include <asm/asm-offsets.h>
+#include <asm/kvm_arm.h>
+
+ .arch_extension virt
+
+ .text
+ .pushsection .hyp.text, "ax"
+
+#define USR_REGS_OFFSET (CPU_CTXT_GP_REGS + GP_REGS_USR)
+
+/* int __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host) */
+ENTRY(__guest_enter)
+ @ Save host registers
+ add r1, r1, #(USR_REGS_OFFSET + S_R4)
+ stm r1!, {r4-r12}
+ str lr, [r1, #4] @ Skip SP_usr (already saved)
+
+ @ Restore guest registers
+ add r0, r0, #(VCPU_GUEST_CTXT + USR_REGS_OFFSET + S_R0)
+ ldr lr, [r0, #S_LR]
+ ldm r0, {r0-r12}
+
+ clrex
+ eret
+ENDPROC(__guest_enter)
+
+ENTRY(__guest_exit)
+ /*
+ * return convention:
+ * guest r0, r1, r2 saved on the stack
+ * r0: vcpu pointer
+ * r1: exception code
+ */
+
+ add r2, r0, #(VCPU_GUEST_CTXT + USR_REGS_OFFSET + S_R3)
+ stm r2!, {r3-r12}
+ str lr, [r2, #4]
+ add r2, r0, #(VCPU_GUEST_CTXT + USR_REGS_OFFSET + S_R0)
+ pop {r3, r4, r5} @ r0, r1, r2
+ stm r2, {r3-r5}
+
+ ldr r0, [r0, #VCPU_HOST_CTXT]
+ add r0, r0, #(USR_REGS_OFFSET + S_R4)
+ ldm r0!, {r4-r12}
+ ldr lr, [r0, #4]
+
+ mov r0, r1
+ bx lr
+ENDPROC(__guest_exit)
+
+ .popsection
+
diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
index 278eb1f..b3f6ed2 100644
--- a/arch/arm/kvm/hyp/hyp.h
+++ b/arch/arm/kvm/hyp/hyp.h
@@ -110,4 +110,6 @@ static inline bool __vfp_enabled(void)
void __hyp_text __banked_save_state(struct kvm_cpu_context *ctxt);
void __hyp_text __banked_restore_state(struct kvm_cpu_context *ctxt);
+int asmlinkage __guest_enter(struct kvm_vcpu *vcpu,
+ struct kvm_cpu_context *host);
#endif /* __ARM_KVM_HYP_H__ */
--
2.1.4
^ permalink raw reply related [flat|nested] 138+ messages in thread
* [PATCH v2 15/28] ARM: KVM: Add VFP lazy save/restore handler
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-04 11:00 ` Marc Zyngier
-1 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
Similar to the arm64 version, add the code that deals with VFP traps:
re-enabling VFP, saving/restoring the registers, and resuming the guest.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/hyp/entry.S | 31 +++++++++++++++++++++++++++++++
1 file changed, 31 insertions(+)
diff --git a/arch/arm/kvm/hyp/entry.S b/arch/arm/kvm/hyp/entry.S
index 32f79b0..21c2388 100644
--- a/arch/arm/kvm/hyp/entry.S
+++ b/arch/arm/kvm/hyp/entry.S
@@ -66,5 +66,36 @@ ENTRY(__guest_exit)
bx lr
ENDPROC(__guest_exit)
+/*
+ * If VFPv3 support is not available, then we will not switch the VFP
+ * registers; however, cp10 and cp11 accesses will still trap and fall
+ * back to the regular coprocessor emulation code, which currently will
+ * inject an undefined exception to the guest.
+ */
+#ifdef CONFIG_VFPv3
+ENTRY(__vfp_guest_restore)
+ push {r3, r4, lr}
+
+ @ NEON/VFP used. Turn on VFP access.
+ mrc p15, 4, r1, c1, c1, 2 @ HCPTR
+ bic r1, r1, #(HCPTR_TCP(10) | HCPTR_TCP(11))
+ mcr p15, 4, r1, c1, c1, 2 @ HCPTR
+ isb
+
+ @ Switch VFP/NEON hardware state to the guest's
+ mov r4, r0
+ ldr r0, [r0, #VCPU_HOST_CTXT]
+ add r0, r0, #CPU_CTXT_VFP
+ bl __vfp_save_state
+ add r0, r4, #(VCPU_GUEST_CTXT + CPU_CTXT_VFP)
+ bl __vfp_restore_state
+
+ pop {r3, r4, lr}
+ pop {r0, r1, r2}
+ clrex
+ eret
+ENDPROC(__vfp_guest_restore)
+#endif
+
.popsection
--
2.1.4
^ permalink raw reply related [flat|nested] 138+ messages in thread
* [PATCH v2 16/28] ARM: KVM: Add the new world switch implementation
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-04 11:00 ` Marc Zyngier
-1 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
The new world switch implementation is modeled after the arm64 one,
calling the various save/restore functions in turn, and keeping as
little state as possible.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/hyp/Makefile | 1 +
arch/arm/kvm/hyp/hyp.h | 7 +++
arch/arm/kvm/hyp/switch.c | 136 ++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 144 insertions(+)
create mode 100644 arch/arm/kvm/hyp/switch.c
diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
index c779690..cfab402 100644
--- a/arch/arm/kvm/hyp/Makefile
+++ b/arch/arm/kvm/hyp/Makefile
@@ -9,3 +9,4 @@ obj-$(CONFIG_KVM_ARM_HOST) += vgic-v2-sr.o
obj-$(CONFIG_KVM_ARM_HOST) += vfp.o
obj-$(CONFIG_KVM_ARM_HOST) += banked-sr.o
obj-$(CONFIG_KVM_ARM_HOST) += entry.o
+obj-$(CONFIG_KVM_ARM_HOST) += switch.o
diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
index b3f6ed2..2ca651f 100644
--- a/arch/arm/kvm/hyp/hyp.h
+++ b/arch/arm/kvm/hyp/hyp.h
@@ -60,11 +60,16 @@
#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
#define CNTVOFF __ACCESS_CP15_64(4, c14)
+#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
+#define VMIDR __ACCESS_CP15(c0, 4, c0, 0)
#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
+#define HCR __ACCESS_CP15(c1, 4, c1, 0)
+#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
+#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
#define DACR __ACCESS_CP15(c3, 0, c0, 0)
#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
@@ -73,6 +78,7 @@
#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
+#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
@@ -85,6 +91,7 @@
#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
+#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
diff --git a/arch/arm/kvm/hyp/switch.c b/arch/arm/kvm/hyp/switch.c
new file mode 100644
index 0000000..f715b0d
--- /dev/null
+++ b/arch/arm/kvm/hyp/switch.c
@@ -0,0 +1,136 @@
+/*
+ * Copyright (C) 2015 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <asm/kvm_asm.h>
+#include "hyp.h"
+
+__asm__(".arch_extension virt");
+
+static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu, u32 *fpexc)
+{
+ u32 val;
+
+ /*
+ * We are about to set HCPTR.TCP10/11 to trap all floating point
+ * register accesses to HYP, however, the ARM ARM clearly states that
+ * traps are only taken to HYP if the operation would not otherwise
+ * trap to SVC. Therefore, always make sure that for 32-bit guests,
+ * we set FPEXC.EN to prevent traps to SVC, when setting the TCP bits.
+ */
+ val = read_sysreg(VFP_FPEXC);
+ *fpexc = val;
+ if (!(val & FPEXC_EN)) {
+ write_sysreg(val | FPEXC_EN, VFP_FPEXC);
+ isb();
+ }
+
+ write_sysreg(vcpu->arch.hcr | vcpu->arch.irq_lines, HCR);
+ /* Trap on AArch32 cp15 c15 accesses (EL1 or EL0) */
+ write_sysreg(HSTR_T(15), HSTR);
+ write_sysreg(HCPTR_TTA | HCPTR_TCP(10) | HCPTR_TCP(11), HCPTR);
+ val = read_sysreg(HDCR);
+ write_sysreg(val | HDCR_TPM | HDCR_TPMCR, HDCR);
+}
+
+static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
+{
+ u32 val;
+
+ write_sysreg(0, HCR);
+ write_sysreg(0, HSTR);
+ val = read_sysreg(HDCR);
+ write_sysreg(val & ~(HDCR_TPM | HDCR_TPMCR), HDCR);
+ write_sysreg(0, HCPTR);
+}
+
+static void __hyp_text __activate_vm(struct kvm_vcpu *vcpu)
+{
+ struct kvm *kvm = kern_hyp_va(vcpu->kvm);
+ write_sysreg(kvm->arch.vttbr, VTTBR);
+ write_sysreg(vcpu->arch.midr, VMIDR);
+}
+
+static void __hyp_text __deactivate_vm(struct kvm_vcpu *vcpu)
+{
+ write_sysreg(0, VTTBR);
+ write_sysreg(read_sysreg(MIDR), VMIDR);
+}
+
+static void __hyp_text __vgic_save_state(struct kvm_vcpu *vcpu)
+{
+ __vgic_v2_save_state(vcpu);
+}
+
+static void __hyp_text __vgic_restore_state(struct kvm_vcpu *vcpu)
+{
+ __vgic_v2_restore_state(vcpu);
+}
+
+static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
+{
+ struct kvm_cpu_context *host_ctxt;
+ struct kvm_cpu_context *guest_ctxt;
+ bool fp_enabled;
+ u64 exit_code;
+ u32 fpexc;
+
+ vcpu = kern_hyp_va(vcpu);
+ write_sysreg(vcpu, HTPIDR);
+
+ host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
+ guest_ctxt = &vcpu->arch.ctxt;
+
+ __sysreg_save_state(host_ctxt);
+ __banked_save_state(host_ctxt);
+
+ __activate_traps(vcpu, &fpexc);
+ __activate_vm(vcpu);
+
+ __vgic_restore_state(vcpu);
+ __timer_restore_state(vcpu);
+
+ __sysreg_restore_state(guest_ctxt);
+ __banked_restore_state(guest_ctxt);
+
+ /* Jump in the fire! */
+ exit_code = __guest_enter(vcpu, host_ctxt);
+ /* And we're baaack! */
+
+ fp_enabled = __vfp_enabled();
+
+ __banked_save_state(guest_ctxt);
+ __sysreg_save_state(guest_ctxt);
+ __timer_save_state(vcpu);
+ __vgic_save_state(vcpu);
+
+ __deactivate_traps(vcpu);
+ __deactivate_vm(vcpu);
+
+ __banked_restore_state(host_ctxt);
+ __sysreg_restore_state(host_ctxt);
+
+ if (fp_enabled) {
+ __vfp_save_state(&guest_ctxt->vfp);
+ __vfp_restore_state(&host_ctxt->vfp);
+ }
+
+ write_sysreg(fpexc, VFP_FPEXC);
+
+ return exit_code;
+}
+
+__alias(__guest_run) int __weak __kvm_vcpu_run(struct kvm_vcpu *vcpu);
--
2.1.4
^ permalink raw reply related [flat|nested] 138+ messages in thread
* [PATCH v2 16/28] ARM: KVM: Add the new world switch implementation
@ 2016-02-04 11:00 ` Marc Zyngier
0 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: linux-arm-kernel
The new world switch implementation is modeled after the arm64 one,
calling the various save/restore functions in turn, and having as
little state as possible.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/hyp/Makefile | 1 +
arch/arm/kvm/hyp/hyp.h | 7 +++
arch/arm/kvm/hyp/switch.c | 136 ++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 144 insertions(+)
create mode 100644 arch/arm/kvm/hyp/switch.c
diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
index c779690..cfab402 100644
--- a/arch/arm/kvm/hyp/Makefile
+++ b/arch/arm/kvm/hyp/Makefile
@@ -9,3 +9,4 @@ obj-$(CONFIG_KVM_ARM_HOST) += vgic-v2-sr.o
obj-$(CONFIG_KVM_ARM_HOST) += vfp.o
obj-$(CONFIG_KVM_ARM_HOST) += banked-sr.o
obj-$(CONFIG_KVM_ARM_HOST) += entry.o
+obj-$(CONFIG_KVM_ARM_HOST) += switch.o
diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
index b3f6ed2..2ca651f 100644
--- a/arch/arm/kvm/hyp/hyp.h
+++ b/arch/arm/kvm/hyp/hyp.h
@@ -60,11 +60,16 @@
#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
#define CNTVOFF __ACCESS_CP15_64(4, c14)
+#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
+#define VMIDR __ACCESS_CP15(c0, 4, c0, 0)
#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
+#define HCR __ACCESS_CP15(c1, 4, c1, 0)
+#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
+#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
#define DACR __ACCESS_CP15(c3, 0, c0, 0)
#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
@@ -73,6 +78,7 @@
#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
+#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
@@ -85,6 +91,7 @@
#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
+#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
diff --git a/arch/arm/kvm/hyp/switch.c b/arch/arm/kvm/hyp/switch.c
new file mode 100644
index 0000000..f715b0d
--- /dev/null
+++ b/arch/arm/kvm/hyp/switch.c
@@ -0,0 +1,136 @@
+/*
+ * Copyright (C) 2015 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <asm/kvm_asm.h>
+#include "hyp.h"
+
+__asm__(".arch_extension virt");
+
+static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu, u32 *fpexc)
+{
+ u32 val;
+
+ /*
+ * We are about to set HCPTR.TCP10/11 to trap all floating point
+ * register accesses to HYP, however, the ARM ARM clearly states that
+ * traps are only taken to HYP if the operation would not otherwise
+ * trap to SVC. Therefore, always make sure that for 32-bit guests,
+ * we set FPEXC.EN to prevent traps to SVC, when setting the TCP bits.
+ */
+ val = read_sysreg(VFP_FPEXC);
+ *fpexc = val;
+ if (!(val & FPEXC_EN)) {
+ write_sysreg(val | FPEXC_EN, VFP_FPEXC);
+ isb();
+ }
+
+ write_sysreg(vcpu->arch.hcr | vcpu->arch.irq_lines, HCR);
+ /* Trap on AArch32 cp15 c15 accesses (EL1 or EL0) */
+ write_sysreg(HSTR_T(15), HSTR);
+ write_sysreg(HCPTR_TTA | HCPTR_TCP(10) | HCPTR_TCP(11), HCPTR);
+ val = read_sysreg(HDCR);
+ write_sysreg(val | HDCR_TPM | HDCR_TPMCR, HDCR);
+}
+
+static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
+{
+ u32 val;
+
+ write_sysreg(0, HCR);
+ write_sysreg(0, HSTR);
+ val = read_sysreg(HDCR);
+ write_sysreg(val & ~(HDCR_TPM | HDCR_TPMCR), HDCR);
+ write_sysreg(0, HCPTR);
+}
+
+static void __hyp_text __activate_vm(struct kvm_vcpu *vcpu)
+{
+ struct kvm *kvm = kern_hyp_va(vcpu->kvm);
+ write_sysreg(kvm->arch.vttbr, VTTBR);
+ write_sysreg(vcpu->arch.midr, VMIDR);
+}
+
+static void __hyp_text __deactivate_vm(struct kvm_vcpu *vcpu)
+{
+ write_sysreg(0, VTTBR);
+ write_sysreg(read_sysreg(MIDR), VMIDR);
+}
+
+static void __hyp_text __vgic_save_state(struct kvm_vcpu *vcpu)
+{
+ __vgic_v2_save_state(vcpu);
+}
+
+static void __hyp_text __vgic_restore_state(struct kvm_vcpu *vcpu)
+{
+ __vgic_v2_restore_state(vcpu);
+}
+
+static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
+{
+ struct kvm_cpu_context *host_ctxt;
+ struct kvm_cpu_context *guest_ctxt;
+ bool fp_enabled;
+ u64 exit_code;
+ u32 fpexc;
+
+ vcpu = kern_hyp_va(vcpu);
+ write_sysreg(vcpu, HTPIDR);
+
+ host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
+ guest_ctxt = &vcpu->arch.ctxt;
+
+ __sysreg_save_state(host_ctxt);
+ __banked_save_state(host_ctxt);
+
+ __activate_traps(vcpu, &fpexc);
+ __activate_vm(vcpu);
+
+ __vgic_restore_state(vcpu);
+ __timer_restore_state(vcpu);
+
+ __sysreg_restore_state(guest_ctxt);
+ __banked_restore_state(guest_ctxt);
+
+ /* Jump in the fire! */
+ exit_code = __guest_enter(vcpu, host_ctxt);
+ /* And we're baaack! */
+
+ fp_enabled = __vfp_enabled();
+
+ __banked_save_state(guest_ctxt);
+ __sysreg_save_state(guest_ctxt);
+ __timer_save_state(vcpu);
+ __vgic_save_state(vcpu);
+
+ __deactivate_traps(vcpu);
+ __deactivate_vm(vcpu);
+
+ __banked_restore_state(host_ctxt);
+ __sysreg_restore_state(host_ctxt);
+
+ if (fp_enabled) {
+ __vfp_save_state(&guest_ctxt->vfp);
+ __vfp_restore_state(&host_ctxt->vfp);
+ }
+
+ write_sysreg(fpexc, VFP_FPEXC);
+
+ return exit_code;
+}
+
+__alias(__guest_run) int __weak __kvm_vcpu_run(struct kvm_vcpu *vcpu);
--
2.1.4
^ permalink raw reply related [flat|nested] 138+ messages in thread
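[Editor's note] The world-switch above brackets the guest run with strictly symmetric save/restore pairs: host state is saved, guest state installed, and after `__guest_enter()` returns the same operations are undone in reverse order. The following is a minimal userspace C model of that bracketing only — the struct and helper names are stand-ins, not the kernel's, and no real CP15 state is touched:

```c
#include <assert.h>

/*
 * Simplified model of the __guest_run() sequencing from the patch:
 * save host state, install guest state, run the guest, then swap
 * back in strictly reverse order.  struct cpu_context here is a
 * stand-in for the kernel's kvm_cpu_context.
 */
struct cpu_context {
	unsigned int sysregs;	/* stand-in for the CP15 register file */
	unsigned int banked;	/* stand-in for the banked GP registers */
};

static struct cpu_context hw;	/* models the physical CPU state */

static void sysreg_save(struct cpu_context *c)    { c->sysregs = hw.sysregs; }
static void sysreg_restore(struct cpu_context *c) { hw.sysregs = c->sysregs; }
static void banked_save(struct cpu_context *c)    { c->banked = hw.banked; }
static void banked_restore(struct cpu_context *c) { hw.banked = c->banked; }

/* Mirrors the save/restore bracketing of __guest_run() */
static int guest_run(struct cpu_context *host, struct cpu_context *guest)
{
	sysreg_save(host);
	banked_save(host);

	sysreg_restore(guest);
	banked_restore(guest);

	hw.sysregs += 1;	/* the guest perturbs CPU state here */

	banked_save(guest);
	sysreg_save(guest);

	banked_restore(host);
	sysreg_restore(host);

	return 0;		/* exit code */
}
```

The property the real code relies on is visible in the model: after a run, the physical state is exactly the host's again, and the guest context has captured whatever the guest changed.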
* [PATCH v2 17/28] ARM: KVM: Add populating of fault data structure
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-04 11:00 ` Marc Zyngier
-1 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
On guest exit, we must take care of populating our fault data
structure so that the host code can handle it. This includes
resolving the IPA for permission faults, which can result in
restarting the guest.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/hyp/hyp.h | 4 ++++
arch/arm/kvm/hyp/switch.c | 54 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 58 insertions(+)
diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
index 2ca651f..7ddca54 100644
--- a/arch/arm/kvm/hyp/hyp.h
+++ b/arch/arm/kvm/hyp/hyp.h
@@ -76,10 +76,14 @@
#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
+#define HSR __ACCESS_CP15(c5, 4, c2, 0)
#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
+#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
+#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
+#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
diff --git a/arch/arm/kvm/hyp/switch.c b/arch/arm/kvm/hyp/switch.c
index f715b0d..8bfd729 100644
--- a/arch/arm/kvm/hyp/switch.c
+++ b/arch/arm/kvm/hyp/switch.c
@@ -80,6 +80,56 @@ static void __hyp_text __vgic_restore_state(struct kvm_vcpu *vcpu)
__vgic_v2_restore_state(vcpu);
}
+static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
+{
+ u32 hsr = read_sysreg(HSR);
+ u8 ec = hsr >> HSR_EC_SHIFT;
+ u32 hpfar, far;
+
+ vcpu->arch.fault.hsr = hsr;
+
+ if (ec == HSR_EC_IABT)
+ far = read_sysreg(HIFAR);
+ else if (ec == HSR_EC_DABT)
+ far = read_sysreg(HDFAR);
+ else
+ return true;
+
+ /*
+ * B3.13.5 Reporting exceptions taken to the Non-secure PL2 mode:
+ *
+ * Abort on the stage 2 translation for a memory access from a
+ * Non-secure PL1 or PL0 mode:
+ *
+ * For any Access flag fault or Translation fault, and also for any
+ * Permission fault on the stage 2 translation of a memory access
+ * made as part of a translation table walk for a stage 1 translation,
+ * the HPFAR holds the IPA that caused the fault. Otherwise, the HPFAR
+ * is UNKNOWN.
+ */
+ if (!(hsr & HSR_DABT_S1PTW) && (hsr & HSR_FSC_TYPE) == FSC_PERM) {
+ u64 par, tmp;
+
+ par = read_sysreg(PAR);
+ write_sysreg(far, ATS1CPR);
+ isb();
+
+ tmp = read_sysreg(PAR);
+ write_sysreg(par, PAR);
+
+ if (unlikely(tmp & 1))
+ return false; /* Translation failed, back to guest */
+
+ hpfar = ((tmp >> 12) & ((1UL << 28) - 1)) << 4;
+ } else {
+ hpfar = read_sysreg(HPFAR);
+ }
+
+ vcpu->arch.fault.hxfar = far;
+ vcpu->arch.fault.hpfar = hpfar;
+ return true;
+}
+
static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
{
struct kvm_cpu_context *host_ctxt;
@@ -107,9 +157,13 @@ static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
__banked_restore_state(guest_ctxt);
/* Jump in the fire! */
+again:
exit_code = __guest_enter(vcpu, host_ctxt);
/* And we're baaack! */
+ if (exit_code == ARM_EXCEPTION_HVC && !__populate_fault_info(vcpu))
+ goto again;
+
fp_enabled = __vfp_enabled();
__banked_save_state(guest_ctxt);
--
2.1.4
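[Editor's note] The most subtle line in `__populate_fault_info()` is the repacking of the address-translation result into HPFAR format: PAR holds the physical address in bits [39:12], while HPFAR reports IPA[39:12] in bits [31:4]. A small C sketch of just that bit manipulation, with helper names of our own invention:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the PAR -> HPFAR repacking done in __populate_fault_info().
 * PAR carries PA[39:12] in bits [39:12]; HPFAR carries IPA[39:12] in
 * bits [31:4].  These helpers are illustrative, not kernel code.
 */
static uint32_t par_to_hpfar(uint64_t par)
{
	/* Same expression as the patch: isolate PA[39:12], shift to [31:4] */
	return ((par >> 12) & ((1ULL << 28) - 1)) << 4;
}

/* Recover the page-aligned IPA that a given HPFAR value encodes. */
static uint64_t hpfar_to_ipa(uint32_t hpfar)
{
	return (uint64_t)(hpfar >> 4) << 12;
}
```

Round-tripping a page-aligned address through both helpers should be the identity, which is what the host-side fault handlers depend on when they turn HPFAR back into an IPA.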
* [PATCH v2 18/28] ARM: KVM: Add HYP mode entry code
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-04 11:00 ` Marc Zyngier
-1 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
This part is almost entirely borrowed from the existing code, just
slightly simplifying the HYP function call (as we now save SPSR_hyp
in the world switch).
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/hyp/Makefile | 1 +
arch/arm/kvm/hyp/hyp-entry.S | 157 +++++++++++++++++++++++++++++++++++++++++++
arch/arm/kvm/hyp/hyp.h | 2 +
3 files changed, 160 insertions(+)
create mode 100644 arch/arm/kvm/hyp/hyp-entry.S
diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
index cfab402..a7d3a7e 100644
--- a/arch/arm/kvm/hyp/Makefile
+++ b/arch/arm/kvm/hyp/Makefile
@@ -9,4 +9,5 @@ obj-$(CONFIG_KVM_ARM_HOST) += vgic-v2-sr.o
obj-$(CONFIG_KVM_ARM_HOST) += vfp.o
obj-$(CONFIG_KVM_ARM_HOST) += banked-sr.o
obj-$(CONFIG_KVM_ARM_HOST) += entry.o
+obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o
obj-$(CONFIG_KVM_ARM_HOST) += switch.o
diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
new file mode 100644
index 0000000..44bc11f
--- /dev/null
+++ b/arch/arm/kvm/hyp/hyp-entry.S
@@ -0,0 +1,157 @@
+/*
+ * Copyright (C) 2012 - Virtual Open Systems and Columbia University
+ * Author: Christoffer Dall <c.dall@virtualopensystems.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#include <linux/linkage.h>
+#include <asm/kvm_arm.h>
+#include <asm/kvm_asm.h>
+
+ .arch_extension virt
+
+ .text
+ .pushsection .hyp.text, "ax"
+
+.macro load_vcpu reg
+ mrc p15, 4, \reg, c13, c0, 2 @ HTPIDR
+.endm
+
+/********************************************************************
+ * Hypervisor exception vector and handlers
+ *
+ *
+ * The KVM/ARM Hypervisor ABI is defined as follows:
+ *
+ * Entry to Hyp mode from the host kernel will happen _only_ when an HVC
+ * instruction is issued since all traps are disabled when running the host
+ * kernel as per the Hyp-mode initialization at boot time.
+ *
+ * HVC instructions cause a trap to the vector page + offset 0x14 (see hyp_hvc
+ * below) when the HVC instruction is called from SVC mode (i.e. a guest or the
+ * host kernel) and they cause a trap to the vector page + offset 0x8 when HVC
+ * instructions are called from within Hyp-mode.
+ *
+ * Hyp-ABI: Calling HYP-mode functions from host (in SVC mode):
+ * Switching to Hyp mode is done through a simple HVC #0 instruction. The
+ * exception vector code will check that the HVC comes from VMID==0.
+ * - r0 contains a pointer to a HYP function
+ * - r1, r2, and r3 contain arguments to the above function.
+ * - The HYP function will be called with its arguments in r0, r1 and r2.
+ * On HYP function return, we return directly to SVC.
+ *
+ * Note that the above is used to execute code in Hyp-mode from a host-kernel
+ * point of view, and is a different concept from performing a world-switch and
+ * executing guest code SVC mode (with a VMID != 0).
+ */
+
+ .align 5
+__hyp_vector:
+ .global __hyp_vector
+__kvm_hyp_vector:
+ .weak __kvm_hyp_vector
+
+ @ Hyp-mode exception vector
+ W(b) hyp_reset
+ W(b) hyp_undef
+ W(b) hyp_svc
+ W(b) hyp_pabt
+ W(b) hyp_dabt
+ W(b) hyp_hvc
+ W(b) hyp_irq
+ W(b) hyp_fiq
+
+.macro invalid_vector label, cause
+ .align
+\label: b .
+.endm
+
+ invalid_vector hyp_reset
+ invalid_vector hyp_undef
+ invalid_vector hyp_svc
+ invalid_vector hyp_pabt
+ invalid_vector hyp_dabt
+ invalid_vector hyp_fiq
+
+hyp_hvc:
+ /*
+ * Getting here is either because of a trap from a guest,
+ * or from executing HVC from the host kernel, which means
+ * "do something in Hyp mode".
+ */
+ push {r0, r1, r2}
+
+ @ Check syndrome register
+ mrc p15, 4, r1, c5, c2, 0 @ HSR
+ lsr r0, r1, #HSR_EC_SHIFT
+ cmp r0, #HSR_EC_HVC
+ bne guest_trap @ Not HVC instr.
+
+ /*
+ * Let's check if the HVC came from VMID 0 and allow simple
+ * switch to Hyp mode
+ */
+ mrrc p15, 6, r0, r2, c2
+ lsr r2, r2, #16
+ and r2, r2, #0xff
+ cmp r2, #0
+ bne guest_trap @ Guest called HVC
+
+ /*
+ * Getting here means host called HVC, we shift parameters and branch
+ * to Hyp function.
+ */
+ pop {r0, r1, r2}
+
+ /* Check for __hyp_get_vectors */
+ cmp r0, #-1
+ mrceq p15, 4, r0, c12, c0, 0 @ get HVBAR
+ beq 1f
+
+ push {lr}
+
+ mov lr, r0
+ mov r0, r1
+ mov r1, r2
+ mov r2, r3
+
+THUMB( orr lr, #1)
+ blx lr @ Call the HYP function
+
+ pop {lr}
+1: eret
+
+guest_trap:
+ load_vcpu r0 @ Load VCPU pointer to r0
+
+ @ Check if we need the fault information
+ lsr r1, r1, #HSR_EC_SHIFT
+#ifdef CONFIG_VFPv3
+ cmp r1, #HSR_EC_CP_0_13
+ beq __vfp_guest_restore
+#endif
+
+ mov r1, #ARM_EXCEPTION_HVC
+ b __guest_exit
+
+hyp_irq:
+ push {r0, r1, r2}
+ mov r1, #ARM_EXCEPTION_IRQ
+ load_vcpu r0 @ Load VCPU pointer to r0
+ b __guest_exit
+
+ .ltorg
+
+ .popsection
diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
index 7ddca54..8bbd2a7 100644
--- a/arch/arm/kvm/hyp/hyp.h
+++ b/arch/arm/kvm/hyp/hyp.h
@@ -123,4 +123,6 @@ void __hyp_text __banked_restore_state(struct kvm_cpu_context *ctxt);
int asmlinkage __guest_enter(struct kvm_vcpu *vcpu,
struct kvm_cpu_context *host);
+int asmlinkage __hyp_do_panic(const char *, int, u32);
+
#endif /* __ARM_KVM_HYP_H__ */
--
2.1.4
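[Editor's note] The `hyp_hvc` path above distinguishes a host HVC from a guest trap by checking the current VMID: `mrrc p15, 6, r0, r2, c2` reads VTTBR (low word in r0, high word in r2), and the `lsr`/`and` pair isolates VMID, which ARMv7 keeps in VTTBR[55:48]. Host code runs with VMID == 0. A hedged C illustration of that extraction (function name is ours):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Model of the VMID check in hyp_hvc.  The mrrc instruction leaves
 * VTTBR's high word in r2; VMID lives in VTTBR[55:48], i.e. bits
 * [23:16] of that high word.  Illustrative only, not kernel code.
 */
static uint8_t vttbr_vmid(uint64_t vttbr)
{
	uint32_t hi = vttbr >> 32;	/* what mrrc leaves in r2 */
	return (hi >> 16) & 0xff;	/* lsr r2, r2, #16; and r2, r2, #0xff */
}
```

A non-zero result means the HVC came from a guest and must be routed to `guest_trap`; zero means the host asked for a Hyp function call.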
* [PATCH v2 19/28] ARM: KVM: Add panic handling code
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-04 11:00 ` Marc Zyngier
-1 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
Instead of spinning forever, let's "properly" handle any unexpected
exception ("properly" meaning "print a splat on the console and die").
This has proved useful quite a few times...
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/hyp/hyp-entry.S | 28 +++++++++++++++++++++-------
arch/arm/kvm/hyp/switch.c | 38 ++++++++++++++++++++++++++++++++++++++
2 files changed, 59 insertions(+), 7 deletions(-)
diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
index 44bc11f..ca412ad 100644
--- a/arch/arm/kvm/hyp/hyp-entry.S
+++ b/arch/arm/kvm/hyp/hyp-entry.S
@@ -75,15 +75,29 @@ __kvm_hyp_vector:
.macro invalid_vector label, cause
.align
-\label: b .
+\label: mov r0, #\cause
+ b __hyp_panic
.endm
- invalid_vector hyp_reset
- invalid_vector hyp_undef
- invalid_vector hyp_svc
- invalid_vector hyp_pabt
- invalid_vector hyp_dabt
- invalid_vector hyp_fiq
+ invalid_vector hyp_reset ARM_EXCEPTION_RESET
+ invalid_vector hyp_undef ARM_EXCEPTION_UNDEFINED
+ invalid_vector hyp_svc ARM_EXCEPTION_SOFTWARE
+ invalid_vector hyp_pabt ARM_EXCEPTION_PREF_ABORT
+ invalid_vector hyp_dabt ARM_EXCEPTION_DATA_ABORT
+ invalid_vector hyp_fiq ARM_EXCEPTION_FIQ
+
+ENTRY(__hyp_do_panic)
+ mrs lr, cpsr
+ bic lr, lr, #MODE_MASK
+ orr lr, lr, #SVC_MODE
+THUMB( orr lr, lr, #PSR_T_BIT )
+ msr spsr_cxsf, lr
+ ldr lr, =panic
+ msr ELR_hyp, lr
+ ldr lr, =kvm_call_hyp
+ clrex
+ eret
+ENDPROC(__hyp_do_panic)
hyp_hvc:
/*
diff --git a/arch/arm/kvm/hyp/switch.c b/arch/arm/kvm/hyp/switch.c
index 8bfd729..67f3944 100644
--- a/arch/arm/kvm/hyp/switch.c
+++ b/arch/arm/kvm/hyp/switch.c
@@ -188,3 +188,41 @@ again:
}
__alias(__guest_run) int __weak __kvm_vcpu_run(struct kvm_vcpu *vcpu);
+
+static const char * const __hyp_panic_string[] = {
+ [ARM_EXCEPTION_RESET] = "\nHYP panic: RST?? PC:%08x CPSR:%08x",
+ [ARM_EXCEPTION_UNDEFINED] = "\nHYP panic: UNDEF PC:%08x CPSR:%08x",
+ [ARM_EXCEPTION_SOFTWARE] = "\nHYP panic: SVC?? PC:%08x CPSR:%08x",
+ [ARM_EXCEPTION_PREF_ABORT] = "\nHYP panic: PABRT PC:%08x CPSR:%08x",
+ [ARM_EXCEPTION_DATA_ABORT] = "\nHYP panic: DABRT PC:%08x ADDR:%08x",
+ [ARM_EXCEPTION_IRQ] = "\nHYP panic: IRQ?? PC:%08x CPSR:%08x",
+ [ARM_EXCEPTION_FIQ] = "\nHYP panic: FIQ?? PC:%08x CPSR:%08x",
+ [ARM_EXCEPTION_HVC] = "\nHYP panic: HVC?? PC:%08x CPSR:%08x",
+};
+
+void __hyp_text __noreturn __hyp_panic(int cause)
+{
+ u32 elr = read_special(ELR_hyp);
+ u32 val;
+
+ if (cause == ARM_EXCEPTION_DATA_ABORT)
+ val = read_sysreg(HDFAR);
+ else
+ val = read_special(SPSR);
+
+ if (read_sysreg(VTTBR)) {
+ struct kvm_vcpu *vcpu;
+ struct kvm_cpu_context *host_ctxt;
+
+ vcpu = (struct kvm_vcpu *)read_sysreg(HTPIDR);
+ host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
+ __deactivate_traps(vcpu);
+ __deactivate_vm(vcpu);
+ __sysreg_restore_state(host_ctxt);
+ }
+
+ /* Call panic for real */
+ __hyp_do_panic(__hyp_panic_string[cause], elr, val);
+
+ unreachable();
+}
--
2.1.4
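[Editor's note] `__hyp_panic()` works by indexing a format-string table with the exception cause and handing the saved ELR_hyp plus either SPSR or HDFAR to the formatter. The sketch below reproduces that table-driven reporting in plain C; the exception numbers and names here are made up for the demo (the real constants come from asm/kvm_asm.h):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Sketch of the __hyp_panic() reporting path: the exit cause selects a
 * format string, and the two saved registers are its arguments.  The
 * DEMO_* values are invented for this example.
 */
enum { DEMO_EXCEPTION_UNDEFINED = 1, DEMO_EXCEPTION_DATA_ABORT = 4 };

static const char *const demo_panic_string[] = {
	[DEMO_EXCEPTION_UNDEFINED]  = "HYP panic: UNDEF PC:%08x CPSR:%08x",
	[DEMO_EXCEPTION_DATA_ABORT] = "HYP panic: DABRT PC:%08x ADDR:%08x",
};

/* Format the panic message a given cause would produce. */
static void format_panic(char *buf, size_t len, int cause,
			 unsigned int elr, unsigned int val)
{
	snprintf(buf, len, demo_panic_string[cause], elr, val);
}
```

The design point mirrored here is that the second printed value is cause-dependent: a data abort reports the faulting address, everything else reports the saved CPSR.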
* [PATCH v2 20/28] ARM: KVM: Change kvm_call_hyp return type to unsigned long
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-04 11:00 ` Marc Zyngier
-1 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
Having u64 as the kvm_call_hyp return type is problematic, as
it forces all kinds of tricks for the return value from HYP
to be promoted to 64bit (LE has the significant bits in r0, while
BE has them in r1).
Since the only user of the return value is perfectly happy with
a 32bit value, let's make kvm_call_hyp return an unsigned long,
which is 32bit on ARM.
This solves yet another headache.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/include/asm/kvm_host.h | 2 +-
arch/arm/kvm/interrupts.S | 10 ++--------
2 files changed, 3 insertions(+), 9 deletions(-)
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 02932ba..c62d717 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -165,7 +165,7 @@ unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
-u64 kvm_call_hyp(void *hypfn, ...);
+unsigned long kvm_call_hyp(void *hypfn, ...);
void force_vm_exit(const cpumask_t *mask);
#define KVM_ARCH_WANT_MMU_NOTIFIER
diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
index 7bfb289..01eb169 100644
--- a/arch/arm/kvm/interrupts.S
+++ b/arch/arm/kvm/interrupts.S
@@ -207,20 +207,14 @@ after_vfp_restore:
restore_host_regs
clrex @ Clear exclusive monitor
-#ifndef CONFIG_CPU_ENDIAN_BE8
mov r0, r1 @ Return the return code
- mov r1, #0 @ Clear upper bits in return value
-#else
- @ r1 already has return code
- mov r0, #0 @ Clear upper bits in return value
-#endif /* CONFIG_CPU_ENDIAN_BE8 */
bx lr @ return to IOCTL
/********************************************************************
* Call function in Hyp mode
*
*
- * u64 kvm_call_hyp(void *hypfn, ...);
+ * unsigned long kvm_call_hyp(void *hypfn, ...);
*
* This is not really a variadic function in the classic C-way and care must
* be taken when calling this to ensure parameters are passed in registers
@@ -231,7 +225,7 @@ after_vfp_restore:
* passed as r0, r1, and r2 (a maximum of 3 arguments in addition to the
* function pointer can be passed). The function being called must be mapped
* in Hyp mode (see init_hyp_mode in arch/arm/kvm/arm.c). Return values are
- * passed in r0 and r1.
+ * passed in r0 (strictly 32bit).
*
* A function pointer with a value of 0xffffffff has a special meaning,
* and is used to implement __hyp_get_vectors in the same way as in
--
2.1.4
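[Editor's note] The awkwardness being removed here is that a 64-bit value returned from a 32-bit ARM function straddles the r0/r1 register pair, and which register carries the meaningful low bits depends on endianness. Since the only consumer wants 32 bits, truncation loses nothing. A trivial hedged illustration (helper name is ours):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Why shrinking kvm_call_hyp()'s return type is safe: the caller only
 * ever consumes the low 32 bits, so a plain truncation preserves the
 * value it cares about.  Illustrative only.
 */
static uint32_t meaningful_bits(uint64_t ret)
{
	return (uint32_t)ret;	/* the part that fits in a single register */
}
```

With a 32-bit (`unsigned long` on ARM) return type there is exactly one return register and no endianness-dependent fixup is needed.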
* [PATCH v2 20/28] ARM: KVM: Change kvm_call_hyp return type to unsigned long
@ 2016-02-04 11:00 ` Marc Zyngier
0 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: linux-arm-kernel
Having u64 as the kvm_call_hyp return type is problematic, as
it forces all kind of tricks for the return values from HYP
to be promoted to 64bit (LE has the LSB in r0, and BE has them
in r1).
Since the only user of the return value is perfectly happy with
a 32bit value, let's make kvm_call_hyp return an unsigned long,
which is 32bit on ARM.
This solves yet another headache.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/include/asm/kvm_host.h | 2 +-
arch/arm/kvm/interrupts.S | 10 ++--------
2 files changed, 3 insertions(+), 9 deletions(-)
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 02932ba..c62d717 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -165,7 +165,7 @@ unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
-u64 kvm_call_hyp(void *hypfn, ...);
+unsigned long kvm_call_hyp(void *hypfn, ...);
void force_vm_exit(const cpumask_t *mask);
#define KVM_ARCH_WANT_MMU_NOTIFIER
diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
index 7bfb289..01eb169 100644
--- a/arch/arm/kvm/interrupts.S
+++ b/arch/arm/kvm/interrupts.S
@@ -207,20 +207,14 @@ after_vfp_restore:
restore_host_regs
clrex @ Clear exclusive monitor
-#ifndef CONFIG_CPU_ENDIAN_BE8
mov r0, r1 @ Return the return code
- mov r1, #0 @ Clear upper bits in return value
-#else
- @ r1 already has return code
- mov r0, #0 @ Clear upper bits in return value
-#endif /* CONFIG_CPU_ENDIAN_BE8 */
bx lr @ return to IOCTL
/********************************************************************
* Call function in Hyp mode
*
*
- * u64 kvm_call_hyp(void *hypfn, ...);
+ * unsigned long kvm_call_hyp(void *hypfn, ...);
*
* This is not really a variadic function in the classic C-way and care must
* be taken when calling this to ensure parameters are passed in registers
@@ -231,7 +225,7 @@ after_vfp_restore:
* passed as r0, r1, and r2 (a maximum of 3 arguments in addition to the
* function pointer can be passed). The function being called must be mapped
* in Hyp mode (see init_hyp_mode in arch/arm/kvm/arm.c). Return values are
- * passed in r0 and r1.
+ * passed in r0 (strictly 32bit).
*
* A function pointer with a value of 0xffffffff has a special meaning,
* and is used to implement __hyp_get_vectors in the same way as in
--
2.1.4
* [PATCH v2 21/28] ARM: KVM: Remove the old world switch
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-04 11:00 ` Marc Zyngier
0 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
As we now have a full reimplementation of the world switch, it is
time to kiss the old stuff goodbye. I'm not sure we'll miss it.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/interrupts.S | 469 +----------------------------
arch/arm/kvm/interrupts_head.S | 660 -----------------------------------------
2 files changed, 1 insertion(+), 1128 deletions(-)
delete mode 100644 arch/arm/kvm/interrupts_head.S
diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
index 01eb169..b1bd316 100644
--- a/arch/arm/kvm/interrupts.S
+++ b/arch/arm/kvm/interrupts.S
@@ -17,198 +17,8 @@
*/
#include <linux/linkage.h>
-#include <linux/const.h>
-#include <asm/unified.h>
-#include <asm/page.h>
-#include <asm/ptrace.h>
-#include <asm/asm-offsets.h>
-#include <asm/kvm_asm.h>
-#include <asm/kvm_arm.h>
-#include <asm/vfpmacros.h>
-#include "interrupts_head.S"
.text
- .pushsection .hyp.text, "ax"
-
-/********************************************************************
- * Flush per-VMID TLBs
- *
- * void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
- *
- * We rely on the hardware to broadcast the TLB invalidation to all CPUs
- * inside the inner-shareable domain (which is the case for all v7
- * implementations). If we come across a non-IS SMP implementation, we'll
- * have to use an IPI based mechanism. Until then, we stick to the simple
- * hardware assisted version.
- *
- * As v7 does not support flushing per IPA, just nuke the whole TLB
- * instead, ignoring the ipa value.
- */
-ENTRY(__kvm_tlb_flush_vmid_ipa)
- push {r2, r3}
-
- dsb ishst
- add r0, r0, #KVM_VTTBR
- ldrd r2, r3, [r0]
- mcrr p15, 6, rr_lo_hi(r2, r3), c2 @ Write VTTBR
- isb
- mcr p15, 0, r0, c8, c3, 0 @ TLBIALLIS (rt ignored)
- dsb ish
- isb
- mov r2, #0
- mov r3, #0
- mcrr p15, 6, r2, r3, c2 @ Back to VMID #0
- isb @ Not necessary if followed by eret
-
- pop {r2, r3}
- bx lr
-ENDPROC(__kvm_tlb_flush_vmid_ipa)
-
-/**
- * void __kvm_tlb_flush_vmid(struct kvm *kvm) - Flush per-VMID TLBs
- *
- * Reuses __kvm_tlb_flush_vmid_ipa() for ARMv7, without passing address
- * parameter
- */
-
-ENTRY(__kvm_tlb_flush_vmid)
- b __kvm_tlb_flush_vmid_ipa
-ENDPROC(__kvm_tlb_flush_vmid)
-
-/********************************************************************
- * Flush TLBs and instruction caches of all CPUs inside the inner-shareable
- * domain, for all VMIDs
- *
- * void __kvm_flush_vm_context(void);
- */
-ENTRY(__kvm_flush_vm_context)
- mov r0, #0 @ rn parameter for c15 flushes is SBZ
-
- /* Invalidate NS Non-Hyp TLB Inner Shareable (TLBIALLNSNHIS) */
- mcr p15, 4, r0, c8, c3, 4
- /* Invalidate instruction caches Inner Shareable (ICIALLUIS) */
- mcr p15, 0, r0, c7, c1, 0
- dsb ish
- isb @ Not necessary if followed by eret
-
- bx lr
-ENDPROC(__kvm_flush_vm_context)
-
-
-/********************************************************************
- * Hypervisor world-switch code
- *
- *
- * int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
- */
-ENTRY(__kvm_vcpu_run)
- @ Save the vcpu pointer
- mcr p15, 4, vcpu, c13, c0, 2 @ HTPIDR
-
- save_host_regs
-
- restore_vgic_state
- restore_timer_state
-
- @ Store hardware CP15 state and load guest state
- read_cp15_state store_to_vcpu = 0
- write_cp15_state read_from_vcpu = 1
-
- @ If the host kernel has not been configured with VFPv3 support,
- @ then it is safer if we deny guests from using it as well.
-#ifdef CONFIG_VFPv3
- @ Set FPEXC_EN so the guest doesn't trap floating point instructions
- VFPFMRX r2, FPEXC @ VMRS
- push {r2}
- orr r2, r2, #FPEXC_EN
- VFPFMXR FPEXC, r2 @ VMSR
-#endif
-
- @ Configure Hyp-role
- configure_hyp_role vmentry
-
- @ Trap coprocessor CRx accesses
- set_hstr vmentry
- set_hcptr vmentry, (HCPTR_TTA | HCPTR_TCP(10) | HCPTR_TCP(11))
- set_hdcr vmentry
-
- @ Write configured ID register into MIDR alias
- ldr r1, [vcpu, #VCPU_MIDR]
- mcr p15, 4, r1, c0, c0, 0
-
- @ Write guest view of MPIDR into VMPIDR
- ldr r1, [vcpu, #CP15_OFFSET(c0_MPIDR)]
- mcr p15, 4, r1, c0, c0, 5
-
- @ Set up guest memory translation
- ldr r1, [vcpu, #VCPU_KVM]
- add r1, r1, #KVM_VTTBR
- ldrd r2, r3, [r1]
- mcrr p15, 6, rr_lo_hi(r2, r3), c2 @ Write VTTBR
-
- @ We're all done, just restore the GPRs and go to the guest
- restore_guest_regs
- clrex @ Clear exclusive monitor
- eret
-
-__kvm_vcpu_return:
- /*
- * return convention:
- * guest r0, r1, r2 saved on the stack
- * r0: vcpu pointer
- * r1: exception code
- */
- save_guest_regs
-
- @ Set VMID == 0
- mov r2, #0
- mov r3, #0
- mcrr p15, 6, r2, r3, c2 @ Write VTTBR
-
- @ Don't trap coprocessor accesses for host kernel
- set_hstr vmexit
- set_hdcr vmexit
- set_hcptr vmexit, (HCPTR_TTA | HCPTR_TCP(10) | HCPTR_TCP(11)), after_vfp_restore
-
-#ifdef CONFIG_VFPv3
- @ Switch VFP/NEON hardware state to the host's
- add r7, vcpu, #(VCPU_GUEST_CTXT + CPU_CTXT_VFP)
- store_vfp_state r7
- add r7, vcpu, #VCPU_HOST_CTXT
- ldr r7, [r7]
- add r7, r7, #CPU_CTXT_VFP
- restore_vfp_state r7
-
-after_vfp_restore:
- @ Restore FPEXC_EN which we clobbered on entry
- pop {r2}
- VFPFMXR FPEXC, r2
-#else
-after_vfp_restore:
-#endif
-
- @ Reset Hyp-role
- configure_hyp_role vmexit
-
- @ Let host read hardware MIDR
- mrc p15, 0, r2, c0, c0, 0
- mcr p15, 4, r2, c0, c0, 0
-
- @ Back to hardware MPIDR
- mrc p15, 0, r2, c0, c0, 5
- mcr p15, 4, r2, c0, c0, 5
-
- @ Store guest CP15 state and restore host state
- read_cp15_state store_to_vcpu = 1
- write_cp15_state read_from_vcpu = 0
-
- save_timer_state
- save_vgic_state
-
- restore_host_regs
- clrex @ Clear exclusive monitor
- mov r0, r1 @ Return the return code
- bx lr @ return to IOCTL
/********************************************************************
* Call function in Hyp mode
@@ -239,281 +49,4 @@ after_vfp_restore:
ENTRY(kvm_call_hyp)
hvc #0
bx lr
-
-/********************************************************************
- * Hypervisor exception vector and handlers
- *
- *
- * The KVM/ARM Hypervisor ABI is defined as follows:
- *
- * Entry to Hyp mode from the host kernel will happen _only_ when an HVC
- * instruction is issued since all traps are disabled when running the host
- * kernel as per the Hyp-mode initialization at boot time.
- *
- * HVC instructions cause a trap to the vector page + offset 0x14 (see hyp_hvc
- * below) when the HVC instruction is called from SVC mode (i.e. a guest or the
- * host kernel) and they cause a trap to the vector page + offset 0x8 when HVC
- * instructions are called from within Hyp-mode.
- *
- * Hyp-ABI: Calling HYP-mode functions from host (in SVC mode):
- * Switching to Hyp mode is done through a simple HVC #0 instruction. The
- * exception vector code will check that the HVC comes from VMID==0 and if
- * so will push the necessary state (SPSR, lr_usr) on the Hyp stack.
- * - r0 contains a pointer to a HYP function
- * - r1, r2, and r3 contain arguments to the above function.
- * - The HYP function will be called with its arguments in r0, r1 and r2.
- * On HYP function return, we return directly to SVC.
- *
- * Note that the above is used to execute code in Hyp-mode from a host-kernel
- * point of view, and is a different concept from performing a world-switch and
- * executing guest code SVC mode (with a VMID != 0).
- */
-
-/* Handle undef, svc, pabt, or dabt by crashing with a user notice */
-.macro bad_exception exception_code, panic_str
- push {r0-r2}
- mrrc p15, 6, r0, r1, c2 @ Read VTTBR
- lsr r1, r1, #16
- ands r1, r1, #0xff
- beq 99f
-
- load_vcpu @ Load VCPU pointer
- .if \exception_code == ARM_EXCEPTION_DATA_ABORT
- mrc p15, 4, r2, c5, c2, 0 @ HSR
- mrc p15, 4, r1, c6, c0, 0 @ HDFAR
- str r2, [vcpu, #VCPU_HSR]
- str r1, [vcpu, #VCPU_HxFAR]
- .endif
- .if \exception_code == ARM_EXCEPTION_PREF_ABORT
- mrc p15, 4, r2, c5, c2, 0 @ HSR
- mrc p15, 4, r1, c6, c0, 2 @ HIFAR
- str r2, [vcpu, #VCPU_HSR]
- str r1, [vcpu, #VCPU_HxFAR]
- .endif
- mov r1, #\exception_code
- b __kvm_vcpu_return
-
- @ We were in the host already. Let's craft a panic-ing return to SVC.
-99: mrs r2, cpsr
- bic r2, r2, #MODE_MASK
- orr r2, r2, #SVC_MODE
-THUMB( orr r2, r2, #PSR_T_BIT )
- msr spsr_cxsf, r2
- mrs r1, ELR_hyp
- ldr r2, =panic
- msr ELR_hyp, r2
- ldr r0, =\panic_str
- clrex @ Clear exclusive monitor
- eret
-.endm
-
- .align 5
-__kvm_hyp_vector:
- .globl __kvm_hyp_vector
-
- @ Hyp-mode exception vector
- W(b) hyp_reset
- W(b) hyp_undef
- W(b) hyp_svc
- W(b) hyp_pabt
- W(b) hyp_dabt
- W(b) hyp_hvc
- W(b) hyp_irq
- W(b) hyp_fiq
-
- .align
-hyp_reset:
- b hyp_reset
-
- .align
-hyp_undef:
- bad_exception ARM_EXCEPTION_UNDEFINED, und_die_str
-
- .align
-hyp_svc:
- bad_exception ARM_EXCEPTION_HVC, svc_die_str
-
- .align
-hyp_pabt:
- bad_exception ARM_EXCEPTION_PREF_ABORT, pabt_die_str
-
- .align
-hyp_dabt:
- bad_exception ARM_EXCEPTION_DATA_ABORT, dabt_die_str
-
- .align
-hyp_hvc:
- /*
- * Getting here is either becuase of a trap from a guest or from calling
- * HVC from the host kernel, which means "switch to Hyp mode".
- */
- push {r0, r1, r2}
-
- @ Check syndrome register
- mrc p15, 4, r1, c5, c2, 0 @ HSR
- lsr r0, r1, #HSR_EC_SHIFT
- cmp r0, #HSR_EC_HVC
- bne guest_trap @ Not HVC instr.
-
- /*
- * Let's check if the HVC came from VMID 0 and allow simple
- * switch to Hyp mode
- */
- mrrc p15, 6, r0, r2, c2
- lsr r2, r2, #16
- and r2, r2, #0xff
- cmp r2, #0
- bne guest_trap @ Guest called HVC
-
- /*
- * Getting here means host called HVC, we shift parameters and branch
- * to Hyp function.
- */
- pop {r0, r1, r2}
-
- /* Check for __hyp_get_vectors */
- cmp r0, #-1
- mrceq p15, 4, r0, c12, c0, 0 @ get HVBAR
- beq 1f
-
- push {lr}
- mrs lr, SPSR
- push {lr}
-
- mov lr, r0
- mov r0, r1
- mov r1, r2
- mov r2, r3
-
-THUMB( orr lr, #1)
- blx lr @ Call the HYP function
-
- pop {lr}
- msr SPSR_csxf, lr
- pop {lr}
-1: eret
-
-guest_trap:
- load_vcpu @ Load VCPU pointer to r0
- str r1, [vcpu, #VCPU_HSR]
-
- @ Check if we need the fault information
- lsr r1, r1, #HSR_EC_SHIFT
-#ifdef CONFIG_VFPv3
- cmp r1, #HSR_EC_CP_0_13
- beq switch_to_guest_vfp
-#endif
- cmp r1, #HSR_EC_IABT
- mrceq p15, 4, r2, c6, c0, 2 @ HIFAR
- beq 2f
- cmp r1, #HSR_EC_DABT
- bne 1f
- mrc p15, 4, r2, c6, c0, 0 @ HDFAR
-
-2: str r2, [vcpu, #VCPU_HxFAR]
-
- /*
- * B3.13.5 Reporting exceptions taken to the Non-secure PL2 mode:
- *
- * Abort on the stage 2 translation for a memory access from a
- * Non-secure PL1 or PL0 mode:
- *
- * For any Access flag fault or Translation fault, and also for any
- * Permission fault on the stage 2 translation of a memory access
- * made as part of a translation table walk for a stage 1 translation,
- * the HPFAR holds the IPA that caused the fault. Otherwise, the HPFAR
- * is UNKNOWN.
- */
-
- /* Check for permission fault, and S1PTW */
- mrc p15, 4, r1, c5, c2, 0 @ HSR
- and r0, r1, #HSR_FSC_TYPE
- cmp r0, #FSC_PERM
- tsteq r1, #(1 << 7) @ S1PTW
- mrcne p15, 4, r2, c6, c0, 4 @ HPFAR
- bne 3f
-
- /* Preserve PAR */
- mrrc p15, 0, r0, r1, c7 @ PAR
- push {r0, r1}
-
- /* Resolve IPA using the xFAR */
- mcr p15, 0, r2, c7, c8, 0 @ ATS1CPR
- isb
- mrrc p15, 0, r0, r1, c7 @ PAR
- tst r0, #1
- bne 4f @ Failed translation
- ubfx r2, r0, #12, #20
- lsl r2, r2, #4
- orr r2, r2, r1, lsl #24
-
- /* Restore PAR */
- pop {r0, r1}
- mcrr p15, 0, r0, r1, c7 @ PAR
-
-3: load_vcpu @ Load VCPU pointer to r0
- str r2, [r0, #VCPU_HPFAR]
-
-1: mov r1, #ARM_EXCEPTION_HVC
- b __kvm_vcpu_return
-
-4: pop {r0, r1} @ Failed translation, return to guest
- mcrr p15, 0, r0, r1, c7 @ PAR
- clrex
- pop {r0, r1, r2}
- eret
-
-/*
- * If VFPv3 support is not available, then we will not switch the VFP
- * registers; however cp10 and cp11 accesses will still trap and fallback
- * to the regular coprocessor emulation code, which currently will
- * inject an undefined exception to the guest.
- */
-#ifdef CONFIG_VFPv3
-switch_to_guest_vfp:
- push {r3-r7}
-
- @ NEON/VFP used. Turn on VFP access.
- set_hcptr vmtrap, (HCPTR_TCP(10) | HCPTR_TCP(11))
-
- @ Switch VFP/NEON hardware state to the guest's
- add r7, r0, #VCPU_HOST_CTXT
- ldr r7, [r7]
- add r7, r7, #CPU_CTXT_VFP
- store_vfp_state r7
- add r7, r0, #(VCPU_GUEST_CTXT + CPU_CTXT_VFP)
- restore_vfp_state r7
-
- pop {r3-r7}
- pop {r0-r2}
- clrex
- eret
-#endif
-
- .align
-hyp_irq:
- push {r0, r1, r2}
- mov r1, #ARM_EXCEPTION_IRQ
- load_vcpu @ Load VCPU pointer to r0
- b __kvm_vcpu_return
-
- .align
-hyp_fiq:
- b hyp_fiq
-
- .ltorg
-
- .popsection
-
- .pushsection ".rodata"
-
-und_die_str:
- .ascii "unexpected undefined exception in Hyp mode at: %#08x\n"
-pabt_die_str:
- .ascii "unexpected prefetch abort in Hyp mode at: %#08x\n"
-dabt_die_str:
- .ascii "unexpected data abort in Hyp mode at: %#08x\n"
-svc_die_str:
- .ascii "unexpected HVC/SVC trap in Hyp mode at: %#08x\n"
-
- .popsection
+ENDPROC(kvm_call_hyp)
diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
deleted file mode 100644
index e0943cb8..0000000
--- a/arch/arm/kvm/interrupts_head.S
+++ /dev/null
@@ -1,660 +0,0 @@
-#include <linux/irqchip/arm-gic.h>
-#include <asm/assembler.h>
-
-/* Compat macro, until we get rid of this file entierely */
-#define VCPU_GP_REGS (VCPU_GUEST_CTXT + CPU_CTXT_GP_REGS)
-#define VCPU_USR_REGS (VCPU_GP_REGS + GP_REGS_USR)
-#define VCPU_SVC_REGS (VCPU_GP_REGS + GP_REGS_SVC)
-#define VCPU_ABT_REGS (VCPU_GP_REGS + GP_REGS_ABT)
-#define VCPU_UND_REGS (VCPU_GP_REGS + GP_REGS_UND)
-#define VCPU_IRQ_REGS (VCPU_GP_REGS + GP_REGS_IRQ)
-#define VCPU_FIQ_REGS (VCPU_GP_REGS + GP_REGS_FIQ)
-#define VCPU_PC (VCPU_GP_REGS + GP_REGS_PC)
-#define VCPU_CPSR (VCPU_GP_REGS + GP_REGS_CPSR)
-
-#define VCPU_USR_REG(_reg_nr) (VCPU_USR_REGS + (_reg_nr * 4))
-#define VCPU_USR_SP (VCPU_USR_REG(13))
-#define VCPU_USR_LR (VCPU_USR_REG(14))
-#define VCPU_CP15_BASE (VCPU_GUEST_CTXT + CPU_CTXT_CP15)
-#define CP15_OFFSET(_cp15_reg_idx) (VCPU_CP15_BASE + (_cp15_reg_idx * 4))
-
-/*
- * Many of these macros need to access the VCPU structure, which is always
- * held in r0. These macros should never clobber r1, as it is used to hold the
- * exception code on the return path (except of course the macro that switches
- * all the registers before the final jump to the VM).
- */
-vcpu .req r0 @ vcpu pointer always in r0
-
-/* Clobbers {r2-r6} */
-.macro store_vfp_state vfp_base
- @ The VFPFMRX and VFPFMXR macros are the VMRS and VMSR instructions
- VFPFMRX r2, FPEXC
- @ Make sure VFP is enabled so we can touch the registers.
- orr r6, r2, #FPEXC_EN
- VFPFMXR FPEXC, r6
-
- VFPFMRX r3, FPSCR
- tst r2, #FPEXC_EX @ Check for VFP Subarchitecture
- beq 1f
- @ If FPEXC_EX is 0, then FPINST/FPINST2 reads are upredictable, so
- @ we only need to save them if FPEXC_EX is set.
- VFPFMRX r4, FPINST
- tst r2, #FPEXC_FP2V
- VFPFMRX r5, FPINST2, ne @ vmrsne
- bic r6, r2, #FPEXC_EX @ FPEXC_EX disable
- VFPFMXR FPEXC, r6
-1:
- VFPFSTMIA \vfp_base, r6 @ Save VFP registers
- stm \vfp_base, {r2-r5} @ Save FPEXC, FPSCR, FPINST, FPINST2
-.endm
-
-/* Assume FPEXC_EN is on and FPEXC_EX is off, clobbers {r2-r6} */
-.macro restore_vfp_state vfp_base
- VFPFLDMIA \vfp_base, r6 @ Load VFP registers
- ldm \vfp_base, {r2-r5} @ Load FPEXC, FPSCR, FPINST, FPINST2
-
- VFPFMXR FPSCR, r3
- tst r2, #FPEXC_EX @ Check for VFP Subarchitecture
- beq 1f
- VFPFMXR FPINST, r4
- tst r2, #FPEXC_FP2V
- VFPFMXR FPINST2, r5, ne
-1:
- VFPFMXR FPEXC, r2 @ FPEXC (last, in case !EN)
-.endm
-
-/* These are simply for the macros to work - value don't have meaning */
-.equ usr, 0
-.equ svc, 1
-.equ abt, 2
-.equ und, 3
-.equ irq, 4
-.equ fiq, 5
-
-.macro push_host_regs_mode mode
- mrs r2, SP_\mode
- mrs r3, LR_\mode
- mrs r4, SPSR_\mode
- push {r2, r3, r4}
-.endm
-
-/*
- * Store all host persistent registers on the stack.
- * Clobbers all registers, in all modes, except r0 and r1.
- */
-.macro save_host_regs
- /* Hyp regs. Only ELR_hyp (SPSR_hyp already saved) */
- mrs r2, ELR_hyp
- push {r2}
-
- /* usr regs */
- push {r4-r12} @ r0-r3 are always clobbered
- mrs r2, SP_usr
- mov r3, lr
- push {r2, r3}
-
- push_host_regs_mode svc
- push_host_regs_mode abt
- push_host_regs_mode und
- push_host_regs_mode irq
-
- /* fiq regs */
- mrs r2, r8_fiq
- mrs r3, r9_fiq
- mrs r4, r10_fiq
- mrs r5, r11_fiq
- mrs r6, r12_fiq
- mrs r7, SP_fiq
- mrs r8, LR_fiq
- mrs r9, SPSR_fiq
- push {r2-r9}
-.endm
-
-.macro pop_host_regs_mode mode
- pop {r2, r3, r4}
- msr SP_\mode, r2
- msr LR_\mode, r3
- msr SPSR_\mode, r4
-.endm
-
-/*
- * Restore all host registers from the stack.
- * Clobbers all registers, in all modes, except r0 and r1.
- */
-.macro restore_host_regs
- pop {r2-r9}
- msr r8_fiq, r2
- msr r9_fiq, r3
- msr r10_fiq, r4
- msr r11_fiq, r5
- msr r12_fiq, r6
- msr SP_fiq, r7
- msr LR_fiq, r8
- msr SPSR_fiq, r9
-
- pop_host_regs_mode irq
- pop_host_regs_mode und
- pop_host_regs_mode abt
- pop_host_regs_mode svc
-
- pop {r2, r3}
- msr SP_usr, r2
- mov lr, r3
- pop {r4-r12}
-
- pop {r2}
- msr ELR_hyp, r2
-.endm
-
-/*
- * Restore SP, LR and SPSR for a given mode. offset is the offset of
- * this mode's registers from the VCPU base.
- *
- * Assumes vcpu pointer in vcpu reg
- *
- * Clobbers r1, r2, r3, r4.
- */
-.macro restore_guest_regs_mode mode, offset
- add r1, vcpu, \offset
- ldm r1, {r2, r3, r4}
- msr SP_\mode, r2
- msr LR_\mode, r3
- msr SPSR_\mode, r4
-.endm
-
-/*
- * Restore all guest registers from the vcpu struct.
- *
- * Assumes vcpu pointer in vcpu reg
- *
- * Clobbers *all* registers.
- */
-.macro restore_guest_regs
- restore_guest_regs_mode svc, #VCPU_SVC_REGS
- restore_guest_regs_mode abt, #VCPU_ABT_REGS
- restore_guest_regs_mode und, #VCPU_UND_REGS
- restore_guest_regs_mode irq, #VCPU_IRQ_REGS
-
- add r1, vcpu, #VCPU_FIQ_REGS
- ldm r1, {r2-r9}
- msr r8_fiq, r2
- msr r9_fiq, r3
- msr r10_fiq, r4
- msr r11_fiq, r5
- msr r12_fiq, r6
- msr SP_fiq, r7
- msr LR_fiq, r8
- msr SPSR_fiq, r9
-
- @ Load return state
- ldr r2, [vcpu, #VCPU_PC]
- ldr r3, [vcpu, #VCPU_CPSR]
- msr ELR_hyp, r2
- msr SPSR_cxsf, r3
-
- @ Load user registers
- ldr r2, [vcpu, #VCPU_USR_SP]
- ldr r3, [vcpu, #VCPU_USR_LR]
- msr SP_usr, r2
- mov lr, r3
- add vcpu, vcpu, #(VCPU_USR_REGS)
- ldm vcpu, {r0-r12}
-.endm
-
-/*
- * Save SP, LR and SPSR for a given mode. offset is the offset of
- * this mode's registers from the VCPU base.
- *
- * Assumes vcpu pointer in vcpu reg
- *
- * Clobbers r2, r3, r4, r5.
- */
-.macro save_guest_regs_mode mode, offset
- add r2, vcpu, \offset
- mrs r3, SP_\mode
- mrs r4, LR_\mode
- mrs r5, SPSR_\mode
- stm r2, {r3, r4, r5}
-.endm
-
-/*
- * Save all guest registers to the vcpu struct
- * Expects guest's r0, r1, r2 on the stack.
- *
- * Assumes vcpu pointer in vcpu reg
- *
- * Clobbers r2, r3, r4, r5.
- */
-.macro save_guest_regs
- @ Store usr registers
- add r2, vcpu, #VCPU_USR_REG(3)
- stm r2, {r3-r12}
- add r2, vcpu, #VCPU_USR_REG(0)
- pop {r3, r4, r5} @ r0, r1, r2
- stm r2, {r3, r4, r5}
- mrs r2, SP_usr
- mov r3, lr
- str r2, [vcpu, #VCPU_USR_SP]
- str r3, [vcpu, #VCPU_USR_LR]
-
- @ Store return state
- mrs r2, ELR_hyp
- mrs r3, spsr
- str r2, [vcpu, #VCPU_PC]
- str r3, [vcpu, #VCPU_CPSR]
-
- @ Store other guest registers
- save_guest_regs_mode svc, #VCPU_SVC_REGS
- save_guest_regs_mode abt, #VCPU_ABT_REGS
- save_guest_regs_mode und, #VCPU_UND_REGS
- save_guest_regs_mode irq, #VCPU_IRQ_REGS
-.endm
-
-/* Reads cp15 registers from hardware and stores them in memory
- * @store_to_vcpu: If 0, registers are written in-order to the stack,
- * otherwise to the VCPU struct pointed to by vcpup
- *
- * Assumes vcpu pointer in vcpu reg
- *
- * Clobbers r2 - r12
- */
-.macro read_cp15_state store_to_vcpu
- mrc p15, 0, r2, c1, c0, 0 @ SCTLR
- mrc p15, 0, r3, c1, c0, 2 @ CPACR
- mrc p15, 0, r4, c2, c0, 2 @ TTBCR
- mrc p15, 0, r5, c3, c0, 0 @ DACR
- mrrc p15, 0, r6, r7, c2 @ TTBR 0
- mrrc p15, 1, r8, r9, c2 @ TTBR 1
- mrc p15, 0, r10, c10, c2, 0 @ PRRR
- mrc p15, 0, r11, c10, c2, 1 @ NMRR
- mrc p15, 2, r12, c0, c0, 0 @ CSSELR
-
- .if \store_to_vcpu == 0
- push {r2-r12} @ Push CP15 registers
- .else
- str r2, [vcpu, #CP15_OFFSET(c1_SCTLR)]
- str r3, [vcpu, #CP15_OFFSET(c1_CPACR)]
- str r4, [vcpu, #CP15_OFFSET(c2_TTBCR)]
- str r5, [vcpu, #CP15_OFFSET(c3_DACR)]
- add r2, vcpu, #CP15_OFFSET(c2_TTBR0)
- strd r6, r7, [r2]
- add r2, vcpu, #CP15_OFFSET(c2_TTBR1)
- strd r8, r9, [r2]
- str r10, [vcpu, #CP15_OFFSET(c10_PRRR)]
- str r11, [vcpu, #CP15_OFFSET(c10_NMRR)]
- str r12, [vcpu, #CP15_OFFSET(c0_CSSELR)]
- .endif
-
- mrc p15, 0, r2, c13, c0, 1 @ CID
- mrc p15, 0, r3, c13, c0, 2 @ TID_URW
- mrc p15, 0, r4, c13, c0, 3 @ TID_URO
- mrc p15, 0, r5, c13, c0, 4 @ TID_PRIV
- mrc p15, 0, r6, c5, c0, 0 @ DFSR
- mrc p15, 0, r7, c5, c0, 1 @ IFSR
- mrc p15, 0, r8, c5, c1, 0 @ ADFSR
- mrc p15, 0, r9, c5, c1, 1 @ AIFSR
- mrc p15, 0, r10, c6, c0, 0 @ DFAR
- mrc p15, 0, r11, c6, c0, 2 @ IFAR
- mrc p15, 0, r12, c12, c0, 0 @ VBAR
-
- .if \store_to_vcpu == 0
- push {r2-r12} @ Push CP15 registers
- .else
- str r2, [vcpu, #CP15_OFFSET(c13_CID)]
- str r3, [vcpu, #CP15_OFFSET(c13_TID_URW)]
- str r4, [vcpu, #CP15_OFFSET(c13_TID_URO)]
- str r5, [vcpu, #CP15_OFFSET(c13_TID_PRIV)]
- str r6, [vcpu, #CP15_OFFSET(c5_DFSR)]
- str r7, [vcpu, #CP15_OFFSET(c5_IFSR)]
- str r8, [vcpu, #CP15_OFFSET(c5_ADFSR)]
- str r9, [vcpu, #CP15_OFFSET(c5_AIFSR)]
- str r10, [vcpu, #CP15_OFFSET(c6_DFAR)]
- str r11, [vcpu, #CP15_OFFSET(c6_IFAR)]
- str r12, [vcpu, #CP15_OFFSET(c12_VBAR)]
- .endif
-
- mrc p15, 0, r2, c14, c1, 0 @ CNTKCTL
- mrrc p15, 0, r4, r5, c7 @ PAR
- mrc p15, 0, r6, c10, c3, 0 @ AMAIR0
- mrc p15, 0, r7, c10, c3, 1 @ AMAIR1
-
- .if \store_to_vcpu == 0
- push {r2,r4-r7}
- .else
- str r2, [vcpu, #CP15_OFFSET(c14_CNTKCTL)]
- add r12, vcpu, #CP15_OFFSET(c7_PAR)
- strd r4, r5, [r12]
- str r6, [vcpu, #CP15_OFFSET(c10_AMAIR0)]
- str r7, [vcpu, #CP15_OFFSET(c10_AMAIR1)]
- .endif
-.endm
-
-/*
- * Reads cp15 registers from memory and writes them to hardware
- * @read_from_vcpu: If 0, registers are read in-order from the stack,
- * otherwise from the VCPU struct pointed to by vcpup
- *
- * Assumes vcpu pointer in vcpu reg
- */
-.macro write_cp15_state read_from_vcpu
- .if \read_from_vcpu == 0
- pop {r2,r4-r7}
- .else
- ldr r2, [vcpu, #CP15_OFFSET(c14_CNTKCTL)]
- add r12, vcpu, #CP15_OFFSET(c7_PAR)
- ldrd r4, r5, [r12]
- ldr r6, [vcpu, #CP15_OFFSET(c10_AMAIR0)]
- ldr r7, [vcpu, #CP15_OFFSET(c10_AMAIR1)]
- .endif
-
- mcr p15, 0, r2, c14, c1, 0 @ CNTKCTL
- mcrr p15, 0, r4, r5, c7 @ PAR
- mcr p15, 0, r6, c10, c3, 0 @ AMAIR0
- mcr p15, 0, r7, c10, c3, 1 @ AMAIR1
-
- .if \read_from_vcpu == 0
- pop {r2-r12}
- .else
- ldr r2, [vcpu, #CP15_OFFSET(c13_CID)]
- ldr r3, [vcpu, #CP15_OFFSET(c13_TID_URW)]
- ldr r4, [vcpu, #CP15_OFFSET(c13_TID_URO)]
- ldr r5, [vcpu, #CP15_OFFSET(c13_TID_PRIV)]
- ldr r6, [vcpu, #CP15_OFFSET(c5_DFSR)]
- ldr r7, [vcpu, #CP15_OFFSET(c5_IFSR)]
- ldr r8, [vcpu, #CP15_OFFSET(c5_ADFSR)]
- ldr r9, [vcpu, #CP15_OFFSET(c5_AIFSR)]
- ldr r10, [vcpu, #CP15_OFFSET(c6_DFAR)]
- ldr r11, [vcpu, #CP15_OFFSET(c6_IFAR)]
- ldr r12, [vcpu, #CP15_OFFSET(c12_VBAR)]
- .endif
-
- mcr p15, 0, r2, c13, c0, 1 @ CID
- mcr p15, 0, r3, c13, c0, 2 @ TID_URW
- mcr p15, 0, r4, c13, c0, 3 @ TID_URO
- mcr p15, 0, r5, c13, c0, 4 @ TID_PRIV
- mcr p15, 0, r6, c5, c0, 0 @ DFSR
- mcr p15, 0, r7, c5, c0, 1 @ IFSR
- mcr p15, 0, r8, c5, c1, 0 @ ADFSR
- mcr p15, 0, r9, c5, c1, 1 @ AIFSR
- mcr p15, 0, r10, c6, c0, 0 @ DFAR
- mcr p15, 0, r11, c6, c0, 2 @ IFAR
- mcr p15, 0, r12, c12, c0, 0 @ VBAR
-
- .if \read_from_vcpu == 0
- pop {r2-r12}
- .else
- ldr r2, [vcpu, #CP15_OFFSET(c1_SCTLR)]
- ldr r3, [vcpu, #CP15_OFFSET(c1_CPACR)]
- ldr r4, [vcpu, #CP15_OFFSET(c2_TTBCR)]
- ldr r5, [vcpu, #CP15_OFFSET(c3_DACR)]
- add r12, vcpu, #CP15_OFFSET(c2_TTBR0)
- ldrd r6, r7, [r12]
- add r12, vcpu, #CP15_OFFSET(c2_TTBR1)
- ldrd r8, r9, [r12]
- ldr r10, [vcpu, #CP15_OFFSET(c10_PRRR)]
- ldr r11, [vcpu, #CP15_OFFSET(c10_NMRR)]
- ldr r12, [vcpu, #CP15_OFFSET(c0_CSSELR)]
- .endif
-
- mcr p15, 0, r2, c1, c0, 0 @ SCTLR
- mcr p15, 0, r3, c1, c0, 2 @ CPACR
- mcr p15, 0, r4, c2, c0, 2 @ TTBCR
- mcr p15, 0, r5, c3, c0, 0 @ DACR
- mcrr p15, 0, r6, r7, c2 @ TTBR 0
- mcrr p15, 1, r8, r9, c2 @ TTBR 1
- mcr p15, 0, r10, c10, c2, 0 @ PRRR
- mcr p15, 0, r11, c10, c2, 1 @ NMRR
- mcr p15, 2, r12, c0, c0, 0 @ CSSELR
-.endm
-
-/*
- * Save the VGIC CPU state into memory
- *
- * Assumes vcpu pointer in vcpu reg
- */
-.macro save_vgic_state
- /* Get VGIC VCTRL base into r2 */
- ldr r2, [vcpu, #VCPU_KVM]
- ldr r2, [r2, #KVM_VGIC_VCTRL]
- cmp r2, #0
- beq 2f
-
- /* Compute the address of struct vgic_cpu */
- add r11, vcpu, #VCPU_VGIC_CPU
-
- /* Save all interesting registers */
- ldr r4, [r2, #GICH_VMCR]
- ldr r5, [r2, #GICH_MISR]
- ldr r6, [r2, #GICH_EISR0]
- ldr r7, [r2, #GICH_EISR1]
- ldr r8, [r2, #GICH_ELRSR0]
- ldr r9, [r2, #GICH_ELRSR1]
- ldr r10, [r2, #GICH_APR]
-ARM_BE8(rev r4, r4 )
-ARM_BE8(rev r5, r5 )
-ARM_BE8(rev r6, r6 )
-ARM_BE8(rev r7, r7 )
-ARM_BE8(rev r8, r8 )
-ARM_BE8(rev r9, r9 )
-ARM_BE8(rev r10, r10 )
-
- str r4, [r11, #VGIC_V2_CPU_VMCR]
- str r5, [r11, #VGIC_V2_CPU_MISR]
-#ifdef CONFIG_CPU_ENDIAN_BE8
- str r6, [r11, #(VGIC_V2_CPU_EISR + 4)]
- str r7, [r11, #VGIC_V2_CPU_EISR]
- str r8, [r11, #(VGIC_V2_CPU_ELRSR + 4)]
- str r9, [r11, #VGIC_V2_CPU_ELRSR]
-#else
- str r6, [r11, #VGIC_V2_CPU_EISR]
- str r7, [r11, #(VGIC_V2_CPU_EISR + 4)]
- str r8, [r11, #VGIC_V2_CPU_ELRSR]
- str r9, [r11, #(VGIC_V2_CPU_ELRSR + 4)]
-#endif
- str r10, [r11, #VGIC_V2_CPU_APR]
-
- /* Clear GICH_HCR */
- mov r5, #0
- str r5, [r2, #GICH_HCR]
-
- /* Save list registers */
- add r2, r2, #GICH_LR0
- add r3, r11, #VGIC_V2_CPU_LR
- ldr r4, [r11, #VGIC_CPU_NR_LR]
-1: ldr r6, [r2], #4
-ARM_BE8(rev r6, r6 )
- str r6, [r3], #4
- subs r4, r4, #1
- bne 1b
-2:
-.endm
-
-/*
- * Restore the VGIC CPU state from memory
- *
- * Assumes vcpu pointer in vcpu reg
- */
-.macro restore_vgic_state
- /* Get VGIC VCTRL base into r2 */
- ldr r2, [vcpu, #VCPU_KVM]
- ldr r2, [r2, #KVM_VGIC_VCTRL]
- cmp r2, #0
- beq 2f
-
- /* Compute the address of struct vgic_cpu */
- add r11, vcpu, #VCPU_VGIC_CPU
-
- /* We only restore a minimal set of registers */
- ldr r3, [r11, #VGIC_V2_CPU_HCR]
- ldr r4, [r11, #VGIC_V2_CPU_VMCR]
- ldr r8, [r11, #VGIC_V2_CPU_APR]
-ARM_BE8(rev r3, r3 )
-ARM_BE8(rev r4, r4 )
-ARM_BE8(rev r8, r8 )
-
- str r3, [r2, #GICH_HCR]
- str r4, [r2, #GICH_VMCR]
- str r8, [r2, #GICH_APR]
-
- /* Restore list registers */
- add r2, r2, #GICH_LR0
- add r3, r11, #VGIC_V2_CPU_LR
- ldr r4, [r11, #VGIC_CPU_NR_LR]
-1: ldr r6, [r3], #4
-ARM_BE8(rev r6, r6 )
- str r6, [r2], #4
- subs r4, r4, #1
- bne 1b
-2:
-.endm
-
-#define CNTHCTL_PL1PCTEN (1 << 0)
-#define CNTHCTL_PL1PCEN (1 << 1)
-
-/*
- * Save the timer state onto the VCPU and allow physical timer/counter access
- * for the host.
- *
- * Assumes vcpu pointer in vcpu reg
- * Clobbers r2-r5
- */
-.macro save_timer_state
- ldr r4, [vcpu, #VCPU_KVM]
- ldr r2, [r4, #KVM_TIMER_ENABLED]
- cmp r2, #0
- beq 1f
-
- mrc p15, 0, r2, c14, c3, 1 @ CNTV_CTL
- str r2, [vcpu, #VCPU_TIMER_CNTV_CTL]
-
- isb
-
- mrrc p15, 3, rr_lo_hi(r2, r3), c14 @ CNTV_CVAL
- ldr r4, =VCPU_TIMER_CNTV_CVAL
- add r5, vcpu, r4
- strd r2, r3, [r5]
-
- @ Ensure host CNTVCT == CNTPCT
- mov r2, #0
- mcrr p15, 4, r2, r2, c14 @ CNTVOFF
-
-1:
- mov r2, #0 @ Clear ENABLE
- mcr p15, 0, r2, c14, c3, 1 @ CNTV_CTL
-
- @ Allow physical timer/counter access for the host
- mrc p15, 4, r2, c14, c1, 0 @ CNTHCTL
- orr r2, r2, #(CNTHCTL_PL1PCEN | CNTHCTL_PL1PCTEN)
- mcr p15, 4, r2, c14, c1, 0 @ CNTHCTL
-.endm
-
-/*
- * Load the timer state from the VCPU and deny physical timer/counter access
- * for the host.
- *
- * Assumes vcpu pointer in vcpu reg
- * Clobbers r2-r5
- */
-.macro restore_timer_state
- @ Disallow physical timer access for the guest
- @ Physical counter access is allowed
- mrc p15, 4, r2, c14, c1, 0 @ CNTHCTL
- orr r2, r2, #CNTHCTL_PL1PCTEN
- bic r2, r2, #CNTHCTL_PL1PCEN
- mcr p15, 4, r2, c14, c1, 0 @ CNTHCTL
-
- ldr r4, [vcpu, #VCPU_KVM]
- ldr r2, [r4, #KVM_TIMER_ENABLED]
- cmp r2, #0
- beq 1f
-
- ldr r2, [r4, #KVM_TIMER_CNTVOFF]
- ldr r3, [r4, #(KVM_TIMER_CNTVOFF + 4)]
- mcrr p15, 4, rr_lo_hi(r2, r3), c14 @ CNTVOFF
-
- ldr r4, =VCPU_TIMER_CNTV_CVAL
- add r5, vcpu, r4
- ldrd r2, r3, [r5]
- mcrr p15, 3, rr_lo_hi(r2, r3), c14 @ CNTV_CVAL
- isb
-
- ldr r2, [vcpu, #VCPU_TIMER_CNTV_CTL]
- and r2, r2, #3
- mcr p15, 0, r2, c14, c3, 1 @ CNTV_CTL
-1:
-.endm
-
-.equ vmentry, 0
-.equ vmexit, 1
-
-/* Configures the HSTR (Hyp System Trap Register) on entry/return
- * (hardware reset value is 0) */
-.macro set_hstr operation
- mrc p15, 4, r2, c1, c1, 3
- ldr r3, =HSTR_T(15)
- .if \operation == vmentry
- orr r2, r2, r3 @ Trap CR{15}
- .else
- bic r2, r2, r3 @ Don't trap any CRx accesses
- .endif
- mcr p15, 4, r2, c1, c1, 3
-.endm
-
-/* Configures the HCPTR (Hyp Coprocessor Trap Register) on entry/return
- * (hardware reset value is 0). Keep previous value in r2.
- * An ISB is emited on vmexit/vmtrap, but executed on vmexit only if
- * VFP wasn't already enabled (always executed on vmtrap).
- * If a label is specified with vmexit, it is branched to if VFP wasn't
- * enabled.
- */
-.macro set_hcptr operation, mask, label = none
- mrc p15, 4, r2, c1, c1, 2
- ldr r3, =\mask
- .if \operation == vmentry
- orr r3, r2, r3 @ Trap coproc-accesses defined in mask
- .else
- bic r3, r2, r3 @ Don't trap defined coproc-accesses
- .endif
- mcr p15, 4, r3, c1, c1, 2
- .if \operation != vmentry
- .if \operation == vmexit
- tst r2, #(HCPTR_TCP(10) | HCPTR_TCP(11))
- beq 1f
- .endif
- isb
- .if \label != none
- b \label
- .endif
-1:
- .endif
-.endm
-
-/* Configures the HDCR (Hyp Debug Configuration Register) on entry/return
- * (hardware reset value is 0) */
-.macro set_hdcr operation
- mrc p15, 4, r2, c1, c1, 1
- ldr r3, =(HDCR_TPM|HDCR_TPMCR)
- .if \operation == vmentry
- orr r2, r2, r3 @ Trap some perfmon accesses
- .else
- bic r2, r2, r3 @ Don't trap any perfmon accesses
- .endif
- mcr p15, 4, r2, c1, c1, 1
-.endm
-
-/* Enable/Disable: stage-2 trans., trap interrupts, trap wfi, trap smc */
-.macro configure_hyp_role operation
- .if \operation == vmentry
- ldr r2, [vcpu, #VCPU_HCR]
- ldr r3, [vcpu, #VCPU_IRQ_LINES]
- orr r2, r2, r3
- .else
- mov r2, #0
- .endif
- mcr p15, 4, r2, c1, c1, 0 @ HCR
-.endm
-
-.macro load_vcpu
- mrc p15, 4, vcpu, c13, c0, 2 @ HTPIDR
-.endm
--
2.1.4
- mcr p15, 4, r0, c8, c3, 4
- /* Invalidate instruction caches Inner Shareable (ICIALLUIS) */
- mcr p15, 0, r0, c7, c1, 0
- dsb ish
- isb @ Not necessary if followed by eret
-
- bx lr
-ENDPROC(__kvm_flush_vm_context)
-
-
-/********************************************************************
- * Hypervisor world-switch code
- *
- *
- * int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
- */
-ENTRY(__kvm_vcpu_run)
- @ Save the vcpu pointer
- mcr p15, 4, vcpu, c13, c0, 2 @ HTPIDR
-
- save_host_regs
-
- restore_vgic_state
- restore_timer_state
-
- @ Store hardware CP15 state and load guest state
- read_cp15_state store_to_vcpu = 0
- write_cp15_state read_from_vcpu = 1
-
- @ If the host kernel has not been configured with VFPv3 support,
- @ then it is safer if we deny guests from using it as well.
-#ifdef CONFIG_VFPv3
- @ Set FPEXC_EN so the guest doesn't trap floating point instructions
- VFPFMRX r2, FPEXC @ VMRS
- push {r2}
- orr r2, r2, #FPEXC_EN
- VFPFMXR FPEXC, r2 @ VMSR
-#endif
-
- @ Configure Hyp-role
- configure_hyp_role vmentry
-
- @ Trap coprocessor CRx accesses
- set_hstr vmentry
- set_hcptr vmentry, (HCPTR_TTA | HCPTR_TCP(10) | HCPTR_TCP(11))
- set_hdcr vmentry
-
- @ Write configured ID register into MIDR alias
- ldr r1, [vcpu, #VCPU_MIDR]
- mcr p15, 4, r1, c0, c0, 0
-
- @ Write guest view of MPIDR into VMPIDR
- ldr r1, [vcpu, #CP15_OFFSET(c0_MPIDR)]
- mcr p15, 4, r1, c0, c0, 5
-
- @ Set up guest memory translation
- ldr r1, [vcpu, #VCPU_KVM]
- add r1, r1, #KVM_VTTBR
- ldrd r2, r3, [r1]
- mcrr p15, 6, rr_lo_hi(r2, r3), c2 @ Write VTTBR
-
- @ We're all done, just restore the GPRs and go to the guest
- restore_guest_regs
- clrex @ Clear exclusive monitor
- eret
-
-__kvm_vcpu_return:
- /*
- * return convention:
- * guest r0, r1, r2 saved on the stack
- * r0: vcpu pointer
- * r1: exception code
- */
- save_guest_regs
-
- @ Set VMID == 0
- mov r2, #0
- mov r3, #0
- mcrr p15, 6, r2, r3, c2 @ Write VTTBR
-
- @ Don't trap coprocessor accesses for host kernel
- set_hstr vmexit
- set_hdcr vmexit
- set_hcptr vmexit, (HCPTR_TTA | HCPTR_TCP(10) | HCPTR_TCP(11)), after_vfp_restore
-
-#ifdef CONFIG_VFPv3
- @ Switch VFP/NEON hardware state to the host's
- add r7, vcpu, #(VCPU_GUEST_CTXT + CPU_CTXT_VFP)
- store_vfp_state r7
- add r7, vcpu, #VCPU_HOST_CTXT
- ldr r7, [r7]
- add r7, r7, #CPU_CTXT_VFP
- restore_vfp_state r7
-
-after_vfp_restore:
- @ Restore FPEXC_EN which we clobbered on entry
- pop {r2}
- VFPFMXR FPEXC, r2
-#else
-after_vfp_restore:
-#endif
-
- @ Reset Hyp-role
- configure_hyp_role vmexit
-
- @ Let host read hardware MIDR
- mrc p15, 0, r2, c0, c0, 0
- mcr p15, 4, r2, c0, c0, 0
-
- @ Back to hardware MPIDR
- mrc p15, 0, r2, c0, c0, 5
- mcr p15, 4, r2, c0, c0, 5
-
- @ Store guest CP15 state and restore host state
- read_cp15_state store_to_vcpu = 1
- write_cp15_state read_from_vcpu = 0
-
- save_timer_state
- save_vgic_state
-
- restore_host_regs
- clrex @ Clear exclusive monitor
- mov r0, r1 @ Return the return code
- bx lr @ return to IOCTL
/********************************************************************
* Call function in Hyp mode
@@ -239,281 +49,4 @@ after_vfp_restore:
ENTRY(kvm_call_hyp)
hvc #0
bx lr
-
-/********************************************************************
- * Hypervisor exception vector and handlers
- *
- *
- * The KVM/ARM Hypervisor ABI is defined as follows:
- *
- * Entry to Hyp mode from the host kernel will happen _only_ when an HVC
- * instruction is issued since all traps are disabled when running the host
- * kernel as per the Hyp-mode initialization at boot time.
- *
- * HVC instructions cause a trap to the vector page + offset 0x14 (see hyp_hvc
- * below) when the HVC instruction is called from SVC mode (i.e. a guest or the
- * host kernel) and they cause a trap to the vector page + offset 0x8 when HVC
- * instructions are called from within Hyp-mode.
- *
- * Hyp-ABI: Calling HYP-mode functions from host (in SVC mode):
- * Switching to Hyp mode is done through a simple HVC #0 instruction. The
- * exception vector code will check that the HVC comes from VMID==0 and if
- * so will push the necessary state (SPSR, lr_usr) on the Hyp stack.
- * - r0 contains a pointer to a HYP function
- * - r1, r2, and r3 contain arguments to the above function.
- * - The HYP function will be called with its arguments in r0, r1 and r2.
- * On HYP function return, we return directly to SVC.
- *
- * Note that the above is used to execute code in Hyp-mode from a host-kernel
- * point of view, and is a different concept from performing a world-switch and
- * executing guest code SVC mode (with a VMID != 0).
- */
-
-/* Handle undef, svc, pabt, or dabt by crashing with a user notice */
-.macro bad_exception exception_code, panic_str
- push {r0-r2}
- mrrc p15, 6, r0, r1, c2 @ Read VTTBR
- lsr r1, r1, #16
- ands r1, r1, #0xff
- beq 99f
-
- load_vcpu @ Load VCPU pointer
- .if \exception_code == ARM_EXCEPTION_DATA_ABORT
- mrc p15, 4, r2, c5, c2, 0 @ HSR
- mrc p15, 4, r1, c6, c0, 0 @ HDFAR
- str r2, [vcpu, #VCPU_HSR]
- str r1, [vcpu, #VCPU_HxFAR]
- .endif
- .if \exception_code == ARM_EXCEPTION_PREF_ABORT
- mrc p15, 4, r2, c5, c2, 0 @ HSR
- mrc p15, 4, r1, c6, c0, 2 @ HIFAR
- str r2, [vcpu, #VCPU_HSR]
- str r1, [vcpu, #VCPU_HxFAR]
- .endif
- mov r1, #\exception_code
- b __kvm_vcpu_return
-
- @ We were in the host already. Let's craft a panic-ing return to SVC.
-99: mrs r2, cpsr
- bic r2, r2, #MODE_MASK
- orr r2, r2, #SVC_MODE
-THUMB( orr r2, r2, #PSR_T_BIT )
- msr spsr_cxsf, r2
- mrs r1, ELR_hyp
- ldr r2, =panic
- msr ELR_hyp, r2
- ldr r0, =\panic_str
- clrex @ Clear exclusive monitor
- eret
-.endm
-
- .align 5
-__kvm_hyp_vector:
- .globl __kvm_hyp_vector
-
- @ Hyp-mode exception vector
- W(b) hyp_reset
- W(b) hyp_undef
- W(b) hyp_svc
- W(b) hyp_pabt
- W(b) hyp_dabt
- W(b) hyp_hvc
- W(b) hyp_irq
- W(b) hyp_fiq
-
- .align
-hyp_reset:
- b hyp_reset
-
- .align
-hyp_undef:
- bad_exception ARM_EXCEPTION_UNDEFINED, und_die_str
-
- .align
-hyp_svc:
- bad_exception ARM_EXCEPTION_HVC, svc_die_str
-
- .align
-hyp_pabt:
- bad_exception ARM_EXCEPTION_PREF_ABORT, pabt_die_str
-
- .align
-hyp_dabt:
- bad_exception ARM_EXCEPTION_DATA_ABORT, dabt_die_str
-
- .align
-hyp_hvc:
- /*
- * Getting here is either because of a trap from a guest or from calling
- * HVC from the host kernel, which means "switch to Hyp mode".
- */
- push {r0, r1, r2}
-
- @ Check syndrome register
- mrc p15, 4, r1, c5, c2, 0 @ HSR
- lsr r0, r1, #HSR_EC_SHIFT
- cmp r0, #HSR_EC_HVC
- bne guest_trap @ Not HVC instr.
-
- /*
- * Let's check if the HVC came from VMID 0 and allow simple
- * switch to Hyp mode
- */
- mrrc p15, 6, r0, r2, c2
- lsr r2, r2, #16
- and r2, r2, #0xff
- cmp r2, #0
- bne guest_trap @ Guest called HVC
-
- /*
- * Getting here means host called HVC, we shift parameters and branch
- * to Hyp function.
- */
- pop {r0, r1, r2}
-
- /* Check for __hyp_get_vectors */
- cmp r0, #-1
- mrceq p15, 4, r0, c12, c0, 0 @ get HVBAR
- beq 1f
-
- push {lr}
- mrs lr, SPSR
- push {lr}
-
- mov lr, r0
- mov r0, r1
- mov r1, r2
- mov r2, r3
-
-THUMB( orr lr, #1)
- blx lr @ Call the HYP function
-
- pop {lr}
- msr SPSR_csxf, lr
- pop {lr}
-1: eret
-
-guest_trap:
- load_vcpu @ Load VCPU pointer to r0
- str r1, [vcpu, #VCPU_HSR]
-
- @ Check if we need the fault information
- lsr r1, r1, #HSR_EC_SHIFT
-#ifdef CONFIG_VFPv3
- cmp r1, #HSR_EC_CP_0_13
- beq switch_to_guest_vfp
-#endif
- cmp r1, #HSR_EC_IABT
- mrceq p15, 4, r2, c6, c0, 2 @ HIFAR
- beq 2f
- cmp r1, #HSR_EC_DABT
- bne 1f
- mrc p15, 4, r2, c6, c0, 0 @ HDFAR
-
-2: str r2, [vcpu, #VCPU_HxFAR]
-
- /*
- * B3.13.5 Reporting exceptions taken to the Non-secure PL2 mode:
- *
- * Abort on the stage 2 translation for a memory access from a
- * Non-secure PL1 or PL0 mode:
- *
- * For any Access flag fault or Translation fault, and also for any
- * Permission fault on the stage 2 translation of a memory access
- * made as part of a translation table walk for a stage 1 translation,
- * the HPFAR holds the IPA that caused the fault. Otherwise, the HPFAR
- * is UNKNOWN.
- */
-
- /* Check for permission fault, and S1PTW */
- mrc p15, 4, r1, c5, c2, 0 @ HSR
- and r0, r1, #HSR_FSC_TYPE
- cmp r0, #FSC_PERM
- tsteq r1, #(1 << 7) @ S1PTW
- mrcne p15, 4, r2, c6, c0, 4 @ HPFAR
- bne 3f
-
- /* Preserve PAR */
- mrrc p15, 0, r0, r1, c7 @ PAR
- push {r0, r1}
-
- /* Resolve IPA using the xFAR */
- mcr p15, 0, r2, c7, c8, 0 @ ATS1CPR
- isb
- mrrc p15, 0, r0, r1, c7 @ PAR
- tst r0, #1
- bne 4f @ Failed translation
- ubfx r2, r0, #12, #20
- lsl r2, r2, #4
- orr r2, r2, r1, lsl #24
-
- /* Restore PAR */
- pop {r0, r1}
- mcrr p15, 0, r0, r1, c7 @ PAR
-
-3: load_vcpu @ Load VCPU pointer to r0
- str r2, [r0, #VCPU_HPFAR]
-
-1: mov r1, #ARM_EXCEPTION_HVC
- b __kvm_vcpu_return
-
-4: pop {r0, r1} @ Failed translation, return to guest
- mcrr p15, 0, r0, r1, c7 @ PAR
- clrex
- pop {r0, r1, r2}
- eret
-
-/*
- * If VFPv3 support is not available, then we will not switch the VFP
- * registers; however cp10 and cp11 accesses will still trap and fallback
- * to the regular coprocessor emulation code, which currently will
- * inject an undefined exception to the guest.
- */
-#ifdef CONFIG_VFPv3
-switch_to_guest_vfp:
- push {r3-r7}
-
- @ NEON/VFP used. Turn on VFP access.
- set_hcptr vmtrap, (HCPTR_TCP(10) | HCPTR_TCP(11))
-
- @ Switch VFP/NEON hardware state to the guest's
- add r7, r0, #VCPU_HOST_CTXT
- ldr r7, [r7]
- add r7, r7, #CPU_CTXT_VFP
- store_vfp_state r7
- add r7, r0, #(VCPU_GUEST_CTXT + CPU_CTXT_VFP)
- restore_vfp_state r7
-
- pop {r3-r7}
- pop {r0-r2}
- clrex
- eret
-#endif
-
- .align
-hyp_irq:
- push {r0, r1, r2}
- mov r1, #ARM_EXCEPTION_IRQ
- load_vcpu @ Load VCPU pointer to r0
- b __kvm_vcpu_return
-
- .align
-hyp_fiq:
- b hyp_fiq
-
- .ltorg
-
- .popsection
-
- .pushsection ".rodata"
-
-und_die_str:
- .ascii "unexpected undefined exception in Hyp mode at: %#08x\n"
-pabt_die_str:
- .ascii "unexpected prefetch abort in Hyp mode at: %#08x\n"
-dabt_die_str:
- .ascii "unexpected data abort in Hyp mode at: %#08x\n"
-svc_die_str:
- .ascii "unexpected HVC/SVC trap in Hyp mode at: %#08x\n"
-
- .popsection
+ENDPROC(kvm_call_hyp)
diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
deleted file mode 100644
index e0943cb8..0000000
--- a/arch/arm/kvm/interrupts_head.S
+++ /dev/null
@@ -1,660 +0,0 @@
-#include <linux/irqchip/arm-gic.h>
-#include <asm/assembler.h>
-
-/* Compat macro, until we get rid of this file entirely */
-#define VCPU_GP_REGS (VCPU_GUEST_CTXT + CPU_CTXT_GP_REGS)
-#define VCPU_USR_REGS (VCPU_GP_REGS + GP_REGS_USR)
-#define VCPU_SVC_REGS (VCPU_GP_REGS + GP_REGS_SVC)
-#define VCPU_ABT_REGS (VCPU_GP_REGS + GP_REGS_ABT)
-#define VCPU_UND_REGS (VCPU_GP_REGS + GP_REGS_UND)
-#define VCPU_IRQ_REGS (VCPU_GP_REGS + GP_REGS_IRQ)
-#define VCPU_FIQ_REGS (VCPU_GP_REGS + GP_REGS_FIQ)
-#define VCPU_PC (VCPU_GP_REGS + GP_REGS_PC)
-#define VCPU_CPSR (VCPU_GP_REGS + GP_REGS_CPSR)
-
-#define VCPU_USR_REG(_reg_nr) (VCPU_USR_REGS + (_reg_nr * 4))
-#define VCPU_USR_SP (VCPU_USR_REG(13))
-#define VCPU_USR_LR (VCPU_USR_REG(14))
-#define VCPU_CP15_BASE (VCPU_GUEST_CTXT + CPU_CTXT_CP15)
-#define CP15_OFFSET(_cp15_reg_idx) (VCPU_CP15_BASE + (_cp15_reg_idx * 4))
-
-/*
- * Many of these macros need to access the VCPU structure, which is always
- * held in r0. These macros should never clobber r1, as it is used to hold the
- * exception code on the return path (except of course the macro that switches
- * all the registers before the final jump to the VM).
- */
-vcpu .req r0 @ vcpu pointer always in r0
-
-/* Clobbers {r2-r6} */
-.macro store_vfp_state vfp_base
- @ The VFPFMRX and VFPFMXR macros are the VMRS and VMSR instructions
- VFPFMRX r2, FPEXC
- @ Make sure VFP is enabled so we can touch the registers.
- orr r6, r2, #FPEXC_EN
- VFPFMXR FPEXC, r6
-
- VFPFMRX r3, FPSCR
- tst r2, #FPEXC_EX @ Check for VFP Subarchitecture
- beq 1f
- @ If FPEXC_EX is 0, then FPINST/FPINST2 reads are unpredictable, so
- @ we only need to save them if FPEXC_EX is set.
- VFPFMRX r4, FPINST
- tst r2, #FPEXC_FP2V
- VFPFMRX r5, FPINST2, ne @ vmrsne
- bic r6, r2, #FPEXC_EX @ FPEXC_EX disable
- VFPFMXR FPEXC, r6
-1:
- VFPFSTMIA \vfp_base, r6 @ Save VFP registers
- stm \vfp_base, {r2-r5} @ Save FPEXC, FPSCR, FPINST, FPINST2
-.endm
-
-/* Assume FPEXC_EN is on and FPEXC_EX is off, clobbers {r2-r6} */
-.macro restore_vfp_state vfp_base
- VFPFLDMIA \vfp_base, r6 @ Load VFP registers
- ldm \vfp_base, {r2-r5} @ Load FPEXC, FPSCR, FPINST, FPINST2
-
- VFPFMXR FPSCR, r3
- tst r2, #FPEXC_EX @ Check for VFP Subarchitecture
- beq 1f
- VFPFMXR FPINST, r4
- tst r2, #FPEXC_FP2V
- VFPFMXR FPINST2, r5, ne
-1:
- VFPFMXR FPEXC, r2 @ FPEXC (last, in case !EN)
-.endm
-
-/* These are simply for the macros to work - values don't have meaning */
-.equ usr, 0
-.equ svc, 1
-.equ abt, 2
-.equ und, 3
-.equ irq, 4
-.equ fiq, 5
-
-.macro push_host_regs_mode mode
- mrs r2, SP_\mode
- mrs r3, LR_\mode
- mrs r4, SPSR_\mode
- push {r2, r3, r4}
-.endm
-
-/*
- * Store all host persistent registers on the stack.
- * Clobbers all registers, in all modes, except r0 and r1.
- */
-.macro save_host_regs
- /* Hyp regs. Only ELR_hyp (SPSR_hyp already saved) */
- mrs r2, ELR_hyp
- push {r2}
-
- /* usr regs */
- push {r4-r12} @ r0-r3 are always clobbered
- mrs r2, SP_usr
- mov r3, lr
- push {r2, r3}
-
- push_host_regs_mode svc
- push_host_regs_mode abt
- push_host_regs_mode und
- push_host_regs_mode irq
-
- /* fiq regs */
- mrs r2, r8_fiq
- mrs r3, r9_fiq
- mrs r4, r10_fiq
- mrs r5, r11_fiq
- mrs r6, r12_fiq
- mrs r7, SP_fiq
- mrs r8, LR_fiq
- mrs r9, SPSR_fiq
- push {r2-r9}
-.endm
-
-.macro pop_host_regs_mode mode
- pop {r2, r3, r4}
- msr SP_\mode, r2
- msr LR_\mode, r3
- msr SPSR_\mode, r4
-.endm
-
-/*
- * Restore all host registers from the stack.
- * Clobbers all registers, in all modes, except r0 and r1.
- */
-.macro restore_host_regs
- pop {r2-r9}
- msr r8_fiq, r2
- msr r9_fiq, r3
- msr r10_fiq, r4
- msr r11_fiq, r5
- msr r12_fiq, r6
- msr SP_fiq, r7
- msr LR_fiq, r8
- msr SPSR_fiq, r9
-
- pop_host_regs_mode irq
- pop_host_regs_mode und
- pop_host_regs_mode abt
- pop_host_regs_mode svc
-
- pop {r2, r3}
- msr SP_usr, r2
- mov lr, r3
- pop {r4-r12}
-
- pop {r2}
- msr ELR_hyp, r2
-.endm
-
-/*
- * Restore SP, LR and SPSR for a given mode. offset is the offset of
- * this mode's registers from the VCPU base.
- *
- * Assumes vcpu pointer in vcpu reg
- *
- * Clobbers r1, r2, r3, r4.
- */
-.macro restore_guest_regs_mode mode, offset
- add r1, vcpu, \offset
- ldm r1, {r2, r3, r4}
- msr SP_\mode, r2
- msr LR_\mode, r3
- msr SPSR_\mode, r4
-.endm
-
-/*
- * Restore all guest registers from the vcpu struct.
- *
- * Assumes vcpu pointer in vcpu reg
- *
- * Clobbers *all* registers.
- */
-.macro restore_guest_regs
- restore_guest_regs_mode svc, #VCPU_SVC_REGS
- restore_guest_regs_mode abt, #VCPU_ABT_REGS
- restore_guest_regs_mode und, #VCPU_UND_REGS
- restore_guest_regs_mode irq, #VCPU_IRQ_REGS
-
- add r1, vcpu, #VCPU_FIQ_REGS
- ldm r1, {r2-r9}
- msr r8_fiq, r2
- msr r9_fiq, r3
- msr r10_fiq, r4
- msr r11_fiq, r5
- msr r12_fiq, r6
- msr SP_fiq, r7
- msr LR_fiq, r8
- msr SPSR_fiq, r9
-
- @ Load return state
- ldr r2, [vcpu, #VCPU_PC]
- ldr r3, [vcpu, #VCPU_CPSR]
- msr ELR_hyp, r2
- msr SPSR_cxsf, r3
-
- @ Load user registers
- ldr r2, [vcpu, #VCPU_USR_SP]
- ldr r3, [vcpu, #VCPU_USR_LR]
- msr SP_usr, r2
- mov lr, r3
- add vcpu, vcpu, #(VCPU_USR_REGS)
- ldm vcpu, {r0-r12}
-.endm
-
-/*
- * Save SP, LR and SPSR for a given mode. offset is the offset of
- * this mode's registers from the VCPU base.
- *
- * Assumes vcpu pointer in vcpu reg
- *
- * Clobbers r2, r3, r4, r5.
- */
-.macro save_guest_regs_mode mode, offset
- add r2, vcpu, \offset
- mrs r3, SP_\mode
- mrs r4, LR_\mode
- mrs r5, SPSR_\mode
- stm r2, {r3, r4, r5}
-.endm
-
-/*
- * Save all guest registers to the vcpu struct
- * Expects guest's r0, r1, r2 on the stack.
- *
- * Assumes vcpu pointer in vcpu reg
- *
- * Clobbers r2, r3, r4, r5.
- */
-.macro save_guest_regs
- @ Store usr registers
- add r2, vcpu, #VCPU_USR_REG(3)
- stm r2, {r3-r12}
- add r2, vcpu, #VCPU_USR_REG(0)
- pop {r3, r4, r5} @ r0, r1, r2
- stm r2, {r3, r4, r5}
- mrs r2, SP_usr
- mov r3, lr
- str r2, [vcpu, #VCPU_USR_SP]
- str r3, [vcpu, #VCPU_USR_LR]
-
- @ Store return state
- mrs r2, ELR_hyp
- mrs r3, spsr
- str r2, [vcpu, #VCPU_PC]
- str r3, [vcpu, #VCPU_CPSR]
-
- @ Store other guest registers
- save_guest_regs_mode svc, #VCPU_SVC_REGS
- save_guest_regs_mode abt, #VCPU_ABT_REGS
- save_guest_regs_mode und, #VCPU_UND_REGS
- save_guest_regs_mode irq, #VCPU_IRQ_REGS
-.endm
-
-/* Reads cp15 registers from hardware and stores them in memory
- * @store_to_vcpu: If 0, registers are written in-order to the stack,
- * otherwise to the VCPU struct pointed to by vcpup
- *
- * Assumes vcpu pointer in vcpu reg
- *
- * Clobbers r2 - r12
- */
-.macro read_cp15_state store_to_vcpu
- mrc p15, 0, r2, c1, c0, 0 @ SCTLR
- mrc p15, 0, r3, c1, c0, 2 @ CPACR
- mrc p15, 0, r4, c2, c0, 2 @ TTBCR
- mrc p15, 0, r5, c3, c0, 0 @ DACR
- mrrc p15, 0, r6, r7, c2 @ TTBR 0
- mrrc p15, 1, r8, r9, c2 @ TTBR 1
- mrc p15, 0, r10, c10, c2, 0 @ PRRR
- mrc p15, 0, r11, c10, c2, 1 @ NMRR
- mrc p15, 2, r12, c0, c0, 0 @ CSSELR
-
- .if \store_to_vcpu == 0
- push {r2-r12} @ Push CP15 registers
- .else
- str r2, [vcpu, #CP15_OFFSET(c1_SCTLR)]
- str r3, [vcpu, #CP15_OFFSET(c1_CPACR)]
- str r4, [vcpu, #CP15_OFFSET(c2_TTBCR)]
- str r5, [vcpu, #CP15_OFFSET(c3_DACR)]
- add r2, vcpu, #CP15_OFFSET(c2_TTBR0)
- strd r6, r7, [r2]
- add r2, vcpu, #CP15_OFFSET(c2_TTBR1)
- strd r8, r9, [r2]
- str r10, [vcpu, #CP15_OFFSET(c10_PRRR)]
- str r11, [vcpu, #CP15_OFFSET(c10_NMRR)]
- str r12, [vcpu, #CP15_OFFSET(c0_CSSELR)]
- .endif
-
- mrc p15, 0, r2, c13, c0, 1 @ CID
- mrc p15, 0, r3, c13, c0, 2 @ TID_URW
- mrc p15, 0, r4, c13, c0, 3 @ TID_URO
- mrc p15, 0, r5, c13, c0, 4 @ TID_PRIV
- mrc p15, 0, r6, c5, c0, 0 @ DFSR
- mrc p15, 0, r7, c5, c0, 1 @ IFSR
- mrc p15, 0, r8, c5, c1, 0 @ ADFSR
- mrc p15, 0, r9, c5, c1, 1 @ AIFSR
- mrc p15, 0, r10, c6, c0, 0 @ DFAR
- mrc p15, 0, r11, c6, c0, 2 @ IFAR
- mrc p15, 0, r12, c12, c0, 0 @ VBAR
-
- .if \store_to_vcpu == 0
- push {r2-r12} @ Push CP15 registers
- .else
- str r2, [vcpu, #CP15_OFFSET(c13_CID)]
- str r3, [vcpu, #CP15_OFFSET(c13_TID_URW)]
- str r4, [vcpu, #CP15_OFFSET(c13_TID_URO)]
- str r5, [vcpu, #CP15_OFFSET(c13_TID_PRIV)]
- str r6, [vcpu, #CP15_OFFSET(c5_DFSR)]
- str r7, [vcpu, #CP15_OFFSET(c5_IFSR)]
- str r8, [vcpu, #CP15_OFFSET(c5_ADFSR)]
- str r9, [vcpu, #CP15_OFFSET(c5_AIFSR)]
- str r10, [vcpu, #CP15_OFFSET(c6_DFAR)]
- str r11, [vcpu, #CP15_OFFSET(c6_IFAR)]
- str r12, [vcpu, #CP15_OFFSET(c12_VBAR)]
- .endif
-
- mrc p15, 0, r2, c14, c1, 0 @ CNTKCTL
- mrrc p15, 0, r4, r5, c7 @ PAR
- mrc p15, 0, r6, c10, c3, 0 @ AMAIR0
- mrc p15, 0, r7, c10, c3, 1 @ AMAIR1
-
- .if \store_to_vcpu == 0
- push {r2,r4-r7}
- .else
- str r2, [vcpu, #CP15_OFFSET(c14_CNTKCTL)]
- add r12, vcpu, #CP15_OFFSET(c7_PAR)
- strd r4, r5, [r12]
- str r6, [vcpu, #CP15_OFFSET(c10_AMAIR0)]
- str r7, [vcpu, #CP15_OFFSET(c10_AMAIR1)]
- .endif
-.endm
-
-/*
- * Reads cp15 registers from memory and writes them to hardware
- * @read_from_vcpu: If 0, registers are read in-order from the stack,
- * otherwise from the VCPU struct pointed to by vcpup
- *
- * Assumes vcpu pointer in vcpu reg
- */
-.macro write_cp15_state read_from_vcpu
- .if \read_from_vcpu == 0
- pop {r2,r4-r7}
- .else
- ldr r2, [vcpu, #CP15_OFFSET(c14_CNTKCTL)]
- add r12, vcpu, #CP15_OFFSET(c7_PAR)
- ldrd r4, r5, [r12]
- ldr r6, [vcpu, #CP15_OFFSET(c10_AMAIR0)]
- ldr r7, [vcpu, #CP15_OFFSET(c10_AMAIR1)]
- .endif
-
- mcr p15, 0, r2, c14, c1, 0 @ CNTKCTL
- mcrr p15, 0, r4, r5, c7 @ PAR
- mcr p15, 0, r6, c10, c3, 0 @ AMAIR0
- mcr p15, 0, r7, c10, c3, 1 @ AMAIR1
-
- .if \read_from_vcpu == 0
- pop {r2-r12}
- .else
- ldr r2, [vcpu, #CP15_OFFSET(c13_CID)]
- ldr r3, [vcpu, #CP15_OFFSET(c13_TID_URW)]
- ldr r4, [vcpu, #CP15_OFFSET(c13_TID_URO)]
- ldr r5, [vcpu, #CP15_OFFSET(c13_TID_PRIV)]
- ldr r6, [vcpu, #CP15_OFFSET(c5_DFSR)]
- ldr r7, [vcpu, #CP15_OFFSET(c5_IFSR)]
- ldr r8, [vcpu, #CP15_OFFSET(c5_ADFSR)]
- ldr r9, [vcpu, #CP15_OFFSET(c5_AIFSR)]
- ldr r10, [vcpu, #CP15_OFFSET(c6_DFAR)]
- ldr r11, [vcpu, #CP15_OFFSET(c6_IFAR)]
- ldr r12, [vcpu, #CP15_OFFSET(c12_VBAR)]
- .endif
-
- mcr p15, 0, r2, c13, c0, 1 @ CID
- mcr p15, 0, r3, c13, c0, 2 @ TID_URW
- mcr p15, 0, r4, c13, c0, 3 @ TID_URO
- mcr p15, 0, r5, c13, c0, 4 @ TID_PRIV
- mcr p15, 0, r6, c5, c0, 0 @ DFSR
- mcr p15, 0, r7, c5, c0, 1 @ IFSR
- mcr p15, 0, r8, c5, c1, 0 @ ADFSR
- mcr p15, 0, r9, c5, c1, 1 @ AIFSR
- mcr p15, 0, r10, c6, c0, 0 @ DFAR
- mcr p15, 0, r11, c6, c0, 2 @ IFAR
- mcr p15, 0, r12, c12, c0, 0 @ VBAR
-
- .if \read_from_vcpu == 0
- pop {r2-r12}
- .else
- ldr r2, [vcpu, #CP15_OFFSET(c1_SCTLR)]
- ldr r3, [vcpu, #CP15_OFFSET(c1_CPACR)]
- ldr r4, [vcpu, #CP15_OFFSET(c2_TTBCR)]
- ldr r5, [vcpu, #CP15_OFFSET(c3_DACR)]
- add r12, vcpu, #CP15_OFFSET(c2_TTBR0)
- ldrd r6, r7, [r12]
- add r12, vcpu, #CP15_OFFSET(c2_TTBR1)
- ldrd r8, r9, [r12]
- ldr r10, [vcpu, #CP15_OFFSET(c10_PRRR)]
- ldr r11, [vcpu, #CP15_OFFSET(c10_NMRR)]
- ldr r12, [vcpu, #CP15_OFFSET(c0_CSSELR)]
- .endif
-
- mcr p15, 0, r2, c1, c0, 0 @ SCTLR
- mcr p15, 0, r3, c1, c0, 2 @ CPACR
- mcr p15, 0, r4, c2, c0, 2 @ TTBCR
- mcr p15, 0, r5, c3, c0, 0 @ DACR
- mcrr p15, 0, r6, r7, c2 @ TTBR 0
- mcrr p15, 1, r8, r9, c2 @ TTBR 1
- mcr p15, 0, r10, c10, c2, 0 @ PRRR
- mcr p15, 0, r11, c10, c2, 1 @ NMRR
- mcr p15, 2, r12, c0, c0, 0 @ CSSELR
-.endm
-
-/*
- * Save the VGIC CPU state into memory
- *
- * Assumes vcpu pointer in vcpu reg
- */
-.macro save_vgic_state
- /* Get VGIC VCTRL base into r2 */
- ldr r2, [vcpu, #VCPU_KVM]
- ldr r2, [r2, #KVM_VGIC_VCTRL]
- cmp r2, #0
- beq 2f
-
- /* Compute the address of struct vgic_cpu */
- add r11, vcpu, #VCPU_VGIC_CPU
-
- /* Save all interesting registers */
- ldr r4, [r2, #GICH_VMCR]
- ldr r5, [r2, #GICH_MISR]
- ldr r6, [r2, #GICH_EISR0]
- ldr r7, [r2, #GICH_EISR1]
- ldr r8, [r2, #GICH_ELRSR0]
- ldr r9, [r2, #GICH_ELRSR1]
- ldr r10, [r2, #GICH_APR]
-ARM_BE8(rev r4, r4 )
-ARM_BE8(rev r5, r5 )
-ARM_BE8(rev r6, r6 )
-ARM_BE8(rev r7, r7 )
-ARM_BE8(rev r8, r8 )
-ARM_BE8(rev r9, r9 )
-ARM_BE8(rev r10, r10 )
-
- str r4, [r11, #VGIC_V2_CPU_VMCR]
- str r5, [r11, #VGIC_V2_CPU_MISR]
-#ifdef CONFIG_CPU_ENDIAN_BE8
- str r6, [r11, #(VGIC_V2_CPU_EISR + 4)]
- str r7, [r11, #VGIC_V2_CPU_EISR]
- str r8, [r11, #(VGIC_V2_CPU_ELRSR + 4)]
- str r9, [r11, #VGIC_V2_CPU_ELRSR]
-#else
- str r6, [r11, #VGIC_V2_CPU_EISR]
- str r7, [r11, #(VGIC_V2_CPU_EISR + 4)]
- str r8, [r11, #VGIC_V2_CPU_ELRSR]
- str r9, [r11, #(VGIC_V2_CPU_ELRSR + 4)]
-#endif
- str r10, [r11, #VGIC_V2_CPU_APR]
-
- /* Clear GICH_HCR */
- mov r5, #0
- str r5, [r2, #GICH_HCR]
-
- /* Save list registers */
- add r2, r2, #GICH_LR0
- add r3, r11, #VGIC_V2_CPU_LR
- ldr r4, [r11, #VGIC_CPU_NR_LR]
-1: ldr r6, [r2], #4
-ARM_BE8(rev r6, r6 )
- str r6, [r3], #4
- subs r4, r4, #1
- bne 1b
-2:
-.endm
-
-/*
- * Restore the VGIC CPU state from memory
- *
- * Assumes vcpu pointer in vcpu reg
- */
-.macro restore_vgic_state
- /* Get VGIC VCTRL base into r2 */
- ldr r2, [vcpu, #VCPU_KVM]
- ldr r2, [r2, #KVM_VGIC_VCTRL]
- cmp r2, #0
- beq 2f
-
- /* Compute the address of struct vgic_cpu */
- add r11, vcpu, #VCPU_VGIC_CPU
-
- /* We only restore a minimal set of registers */
- ldr r3, [r11, #VGIC_V2_CPU_HCR]
- ldr r4, [r11, #VGIC_V2_CPU_VMCR]
- ldr r8, [r11, #VGIC_V2_CPU_APR]
-ARM_BE8(rev r3, r3 )
-ARM_BE8(rev r4, r4 )
-ARM_BE8(rev r8, r8 )
-
- str r3, [r2, #GICH_HCR]
- str r4, [r2, #GICH_VMCR]
- str r8, [r2, #GICH_APR]
-
- /* Restore list registers */
- add r2, r2, #GICH_LR0
- add r3, r11, #VGIC_V2_CPU_LR
- ldr r4, [r11, #VGIC_CPU_NR_LR]
-1: ldr r6, [r3], #4
-ARM_BE8(rev r6, r6 )
- str r6, [r2], #4
- subs r4, r4, #1
- bne 1b
-2:
-.endm
-
-#define CNTHCTL_PL1PCTEN (1 << 0)
-#define CNTHCTL_PL1PCEN (1 << 1)
-
-/*
- * Save the timer state onto the VCPU and allow physical timer/counter access
- * for the host.
- *
- * Assumes vcpu pointer in vcpu reg
- * Clobbers r2-r5
- */
-.macro save_timer_state
- ldr r4, [vcpu, #VCPU_KVM]
- ldr r2, [r4, #KVM_TIMER_ENABLED]
- cmp r2, #0
- beq 1f
-
- mrc p15, 0, r2, c14, c3, 1 @ CNTV_CTL
- str r2, [vcpu, #VCPU_TIMER_CNTV_CTL]
-
- isb
-
- mrrc p15, 3, rr_lo_hi(r2, r3), c14 @ CNTV_CVAL
- ldr r4, =VCPU_TIMER_CNTV_CVAL
- add r5, vcpu, r4
- strd r2, r3, [r5]
-
- @ Ensure host CNTVCT == CNTPCT
- mov r2, #0
- mcrr p15, 4, r2, r2, c14 @ CNTVOFF
-
-1:
- mov r2, #0 @ Clear ENABLE
- mcr p15, 0, r2, c14, c3, 1 @ CNTV_CTL
-
- @ Allow physical timer/counter access for the host
- mrc p15, 4, r2, c14, c1, 0 @ CNTHCTL
- orr r2, r2, #(CNTHCTL_PL1PCEN | CNTHCTL_PL1PCTEN)
- mcr p15, 4, r2, c14, c1, 0 @ CNTHCTL
-.endm
-
-/*
- * Load the timer state from the VCPU and deny physical timer/counter access
- * for the host.
- *
- * Assumes vcpu pointer in vcpu reg
- * Clobbers r2-r5
- */
-.macro restore_timer_state
- @ Disallow physical timer access for the guest
- @ Physical counter access is allowed
- mrc p15, 4, r2, c14, c1, 0 @ CNTHCTL
- orr r2, r2, #CNTHCTL_PL1PCTEN
- bic r2, r2, #CNTHCTL_PL1PCEN
- mcr p15, 4, r2, c14, c1, 0 @ CNTHCTL
-
- ldr r4, [vcpu, #VCPU_KVM]
- ldr r2, [r4, #KVM_TIMER_ENABLED]
- cmp r2, #0
- beq 1f
-
- ldr r2, [r4, #KVM_TIMER_CNTVOFF]
- ldr r3, [r4, #(KVM_TIMER_CNTVOFF + 4)]
- mcrr p15, 4, rr_lo_hi(r2, r3), c14 @ CNTVOFF
-
- ldr r4, =VCPU_TIMER_CNTV_CVAL
- add r5, vcpu, r4
- ldrd r2, r3, [r5]
- mcrr p15, 3, rr_lo_hi(r2, r3), c14 @ CNTV_CVAL
- isb
-
- ldr r2, [vcpu, #VCPU_TIMER_CNTV_CTL]
- and r2, r2, #3
- mcr p15, 0, r2, c14, c3, 1 @ CNTV_CTL
-1:
-.endm
-
-.equ vmentry, 0
-.equ vmexit, 1
-
-/* Configures the HSTR (Hyp System Trap Register) on entry/return
- * (hardware reset value is 0) */
-.macro set_hstr operation
- mrc p15, 4, r2, c1, c1, 3
- ldr r3, =HSTR_T(15)
- .if \operation == vmentry
- orr r2, r2, r3 @ Trap CR{15}
- .else
- bic r2, r2, r3 @ Don't trap any CRx accesses
- .endif
- mcr p15, 4, r2, c1, c1, 3
-.endm
-
-/* Configures the HCPTR (Hyp Coprocessor Trap Register) on entry/return
- * (hardware reset value is 0). Keep previous value in r2.
- * An ISB is emitted on vmexit/vmtrap, but executed on vmexit only if
- * VFP wasn't already enabled (always executed on vmtrap).
- * If a label is specified with vmexit, it is branched to if VFP wasn't
- * enabled.
- */
-.macro set_hcptr operation, mask, label = none
- mrc p15, 4, r2, c1, c1, 2
- ldr r3, =\mask
- .if \operation == vmentry
- orr r3, r2, r3 @ Trap coproc-accesses defined in mask
- .else
- bic r3, r2, r3 @ Don't trap defined coproc-accesses
- .endif
- mcr p15, 4, r3, c1, c1, 2
- .if \operation != vmentry
- .if \operation == vmexit
- tst r2, #(HCPTR_TCP(10) | HCPTR_TCP(11))
- beq 1f
- .endif
- isb
- .if \label != none
- b \label
- .endif
-1:
- .endif
-.endm
-
-/* Configures the HDCR (Hyp Debug Configuration Register) on entry/return
- * (hardware reset value is 0) */
-.macro set_hdcr operation
- mrc p15, 4, r2, c1, c1, 1
- ldr r3, =(HDCR_TPM|HDCR_TPMCR)
- .if \operation == vmentry
- orr r2, r2, r3 @ Trap some perfmon accesses
- .else
- bic r2, r2, r3 @ Don't trap any perfmon accesses
- .endif
- mcr p15, 4, r2, c1, c1, 1
-.endm
-
-/* Enable/Disable: stage-2 trans., trap interrupts, trap wfi, trap smc */
-.macro configure_hyp_role operation
- .if \operation == vmentry
- ldr r2, [vcpu, #VCPU_HCR]
- ldr r3, [vcpu, #VCPU_IRQ_LINES]
- orr r2, r2, r3
- .else
- mov r2, #0
- .endif
- mcr p15, 4, r2, c1, c1, 0 @ HCR
-.endm
-
-.macro load_vcpu
- mrc p15, 4, vcpu, c13, c0, 2 @ HTPIDR
-.endm
--
2.1.4
* [PATCH v2 22/28] ARM: KVM: Switch to C-based stage2 init
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-04 11:00 ` Marc Zyngier
1 sibling, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/include/asm/kvm_asm.h | 2 ++
arch/arm/include/asm/kvm_host.h | 1 +
arch/arm/kvm/hyp/Makefile | 1 +
arch/arm/kvm/hyp/hyp.h | 2 ++
arch/arm/kvm/hyp/s2-setup.c | 34 ++++++++++++++++++++++++++++++++++
arch/arm/kvm/init.S | 8 --------
6 files changed, 40 insertions(+), 8 deletions(-)
create mode 100644 arch/arm/kvm/hyp/s2-setup.c
diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 4841225..3283a2f 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -98,6 +98,8 @@ extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
+
+extern void __init_stage2_translation(void);
#endif
#endif /* __ARM_KVM_ASM_H__ */
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index c62d717..0fe41aa 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -224,6 +224,7 @@ static inline void __cpu_init_hyp_mode(phys_addr_t boot_pgd_ptr,
static inline void __cpu_init_stage2(void)
{
+ kvm_call_hyp(__init_stage2_translation);
}
static inline int kvm_arch_dev_ioctl_check_extension(long ext)
diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
index a7d3a7e..7152369 100644
--- a/arch/arm/kvm/hyp/Makefile
+++ b/arch/arm/kvm/hyp/Makefile
@@ -11,3 +11,4 @@ obj-$(CONFIG_KVM_ARM_HOST) += banked-sr.o
obj-$(CONFIG_KVM_ARM_HOST) += entry.o
obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o
obj-$(CONFIG_KVM_ARM_HOST) += switch.o
+obj-$(CONFIG_KVM_ARM_HOST) += s2-setup.o
diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
index 8bbd2a7..73bf9e3 100644
--- a/arch/arm/kvm/hyp/hyp.h
+++ b/arch/arm/kvm/hyp/hyp.h
@@ -71,6 +71,8 @@
#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
+#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
+#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
#define DACR __ACCESS_CP15(c3, 0, c0, 0)
#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
diff --git a/arch/arm/kvm/hyp/s2-setup.c b/arch/arm/kvm/hyp/s2-setup.c
new file mode 100644
index 0000000..f5f49c5
--- /dev/null
+++ b/arch/arm/kvm/hyp/s2-setup.c
@@ -0,0 +1,34 @@
+/*
+ * Copyright (C) 2016 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/types.h>
+#include <asm/kvm_arm.h>
+#include <asm/kvm_asm.h>
+
+#include "hyp.h"
+
+void __hyp_text __init_stage2_translation(void)
+{
+ u64 val;
+
+ val = read_sysreg(VTCR) & ~VTCR_MASK;
+
+ val |= read_sysreg(HTCR) & VTCR_HTCR_SH;
+ val |= KVM_VTCR_SL0 | KVM_VTCR_T0SZ | KVM_VTCR_S;
+
+ write_sysreg(val, VTCR);
+}
diff --git a/arch/arm/kvm/init.S b/arch/arm/kvm/init.S
index 3988e72..1f9ae17 100644
--- a/arch/arm/kvm/init.S
+++ b/arch/arm/kvm/init.S
@@ -84,14 +84,6 @@ __do_hyp_init:
orr r0, r0, r1
mcr p15, 4, r0, c2, c0, 2 @ HTCR
- mrc p15, 4, r1, c2, c1, 2 @ VTCR
- ldr r2, =VTCR_MASK
- bic r1, r1, r2
- bic r0, r0, #(~VTCR_HTCR_SH) @ clear non-reusable HTCR bits
- orr r1, r0, r1
- orr r1, r1, #(KVM_VTCR_SL0 | KVM_VTCR_T0SZ | KVM_VTCR_S)
- mcr p15, 4, r1, c2, c1, 2 @ VTCR
-
@ Use the same memory attributes for hyp. accesses as the kernel
@ (copy MAIRx ro HMAIRx).
mrc p15, 0, r0, c10, c2, 0
--
2.1.4
^ permalink raw reply related [flat|nested] 138+ messages in thread
* [PATCH v2 23/28] ARM: KVM: Remove __weak attributes
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-04 11:00 ` Marc Zyngier
-1 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
Now that the old code is long gone, we can remove all the weak
attributes, as there is only one version of the code.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/hyp/hyp-entry.S | 4 +---
arch/arm/kvm/hyp/switch.c | 2 +-
arch/arm/kvm/hyp/tlb.c | 6 +++---
3 files changed, 5 insertions(+), 7 deletions(-)
diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
index ca412ad..e2a45b8 100644
--- a/arch/arm/kvm/hyp/hyp-entry.S
+++ b/arch/arm/kvm/hyp/hyp-entry.S
@@ -58,10 +58,8 @@
*/
.align 5
-__hyp_vector:
- .global __hyp_vector
__kvm_hyp_vector:
- .weak __kvm_hyp_vector
+ .global __kvm_hyp_vector
@ Hyp-mode exception vector
W(b) hyp_reset
diff --git a/arch/arm/kvm/hyp/switch.c b/arch/arm/kvm/hyp/switch.c
index 67f3944..7325ad3 100644
--- a/arch/arm/kvm/hyp/switch.c
+++ b/arch/arm/kvm/hyp/switch.c
@@ -187,7 +187,7 @@ again:
return exit_code;
}
-__alias(__guest_run) int __weak __kvm_vcpu_run(struct kvm_vcpu *vcpu);
+__alias(__guest_run) int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
static const char * const __hyp_panic_string[] = {
[ARM_EXCEPTION_RESET] = "\nHYP panic: RST?? PC:%08x CPSR:%08x",
diff --git a/arch/arm/kvm/hyp/tlb.c b/arch/arm/kvm/hyp/tlb.c
index 993fe89..3854e31 100644
--- a/arch/arm/kvm/hyp/tlb.c
+++ b/arch/arm/kvm/hyp/tlb.c
@@ -50,14 +50,14 @@ static void __hyp_text __tlb_flush_vmid(struct kvm *kvm)
write_sysreg(0, VTTBR);
}
-__alias(__tlb_flush_vmid) void __weak __kvm_tlb_flush_vmid(struct kvm *kvm);
+__alias(__tlb_flush_vmid) void __kvm_tlb_flush_vmid(struct kvm *kvm);
static void __hyp_text __tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
{
__tlb_flush_vmid(kvm);
}
-__alias(__tlb_flush_vmid_ipa) void __weak __kvm_tlb_flush_vmid_ipa(struct kvm *kvm,
+__alias(__tlb_flush_vmid_ipa) void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm,
phys_addr_t ipa);
static void __hyp_text __tlb_flush_vm_context(void)
@@ -68,4 +68,4 @@ static void __hyp_text __tlb_flush_vm_context(void)
dsb(ish);
}
-__alias(__tlb_flush_vm_context) void __weak __kvm_flush_vm_context(void);
+__alias(__tlb_flush_vm_context) void __kvm_flush_vm_context(void);
--
2.1.4
^ permalink raw reply related [flat|nested] 138+ messages in thread
* [PATCH v2 24/28] ARM: KVM: Turn CP15 defines to an enum
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-04 11:00 ` Marc Zyngier
-1 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
Just like on arm64, having the CP15 registers expressed as a set
of #defines has been very conflict-prone. Let's turn it into an
enum, which should make it more manageable.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/include/asm/kvm_asm.h | 33 ---------------------------------
arch/arm/include/asm/kvm_host.h | 39 +++++++++++++++++++++++++++++++++++++++
arch/arm/kvm/guest.c | 1 -
3 files changed, 39 insertions(+), 34 deletions(-)
diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 3283a2f..083825f 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -21,39 +21,6 @@
#include <asm/virt.h>
-/* 0 is reserved as an invalid value. */
-#define c0_MPIDR 1 /* MultiProcessor ID Register */
-#define c0_CSSELR 2 /* Cache Size Selection Register */
-#define c1_SCTLR 3 /* System Control Register */
-#define c1_ACTLR 4 /* Auxiliary Control Register */
-#define c1_CPACR 5 /* Coprocessor Access Control */
-#define c2_TTBR0 6 /* Translation Table Base Register 0 */
-#define c2_TTBR0_high 7 /* TTBR0 top 32 bits */
-#define c2_TTBR1 8 /* Translation Table Base Register 1 */
-#define c2_TTBR1_high 9 /* TTBR1 top 32 bits */
-#define c2_TTBCR 10 /* Translation Table Base Control R. */
-#define c3_DACR 11 /* Domain Access Control Register */
-#define c5_DFSR 12 /* Data Fault Status Register */
-#define c5_IFSR 13 /* Instruction Fault Status Register */
-#define c5_ADFSR 14 /* Auxilary Data Fault Status R */
-#define c5_AIFSR 15 /* Auxilary Instrunction Fault Status R */
-#define c6_DFAR 16 /* Data Fault Address Register */
-#define c6_IFAR 17 /* Instruction Fault Address Register */
-#define c7_PAR 18 /* Physical Address Register */
-#define c7_PAR_high 19 /* PAR top 32 bits */
-#define c9_L2CTLR 20 /* Cortex A15/A7 L2 Control Register */
-#define c10_PRRR 21 /* Primary Region Remap Register */
-#define c10_NMRR 22 /* Normal Memory Remap Register */
-#define c12_VBAR 23 /* Vector Base Address Register */
-#define c13_CID 24 /* Context ID Register */
-#define c13_TID_URW 25 /* Thread ID, User R/W */
-#define c13_TID_URO 26 /* Thread ID, User R/O */
-#define c13_TID_PRIV 27 /* Thread ID, Privileged */
-#define c14_CNTKCTL 28 /* Timer Control Register (PL1) */
-#define c10_AMAIR0 29 /* Auxilary Memory Attribute Indirection Reg0 */
-#define c10_AMAIR1 30 /* Auxilary Memory Attribute Indirection Reg1 */
-#define NR_CP15_REGS 31 /* Number of regs (incl. invalid) */
-
#define ARM_EXCEPTION_RESET 0
#define ARM_EXCEPTION_UNDEFINED 1
#define ARM_EXCEPTION_SOFTWARE 2
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 0fe41aa..daf6a71 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -88,6 +88,45 @@ struct kvm_vcpu_fault_info {
u32 hyp_pc; /* PC when exception was taken from Hyp mode */
};
+/*
+ * 0 is reserved as an invalid value.
+ * Order should be kept in sync with the save/restore code.
+ */
+enum vcpu_sysreg {
+ __INVALID_SYSREG__,
+ c0_MPIDR, /* MultiProcessor ID Register */
+ c0_CSSELR, /* Cache Size Selection Register */
+ c1_SCTLR, /* System Control Register */
+ c1_ACTLR, /* Auxiliary Control Register */
+ c1_CPACR, /* Coprocessor Access Control */
+ c2_TTBR0, /* Translation Table Base Register 0 */
+ c2_TTBR0_high, /* TTBR0 top 32 bits */
+ c2_TTBR1, /* Translation Table Base Register 1 */
+ c2_TTBR1_high, /* TTBR1 top 32 bits */
+ c2_TTBCR, /* Translation Table Base Control R. */
+ c3_DACR, /* Domain Access Control Register */
+ c5_DFSR, /* Data Fault Status Register */
+ c5_IFSR, /* Instruction Fault Status Register */
+ c5_ADFSR, /* Auxilary Data Fault Status R */
+ c5_AIFSR, /* Auxilary Instrunction Fault Status R */
+ c6_DFAR, /* Data Fault Address Register */
+ c6_IFAR, /* Instruction Fault Address Register */
+ c7_PAR, /* Physical Address Register */
+ c7_PAR_high, /* PAR top 32 bits */
+ c9_L2CTLR, /* Cortex A15/A7 L2 Control Register */
+ c10_PRRR, /* Primary Region Remap Register */
+ c10_NMRR, /* Normal Memory Remap Register */
+ c12_VBAR, /* Vector Base Address Register */
+ c13_CID, /* Context ID Register */
+ c13_TID_URW, /* Thread ID, User R/W */
+ c13_TID_URO, /* Thread ID, User R/O */
+ c13_TID_PRIV, /* Thread ID, Privileged */
+ c14_CNTKCTL, /* Timer Control Register (PL1) */
+ c10_AMAIR0, /* Auxilary Memory Attribute Indirection Reg0 */
+ c10_AMAIR1, /* Auxilary Memory Attribute Indirection Reg1 */
+ NR_CP15_REGS /* Number of regs (incl. invalid) */
+};
+
struct kvm_cpu_context {
struct kvm_regs gp_regs;
struct vfp_hard_struct vfp;
diff --git a/arch/arm/kvm/guest.c b/arch/arm/kvm/guest.c
index 86e26fb..12cbb68 100644
--- a/arch/arm/kvm/guest.c
+++ b/arch/arm/kvm/guest.c
@@ -25,7 +25,6 @@
#include <asm/cputype.h>
#include <asm/uaccess.h>
#include <asm/kvm.h>
-#include <asm/kvm_asm.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_coproc.h>
--
2.1.4
^ permalink raw reply related [flat|nested] 138+ messages in thread
* [PATCH v2 25/28] ARM: KVM: Cleanup asm-offsets.c
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-04 11:00 ` Marc Zyngier
-1 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
Since we don't have much assembler left, most of the KVM stuff
in asm-offsets.c is now superfluous. Let's get rid of it.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kernel/asm-offsets.c | 30 ------------------------------
1 file changed, 30 deletions(-)
diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
index 2f3e0b0..1f24c32 100644
--- a/arch/arm/kernel/asm-offsets.c
+++ b/arch/arm/kernel/asm-offsets.c
@@ -170,42 +170,12 @@ int main(void)
DEFINE(CACHE_WRITEBACK_GRANULE, __CACHE_WRITEBACK_GRANULE);
BLANK();
#ifdef CONFIG_KVM_ARM_HOST
- DEFINE(VCPU_KVM, offsetof(struct kvm_vcpu, kvm));
- DEFINE(VCPU_MIDR, offsetof(struct kvm_vcpu, arch.midr));
DEFINE(VCPU_GUEST_CTXT, offsetof(struct kvm_vcpu, arch.ctxt));
DEFINE(VCPU_HOST_CTXT, offsetof(struct kvm_vcpu, arch.host_cpu_context));
DEFINE(CPU_CTXT_VFP, offsetof(struct kvm_cpu_context, vfp));
- DEFINE(CPU_CTXT_CP15, offsetof(struct kvm_cpu_context, cp15));
DEFINE(CPU_CTXT_GP_REGS, offsetof(struct kvm_cpu_context, gp_regs));
DEFINE(GP_REGS_USR, offsetof(struct kvm_regs, usr_regs));
- DEFINE(GP_REGS_SVC, offsetof(struct kvm_regs, svc_regs));
- DEFINE(GP_REGS_ABT, offsetof(struct kvm_regs, abt_regs));
- DEFINE(GP_REGS_UND, offsetof(struct kvm_regs, und_regs));
- DEFINE(GP_REGS_IRQ, offsetof(struct kvm_regs, irq_regs));
- DEFINE(GP_REGS_FIQ, offsetof(struct kvm_regs, fiq_regs));
- DEFINE(GP_REGS_PC, offsetof(struct kvm_regs, usr_regs.ARM_pc));
- DEFINE(GP_REGS_CPSR, offsetof(struct kvm_regs, usr_regs.ARM_cpsr));
- DEFINE(VCPU_HCR, offsetof(struct kvm_vcpu, arch.hcr));
- DEFINE(VCPU_IRQ_LINES, offsetof(struct kvm_vcpu, arch.irq_lines));
- DEFINE(VCPU_HSR, offsetof(struct kvm_vcpu, arch.fault.hsr));
- DEFINE(VCPU_HxFAR, offsetof(struct kvm_vcpu, arch.fault.hxfar));
- DEFINE(VCPU_HPFAR, offsetof(struct kvm_vcpu, arch.fault.hpfar));
DEFINE(VCPU_HYP_PC, offsetof(struct kvm_vcpu, arch.fault.hyp_pc));
- DEFINE(VCPU_VGIC_CPU, offsetof(struct kvm_vcpu, arch.vgic_cpu));
- DEFINE(VGIC_V2_CPU_HCR, offsetof(struct vgic_cpu, vgic_v2.vgic_hcr));
- DEFINE(VGIC_V2_CPU_VMCR, offsetof(struct vgic_cpu, vgic_v2.vgic_vmcr));
- DEFINE(VGIC_V2_CPU_MISR, offsetof(struct vgic_cpu, vgic_v2.vgic_misr));
- DEFINE(VGIC_V2_CPU_EISR, offsetof(struct vgic_cpu, vgic_v2.vgic_eisr));
- DEFINE(VGIC_V2_CPU_ELRSR, offsetof(struct vgic_cpu, vgic_v2.vgic_elrsr));
- DEFINE(VGIC_V2_CPU_APR, offsetof(struct vgic_cpu, vgic_v2.vgic_apr));
- DEFINE(VGIC_V2_CPU_LR, offsetof(struct vgic_cpu, vgic_v2.vgic_lr));
- DEFINE(VGIC_CPU_NR_LR, offsetof(struct vgic_cpu, nr_lr));
- DEFINE(VCPU_TIMER_CNTV_CTL, offsetof(struct kvm_vcpu, arch.timer_cpu.cntv_ctl));
- DEFINE(VCPU_TIMER_CNTV_CVAL, offsetof(struct kvm_vcpu, arch.timer_cpu.cntv_cval));
- DEFINE(KVM_TIMER_CNTVOFF, offsetof(struct kvm, arch.timer.cntvoff));
- DEFINE(KVM_TIMER_ENABLED, offsetof(struct kvm, arch.timer.enabled));
- DEFINE(KVM_VGIC_VCTRL, offsetof(struct kvm, arch.vgic.vctrl_base));
- DEFINE(KVM_VTTBR, offsetof(struct kvm, arch.vttbr));
#endif
BLANK();
#ifdef CONFIG_VDSO
--
2.1.4
^ permalink raw reply related [flat|nested] 138+ messages in thread
* [PATCH v2 26/28] ARM: KVM: Remove unused hyp_pc field
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-04 11:00 ` Marc Zyngier
-1 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
This field was never populated, and the panic code already
does something similar. Delete the related code.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/include/asm/kvm_emulate.h | 5 -----
arch/arm/include/asm/kvm_host.h | 1 -
arch/arm/kernel/asm-offsets.c | 1 -
arch/arm/kvm/handle_exit.c | 5 -----
4 files changed, 12 deletions(-)
diff --git a/arch/arm/include/asm/kvm_emulate.h b/arch/arm/include/asm/kvm_emulate.h
index f710616..8a8c6de 100644
--- a/arch/arm/include/asm/kvm_emulate.h
+++ b/arch/arm/include/asm/kvm_emulate.h
@@ -108,11 +108,6 @@ static inline phys_addr_t kvm_vcpu_get_fault_ipa(struct kvm_vcpu *vcpu)
return ((phys_addr_t)vcpu->arch.fault.hpfar & HPFAR_MASK) << 8;
}
-static inline unsigned long kvm_vcpu_get_hyp_pc(struct kvm_vcpu *vcpu)
-{
- return vcpu->arch.fault.hyp_pc;
-}
-
static inline bool kvm_vcpu_dabt_isvalid(struct kvm_vcpu *vcpu)
{
return kvm_vcpu_get_hsr(vcpu) & HSR_ISV;
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index daf6a71..19e9aba 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -85,7 +85,6 @@ struct kvm_vcpu_fault_info {
u32 hsr; /* Hyp Syndrome Register */
u32 hxfar; /* Hyp Data/Inst. Fault Address Register */
u32 hpfar; /* Hyp IPA Fault Address Register */
- u32 hyp_pc; /* PC when exception was taken from Hyp mode */
};
/*
diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
index 1f24c32..27d0581 100644
--- a/arch/arm/kernel/asm-offsets.c
+++ b/arch/arm/kernel/asm-offsets.c
@@ -175,7 +175,6 @@ int main(void)
DEFINE(CPU_CTXT_VFP, offsetof(struct kvm_cpu_context, vfp));
DEFINE(CPU_CTXT_GP_REGS, offsetof(struct kvm_cpu_context, gp_regs));
DEFINE(GP_REGS_USR, offsetof(struct kvm_regs, usr_regs));
- DEFINE(VCPU_HYP_PC, offsetof(struct kvm_vcpu, arch.fault.hyp_pc));
#endif
BLANK();
#ifdef CONFIG_VDSO
diff --git a/arch/arm/kvm/handle_exit.c b/arch/arm/kvm/handle_exit.c
index 3ede90d..5377d753 100644
--- a/arch/arm/kvm/handle_exit.c
+++ b/arch/arm/kvm/handle_exit.c
@@ -147,11 +147,6 @@ int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
switch (exception_index) {
case ARM_EXCEPTION_IRQ:
return 1;
- case ARM_EXCEPTION_UNDEFINED:
- kvm_err("Undefined exception in Hyp mode at: %#08lx\n",
- kvm_vcpu_get_hyp_pc(vcpu));
- BUG();
- panic("KVM: Hypervisor undefined exception!\n");
case ARM_EXCEPTION_DATA_ABORT:
case ARM_EXCEPTION_PREF_ABORT:
case ARM_EXCEPTION_HVC:
--
2.1.4
^ permalink raw reply related [flat|nested] 138+ messages in thread
* [PATCH v2 27/28] ARM: KVM: Remove handling of ARM_EXCEPTION_DATA/PREF_ABORT
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-04 11:00 ` Marc Zyngier
1 sibling, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
These are now handled as a panic, so there is little point in
keeping them around.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/handle_exit.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/arm/kvm/handle_exit.c b/arch/arm/kvm/handle_exit.c
index 5377d753..3f1ef0d 100644
--- a/arch/arm/kvm/handle_exit.c
+++ b/arch/arm/kvm/handle_exit.c
@@ -147,8 +147,6 @@ int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
switch (exception_index) {
case ARM_EXCEPTION_IRQ:
return 1;
- case ARM_EXCEPTION_DATA_ABORT:
- case ARM_EXCEPTION_PREF_ABORT:
case ARM_EXCEPTION_HVC:
/*
* See ARM ARM B1.14.1: "Hyp traps on instructions
--
2.1.4
* [PATCH v2 28/28] ARM: KVM: Remove __kvm_hyp_exit/__kvm_hyp_exit_end
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-04 11:00 ` Marc Zyngier
1 sibling, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-04 11:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
I have no idea what these were for - probably a leftover from an
early implementation. Good bye!
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/include/asm/kvm_asm.h | 3 ---
1 file changed, 3 deletions(-)
diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 083825f..15d58b4 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -55,9 +55,6 @@ struct kvm_vcpu;
extern char __kvm_hyp_init[];
extern char __kvm_hyp_init_end[];
-extern char __kvm_hyp_exit[];
-extern char __kvm_hyp_exit_end[];
-
extern char __kvm_hyp_vector[];
extern void __kvm_flush_vm_context(void);
--
2.1.4
* Re: [PATCH v2 18/28] ARM: KVM: Add HYP mode entry code
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 17:00 ` Christoffer Dall
1 sibling, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 17:00 UTC (permalink / raw)
To: Marc Zyngier; +Cc: kvm, linux-arm-kernel, kvmarm
On Thu, Feb 04, 2016 at 11:00:35AM +0000, Marc Zyngier wrote:
> This part is almost entirely borrowed from the existing code, just
> slightly simplifying the HYP function call (as we now save SPSR_hyp
> in the world switch).
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/kvm/hyp/Makefile | 1 +
> arch/arm/kvm/hyp/hyp-entry.S | 157 +++++++++++++++++++++++++++++++++++++++++++
> arch/arm/kvm/hyp/hyp.h | 2 +
> 3 files changed, 160 insertions(+)
> create mode 100644 arch/arm/kvm/hyp/hyp-entry.S
>
> diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
> index cfab402..a7d3a7e 100644
> --- a/arch/arm/kvm/hyp/Makefile
> +++ b/arch/arm/kvm/hyp/Makefile
> @@ -9,4 +9,5 @@ obj-$(CONFIG_KVM_ARM_HOST) += vgic-v2-sr.o
> obj-$(CONFIG_KVM_ARM_HOST) += vfp.o
> obj-$(CONFIG_KVM_ARM_HOST) += banked-sr.o
> obj-$(CONFIG_KVM_ARM_HOST) += entry.o
> +obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o
> obj-$(CONFIG_KVM_ARM_HOST) += switch.o
> diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
> new file mode 100644
> index 0000000..44bc11f
> --- /dev/null
> +++ b/arch/arm/kvm/hyp/hyp-entry.S
> @@ -0,0 +1,157 @@
> +/*
> + * Copyright (C) 2012 - Virtual Open Systems and Columbia University
> + * Author: Christoffer Dall <c.dall@virtualopensystems.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License, version 2, as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; if not, write to the Free Software
> + * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
> + */
> +
> +#include <linux/linkage.h>
> +#include <asm/kvm_arm.h>
> +#include <asm/kvm_asm.h>
> +
> + .arch_extension virt
> +
> + .text
> + .pushsection .hyp.text, "ax"
> +
> +.macro load_vcpu reg
> + mrc p15, 4, \reg, c13, c0, 2 @ HTPIDR
> +.endm
> +
> +/********************************************************************
> + * Hypervisor exception vector and handlers
> + *
> + *
> + * The KVM/ARM Hypervisor ABI is defined as follows:
> + *
> + * Entry to Hyp mode from the host kernel will happen _only_ when an HVC
> + * instruction is issued since all traps are disabled when running the host
> + * kernel as per the Hyp-mode initialization at boot time.
> + *
> + * HVC instructions cause a trap to the vector page + offset 0x14 (see hyp_hvc
> + * below) when the HVC instruction is called from SVC mode (i.e. a guest or the
> + * host kernel) and they cause a trap to the vector page + offset 0x8 when HVC
> + * instructions are called from within Hyp-mode.
> + *
> + * Hyp-ABI: Calling HYP-mode functions from host (in SVC mode):
> + * Switching to Hyp mode is done through a simple HVC #0 instruction. The
> + * exception vector code will check that the HVC comes from VMID==0.
> + * - r0 contains a pointer to a HYP function
> + * - r1, r2, and r3 contain arguments to the above function.
> + * - The HYP function will be called with its arguments in r0, r1 and r2.
> + * On HYP function return, we return directly to SVC.
> + *
> + * Note that the above is used to execute code in Hyp-mode from a host-kernel
> + * point of view, and is a different concept from performing a world-switch and
> + * executing guest code SVC mode (with a VMID != 0).
> + */
> +
> + .align 5
> +__hyp_vector:
> + .global __hyp_vector
> +__kvm_hyp_vector:
> + .weak __kvm_hyp_vector
> +
> + @ Hyp-mode exception vector
> + W(b) hyp_reset
> + W(b) hyp_undef
> + W(b) hyp_svc
> + W(b) hyp_pabt
> + W(b) hyp_dabt
> + W(b) hyp_hvc
> + W(b) hyp_irq
> + W(b) hyp_fiq
> +
> +.macro invalid_vector label, cause
> + .align
> +\label: b .
> +.endm
> +
> + invalid_vector hyp_reset
> + invalid_vector hyp_undef
> + invalid_vector hyp_svc
> + invalid_vector hyp_pabt
> + invalid_vector hyp_dabt
> + invalid_vector hyp_fiq
> +
> +hyp_hvc:
> + /*
> + * Getting here is either because of a trap from a guest,
> + * or from executing HVC from the host kernel, which means
> + * "do something in Hyp mode".
> + */
> + push {r0, r1, r2}
> +
> + @ Check syndrome register
> + mrc p15, 4, r1, c5, c2, 0 @ HSR
> + lsr r0, r1, #HSR_EC_SHIFT
> + cmp r0, #HSR_EC_HVC
> + bne guest_trap @ Not HVC instr.
> +
> + /*
> + * Let's check if the HVC came from VMID 0 and allow simple
> + * switch to Hyp mode
> + */
> + mrrc p15, 6, r0, r2, c2
> + lsr r2, r2, #16
> + and r2, r2, #0xff
> + cmp r2, #0
> + bne guest_trap @ Guest called HVC
> +
> + /*
> + * Getting here means host called HVC, we shift parameters and branch
> + * to Hyp function.
> + */
> + pop {r0, r1, r2}
> +
> + /* Check for __hyp_get_vectors */
> + cmp r0, #-1
> + mrceq p15, 4, r0, c12, c0, 0 @ get HVBAR
> + beq 1f
> +
> + push {lr}
> +
> + mov lr, r0
> + mov r0, r1
> + mov r1, r2
> + mov r2, r3
> +
> +THUMB( orr lr, #1)
> + blx lr @ Call the HYP function
> +
> + pop {lr}
> +1: eret
> +
> +guest_trap:
> + load_vcpu r0 @ Load VCPU pointer to r0
> +
> + @ Check if we need the fault information
nit: this is not about faults at this point, so this comment should
either go or be reworded to "let's check if we trapped on guest VFP
access"
and I think the lsr can be moved into the ifdef as well.
> + lsr r1, r1, #HSR_EC_SHIFT
> +#ifdef CONFIG_VFPv3
> + cmp r1, #HSR_EC_CP_0_13
> + beq __vfp_guest_restore
> +#endif
> +
> + mov r1, #ARM_EXCEPTION_HVC
> + b __guest_exit
> +
> +hyp_irq:
> + push {r0, r1, r2}
> + mov r1, #ARM_EXCEPTION_IRQ
> + load_vcpu r0 @ Load VCPU pointer to r0
> + b __guest_exit
> +
> + .ltorg
> +
> + .popsection
> diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
> index 7ddca54..8bbd2a7 100644
> --- a/arch/arm/kvm/hyp/hyp.h
> +++ b/arch/arm/kvm/hyp/hyp.h
> @@ -123,4 +123,6 @@ void __hyp_text __banked_restore_state(struct kvm_cpu_context *ctxt);
>
> int asmlinkage __guest_enter(struct kvm_vcpu *vcpu,
> struct kvm_cpu_context *host);
> +int asmlinkage __hyp_do_panic(const char *, int, u32);
> +
> #endif /* __ARM_KVM_HYP_H__ */
> --
> 2.1.4
>
Otherwise looks good.
Thanks,
-Christoffer
* Re: [PATCH v2 20/28] ARM: KVM: Change kvm_call_hyp return type to unsigned long
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:28 ` Christoffer Dall
1 sibling, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:28 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm
On Thu, Feb 04, 2016 at 11:00:37AM +0000, Marc Zyngier wrote:
> Having u64 as the kvm_call_hyp return type is problematic, as
> it forces all kind of tricks for the return values from HYP
> to be promoted to 64bit (LE has the LSB in r0, and BE has them
> in r1).
>
> Since the only user of the return value is perfectly happy with
> a 32bit value, let's make kvm_call_hyp return an unsigned long,
> which is 32bit on ARM.
I wonder why I ever did this as a u64...
should the arm64 counterpart be modified to an unsigned long as well?
>
> This solves yet another headache.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/include/asm/kvm_host.h | 2 +-
> arch/arm/kvm/interrupts.S | 10 ++--------
> 2 files changed, 3 insertions(+), 9 deletions(-)
>
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index 02932ba..c62d717 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -165,7 +165,7 @@ unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
> int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
> int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
> int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
> -u64 kvm_call_hyp(void *hypfn, ...);
> +unsigned long kvm_call_hyp(void *hypfn, ...);
> void force_vm_exit(const cpumask_t *mask);
>
> #define KVM_ARCH_WANT_MMU_NOTIFIER
> diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
> index 7bfb289..01eb169 100644
> --- a/arch/arm/kvm/interrupts.S
> +++ b/arch/arm/kvm/interrupts.S
> @@ -207,20 +207,14 @@ after_vfp_restore:
>
> restore_host_regs
> clrex @ Clear exclusive monitor
> -#ifndef CONFIG_CPU_ENDIAN_BE8
> mov r0, r1 @ Return the return code
> - mov r1, #0 @ Clear upper bits in return value
> -#else
> - @ r1 already has return code
> - mov r0, #0 @ Clear upper bits in return value
> -#endif /* CONFIG_CPU_ENDIAN_BE8 */
> bx lr @ return to IOCTL
>
> /********************************************************************
> * Call function in Hyp mode
> *
> *
> - * u64 kvm_call_hyp(void *hypfn, ...);
> + * unsigned long kvm_call_hyp(void *hypfn, ...);
> *
> * This is not really a variadic function in the classic C-way and care must
> * be taken when calling this to ensure parameters are passed in registers
> @@ -231,7 +225,7 @@ after_vfp_restore:
> * passed as r0, r1, and r2 (a maximum of 3 arguments in addition to the
> * function pointer can be passed). The function being called must be mapped
> * in Hyp mode (see init_hyp_mode in arch/arm/kvm/arm.c). Return values are
> - * passed in r0 and r1.
> + * passed in r0 (strictly 32bit).
> *
> * A function pointer with a value of 0xffffffff has a special meaning,
> * and is used to implement __hyp_get_vectors in the same way as in
> --
> 2.1.4
>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
* Re: [PATCH v2 26/28] ARM: KVM: Remove unused hyp_pc field
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:39 ` Christoffer Dall
1 sibling, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:39 UTC (permalink / raw)
To: Marc Zyngier; +Cc: kvm, linux-arm-kernel, kvmarm
On Thu, Feb 04, 2016 at 11:00:43AM +0000, Marc Zyngier wrote:
> This field was never populated, and the panic code already
> does something similar. Delete the related code.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/include/asm/kvm_emulate.h | 5 -----
> arch/arm/include/asm/kvm_host.h | 1 -
> arch/arm/kernel/asm-offsets.c | 1 -
> arch/arm/kvm/handle_exit.c | 5 -----
> 4 files changed, 12 deletions(-)
>
> diff --git a/arch/arm/include/asm/kvm_emulate.h b/arch/arm/include/asm/kvm_emulate.h
> index f710616..8a8c6de 100644
> --- a/arch/arm/include/asm/kvm_emulate.h
> +++ b/arch/arm/include/asm/kvm_emulate.h
> @@ -108,11 +108,6 @@ static inline phys_addr_t kvm_vcpu_get_fault_ipa(struct kvm_vcpu *vcpu)
> return ((phys_addr_t)vcpu->arch.fault.hpfar & HPFAR_MASK) << 8;
> }
>
> -static inline unsigned long kvm_vcpu_get_hyp_pc(struct kvm_vcpu *vcpu)
> -{
> - return vcpu->arch.fault.hyp_pc;
> -}
> -
> static inline bool kvm_vcpu_dabt_isvalid(struct kvm_vcpu *vcpu)
> {
> return kvm_vcpu_get_hsr(vcpu) & HSR_ISV;
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index daf6a71..19e9aba 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -85,7 +85,6 @@ struct kvm_vcpu_fault_info {
> u32 hsr; /* Hyp Syndrome Register */
> u32 hxfar; /* Hyp Data/Inst. Fault Address Register */
> u32 hpfar; /* Hyp IPA Fault Address Register */
> - u32 hyp_pc; /* PC when exception was taken from Hyp mode */
> };
>
> /*
> diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
> index 1f24c32..27d0581 100644
> --- a/arch/arm/kernel/asm-offsets.c
> +++ b/arch/arm/kernel/asm-offsets.c
> @@ -175,7 +175,6 @@ int main(void)
> DEFINE(CPU_CTXT_VFP, offsetof(struct kvm_cpu_context, vfp));
> DEFINE(CPU_CTXT_GP_REGS, offsetof(struct kvm_cpu_context, gp_regs));
> DEFINE(GP_REGS_USR, offsetof(struct kvm_regs, usr_regs));
> - DEFINE(VCPU_HYP_PC, offsetof(struct kvm_vcpu, arch.fault.hyp_pc));
> #endif
> BLANK();
> #ifdef CONFIG_VDSO
> diff --git a/arch/arm/kvm/handle_exit.c b/arch/arm/kvm/handle_exit.c
> index 3ede90d..5377d753 100644
> --- a/arch/arm/kvm/handle_exit.c
> +++ b/arch/arm/kvm/handle_exit.c
> @@ -147,11 +147,6 @@ int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
> switch (exception_index) {
> case ARM_EXCEPTION_IRQ:
> return 1;
> - case ARM_EXCEPTION_UNDEFINED:
> - kvm_err("Undefined exception in Hyp mode at: %#08lx\n",
> - kvm_vcpu_get_hyp_pc(vcpu));
> - BUG();
> - panic("KVM: Hypervisor undefined exception!\n");
> case ARM_EXCEPTION_DATA_ABORT:
> case ARM_EXCEPTION_PREF_ABORT:
> case ARM_EXCEPTION_HVC:
> --
> 2.1.4
>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
* Re: [PATCH v2 27/28] ARM: KVM: Remove handling of ARM_EXCEPTION_DATA/PREF_ABORT
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:39 ` Christoffer Dall
1 sibling, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:39 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm
On Thu, Feb 04, 2016 at 11:00:44AM +0000, Marc Zyngier wrote:
> These are now handled as a panic, so there is little point in
> keeping them around.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/kvm/handle_exit.c | 2 --
> 1 file changed, 2 deletions(-)
>
> diff --git a/arch/arm/kvm/handle_exit.c b/arch/arm/kvm/handle_exit.c
> index 5377d753..3f1ef0d 100644
> --- a/arch/arm/kvm/handle_exit.c
> +++ b/arch/arm/kvm/handle_exit.c
> @@ -147,8 +147,6 @@ int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
> switch (exception_index) {
> case ARM_EXCEPTION_IRQ:
> return 1;
> - case ARM_EXCEPTION_DATA_ABORT:
> - case ARM_EXCEPTION_PREF_ABORT:
> case ARM_EXCEPTION_HVC:
> /*
> * See ARM ARM B1.14.1: "Hyp traps on instructions
> --
> 2.1.4
>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
* Re: [PATCH v2 28/28] ARM: KVM: Remove __kvm_hyp_exit/__kvm_hyp_exit_end
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:39 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:39 UTC (permalink / raw)
To: Marc Zyngier; +Cc: kvm, linux-arm-kernel, kvmarm
On Thu, Feb 04, 2016 at 11:00:45AM +0000, Marc Zyngier wrote:
> I have no idea what these were for - probably a leftover from an
> early implementation. Good bye!
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
* Re: [PATCH v2 01/28] ARM: KVM: Move the HYP code to its own section
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:39 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:39 UTC (permalink / raw)
To: Marc Zyngier; +Cc: kvm, linux-arm-kernel, kvmarm
On Thu, Feb 04, 2016 at 11:00:18AM +0000, Marc Zyngier wrote:
> In order to be able to spread the HYP code into multiple compilation
> units, adopt a layout similar to that of arm64:
> - the HYP text is emitted in its own section (.hyp.text)
> - two linker-generated symbols are used to identify the boundaries
> of that section
>
> No functional change.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
* Re: [PATCH v2 02/28] ARM: KVM: Remove __kvm_hyp_code_start/__kvm_hyp_code_end
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:39 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:39 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm
On Thu, Feb 04, 2016 at 11:00:19AM +0000, Marc Zyngier wrote:
> Now that we've unified the way we refer to the HYP text between
> arm and arm64, drop __kvm_hyp_code_start/end, and just use the
> __hyp_text_start/end symbols.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
* Re: [PATCH v2 03/28] ARM: KVM: Move VFP registers to a CPU context structure
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:42 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:42 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm
On Thu, Feb 04, 2016 at 11:00:20AM +0000, Marc Zyngier wrote:
> In order to turn the WS code into something that looks a bit
> more like the arm64 version, move the VFP registers into a
> CPU context container for both the host and the guest.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
* Re: [PATCH v2 04/28] ARM: KVM: Move CP15 array into the CPU context structure
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:42 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:42 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm
On Thu, Feb 04, 2016 at 11:00:21AM +0000, Marc Zyngier wrote:
> Continuing our rework of the CPU context, we now move the CP15
> array into the CPU context structure. As this causes quite a bit
> of churn, we introduce the vcpu_cp15() macro that abstracts the
> location of the actual array. This will probably help next time
> we have to revisit that code.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
* Re: [PATCH v2 05/28] ARM: KVM: Move GP registers into the CPU context structure
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:42 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:42 UTC (permalink / raw)
To: Marc Zyngier; +Cc: kvm, linux-arm-kernel, kvmarm
On Thu, Feb 04, 2016 at 11:00:22AM +0000, Marc Zyngier wrote:
> Continuing our rework of the CPU context, we now move the GP
> registers into the CPU context structure.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
* Re: [PATCH v2 06/28] ARM: KVM: Add a HYP-specific header file
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:42 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:42 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm
On Thu, Feb 04, 2016 at 11:00:23AM +0000, Marc Zyngier wrote:
> In order to expose the various HYP services that are private to
> the hypervisor, add a new hyp.h file.
>
> So far, it only contains mundane things such as section annotation
> and VA manipulation.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
* Re: [PATCH v2 08/28] ARM: KVM: Add TLB invalidation code
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:42 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:42 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm
On Thu, Feb 04, 2016 at 11:00:25AM +0000, Marc Zyngier wrote:
> Convert the TLB invalidation code to C, hooking it into the
> build system whilst we're at it.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/kvm/Makefile | 1 +
> arch/arm/kvm/hyp/Makefile | 5 ++++
> arch/arm/kvm/hyp/hyp.h | 5 ++++
> arch/arm/kvm/hyp/tlb.c | 71 +++++++++++++++++++++++++++++++++++++++++++++++
> 4 files changed, 82 insertions(+)
> create mode 100644 arch/arm/kvm/hyp/Makefile
> create mode 100644 arch/arm/kvm/hyp/tlb.c
>
> diff --git a/arch/arm/kvm/Makefile b/arch/arm/kvm/Makefile
> index c5eef02c..eb1bf43 100644
> --- a/arch/arm/kvm/Makefile
> +++ b/arch/arm/kvm/Makefile
> @@ -17,6 +17,7 @@ AFLAGS_interrupts.o := -Wa,-march=armv7-a$(plus_virt)
> KVM := ../../../virt/kvm
> kvm-arm-y = $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o $(KVM)/vfio.o
>
> +obj-$(CONFIG_KVM_ARM_HOST) += hyp/
> obj-y += kvm-arm.o init.o interrupts.o
> obj-y += arm.o handle_exit.o guest.o mmu.o emulate.o reset.o
> obj-y += coproc.o coproc_a15.o coproc_a7.o mmio.o psci.o perf.o
> diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
> new file mode 100644
> index 0000000..36c760d
> --- /dev/null
> +++ b/arch/arm/kvm/hyp/Makefile
> @@ -0,0 +1,5 @@
> +#
> +# Makefile for Kernel-based Virtual Machine module, HYP part
> +#
> +
> +obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
> diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
> index 727089f..5808bbd 100644
> --- a/arch/arm/kvm/hyp/hyp.h
> +++ b/arch/arm/kvm/hyp/hyp.h
> @@ -42,4 +42,9 @@
> })
> #define read_sysreg(...) __read_sysreg(__VA_ARGS__)
>
> +#define VTTBR __ACCESS_CP15_64(6, c2)
> +#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
> +#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
> +#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
> +
> #endif /* __ARM_KVM_HYP_H__ */
> diff --git a/arch/arm/kvm/hyp/tlb.c b/arch/arm/kvm/hyp/tlb.c
> new file mode 100644
> index 0000000..993fe89
> --- /dev/null
> +++ b/arch/arm/kvm/hyp/tlb.c
> @@ -0,0 +1,71 @@
> +/*
> + * Original code:
> + * Copyright (C) 2012 - Virtual Open Systems and Columbia University
> + * Author: Christoffer Dall <c.dall@virtualopensystems.com>
> + *
> + * Mostly rewritten in C by Marc Zyngier <marc.zyngier@arm.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program. If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include "hyp.h"
> +
> +/**
> + * Flush per-VMID TLBs
> + *
> + * __kvm_tlb_flush_vmid(struct kvm *kvm);
> + *
> + * We rely on the hardware to broadcast the TLB invalidation to all CPUs
> + * inside the inner-shareable domain (which is the case for all v7
> + * implementations). If we come across a non-IS SMP implementation, we'll
> + * have to use an IPI based mechanism. Until then, we stick to the simple
> + * hardware assisted version.
> + *
> + * As v7 does not support flushing per IPA, just nuke the whole TLB
> + * instead, ignoring the ipa value.
> + */
> +static void __hyp_text __tlb_flush_vmid(struct kvm *kvm)
> +{
> + dsb(ishst);
> +
> + /* Switch to requested VMID */
> + kvm = kern_hyp_va(kvm);
> + write_sysreg(kvm->arch.vttbr, VTTBR);
> + isb();
> +
> + write_sysreg(0, TLBIALLIS);
> + dsb(ish);
> + isb();
> +
> + write_sysreg(0, VTTBR);
> +}
> +
> +__alias(__tlb_flush_vmid) void __weak __kvm_tlb_flush_vmid(struct kvm *kvm);
> +
> +static void __hyp_text __tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
> +{
> + __tlb_flush_vmid(kvm);
> +}
> +
> +__alias(__tlb_flush_vmid_ipa) void __weak __kvm_tlb_flush_vmid_ipa(struct kvm *kvm,
> + phys_addr_t ipa);
> +
> +static void __hyp_text __tlb_flush_vm_context(void)
> +{
> + dsb(ishst);
do we need this initial dsb?
> + write_sysreg(0, TLBIALLNSNHIS);
> + write_sysreg(0, ICIALLUIS);
> + dsb(ish);
we used to have an isb here, but we got rid of this because it's always
followed by eret?
> +}
> +
> +__alias(__tlb_flush_vm_context) void __weak __kvm_flush_vm_context(void);
> --
> 2.1.4
>
* Re: [PATCH v2 09/28] ARM: KVM: Add CP15 save/restore code
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:42 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:42 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm
On Thu, Feb 04, 2016 at 11:00:26AM +0000, Marc Zyngier wrote:
> Convert the CP15 save/restore code to C.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
* Re: [PATCH v2 10/28] ARM: KVM: Add timer save/restore
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:42 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:42 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm
On Thu, Feb 04, 2016 at 11:00:27AM +0000, Marc Zyngier wrote:
> This patch shouldn't exist, as we should be able to reuse the
> arm64 version for free. I'll get there eventually, but in the
> meantime I need a timer ticking.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/kvm/hyp/Makefile | 1 +
> arch/arm/kvm/hyp/hyp.h | 8 +++++
> arch/arm/kvm/hyp/timer-sr.c | 71 +++++++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 80 insertions(+)
> create mode 100644 arch/arm/kvm/hyp/timer-sr.c
>
> diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
> index 9f96fcb..9241ae8 100644
> --- a/arch/arm/kvm/hyp/Makefile
> +++ b/arch/arm/kvm/hyp/Makefile
> @@ -4,3 +4,4 @@
>
> obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
> obj-$(CONFIG_KVM_ARM_HOST) += cp15-sr.o
> +obj-$(CONFIG_KVM_ARM_HOST) += timer-sr.o
> diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
> index ab2cb82..4924418 100644
> --- a/arch/arm/kvm/hyp/hyp.h
> +++ b/arch/arm/kvm/hyp/hyp.h
> @@ -46,6 +46,9 @@
> #define TTBR1 __ACCESS_CP15_64(1, c2)
> #define VTTBR __ACCESS_CP15_64(6, c2)
> #define PAR __ACCESS_CP15_64(0, c7)
> +#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
> +#define CNTVOFF __ACCESS_CP15_64(4, c14)
> +
> #define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
> #define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
> #define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
> @@ -71,6 +74,11 @@
> #define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
> #define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
> #define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
> +#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
> +#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
> +
> +void __timer_save_state(struct kvm_vcpu *vcpu);
> +void __timer_restore_state(struct kvm_vcpu *vcpu);
>
> void __sysreg_save_state(struct kvm_cpu_context *ctxt);
> void __sysreg_restore_state(struct kvm_cpu_context *ctxt);
> diff --git a/arch/arm/kvm/hyp/timer-sr.c b/arch/arm/kvm/hyp/timer-sr.c
> new file mode 100644
> index 0000000..d7535fd
> --- /dev/null
> +++ b/arch/arm/kvm/hyp/timer-sr.c
> @@ -0,0 +1,71 @@
> +/*
> + * Copyright (C) 2012-2015 - ARM Ltd
> + * Author: Marc Zyngier <marc.zyngier@arm.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program. If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <clocksource/arm_arch_timer.h>
> +#include <linux/compiler.h>
> +#include <linux/kvm_host.h>
> +
> +#include <asm/kvm_mmu.h>
> +
> +#include "hyp.h"
> +
> +/* vcpu is already in the HYP VA space */
> +void __hyp_text __timer_save_state(struct kvm_vcpu *vcpu)
> +{
> + struct kvm *kvm = kern_hyp_va(vcpu->kvm);
> + struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
> + u64 val;
> +
> + if (kvm->arch.timer.enabled) {
> + timer->cntv_ctl = read_sysreg(CNTV_CTL);
> + timer->cntv_cval = read_sysreg(CNTV_CVAL);
> + }
> +
> + /* Disable the virtual timer */
> + write_sysreg(0, CNTV_CTL);
> +
> + /* Allow physical timer/counter access for the host */
> + val = read_sysreg(CNTHCTL);
> + val |= CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN;
> + write_sysreg(val, CNTHCTL);
> +
> + /* Clear cntvoff for the host */
> + write_sysreg(0, CNTVOFF);
in the asm version we only did this if the timer was enabled, probably
the theory being that only in that case did we modify the offset. But it
should be safe to just clear the cntvoff in any case, right?
> +}
> +
> +void __hyp_text __timer_restore_state(struct kvm_vcpu *vcpu)
> +{
> + struct kvm *kvm = kern_hyp_va(vcpu->kvm);
> + struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
> + u64 val;
> +
> + /*
> + * Disallow physical timer access for the guest
> + * Physical counter access is allowed
> + */
> + val = read_sysreg(CNTHCTL);
> + val &= ~CNTHCTL_EL1PCEN;
> + val |= CNTHCTL_EL1PCTEN;
> + write_sysreg(val, CNTHCTL);
> +
> + if (kvm->arch.timer.enabled) {
> + write_sysreg(kvm->arch.timer.cntvoff, CNTVOFF);
> + write_sysreg(timer->cntv_cval, CNTV_CVAL);
> + isb();
> + write_sysreg(timer->cntv_ctl, CNTV_CTL);
> + }
> +}
> --
> 2.1.4
>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
* [PATCH v2 10/28] ARM: KVM: Add timer save/restore
@ 2016-02-09 18:42 ` Christoffer Dall
0 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:42 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Feb 04, 2016 at 11:00:27AM +0000, Marc Zyngier wrote:
> This patch shouldn't exist, as we should be able to reuse the
> arm64 version for free. I'll get there eventually, but in the
> meantime I need a timer ticking.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/kvm/hyp/Makefile | 1 +
> arch/arm/kvm/hyp/hyp.h | 8 +++++
> arch/arm/kvm/hyp/timer-sr.c | 71 +++++++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 80 insertions(+)
> create mode 100644 arch/arm/kvm/hyp/timer-sr.c
>
> diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
> index 9f96fcb..9241ae8 100644
> --- a/arch/arm/kvm/hyp/Makefile
> +++ b/arch/arm/kvm/hyp/Makefile
> @@ -4,3 +4,4 @@
>
> obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
> obj-$(CONFIG_KVM_ARM_HOST) += cp15-sr.o
> +obj-$(CONFIG_KVM_ARM_HOST) += timer-sr.o
> diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
> index ab2cb82..4924418 100644
> --- a/arch/arm/kvm/hyp/hyp.h
> +++ b/arch/arm/kvm/hyp/hyp.h
> @@ -46,6 +46,9 @@
> #define TTBR1 __ACCESS_CP15_64(1, c2)
> #define VTTBR __ACCESS_CP15_64(6, c2)
> #define PAR __ACCESS_CP15_64(0, c7)
> +#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
> +#define CNTVOFF __ACCESS_CP15_64(4, c14)
> +
> #define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
> #define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
> #define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
> @@ -71,6 +74,11 @@
> #define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
> #define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
> #define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
> +#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
> +#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
> +
> +void __timer_save_state(struct kvm_vcpu *vcpu);
> +void __timer_restore_state(struct kvm_vcpu *vcpu);
>
> void __sysreg_save_state(struct kvm_cpu_context *ctxt);
> void __sysreg_restore_state(struct kvm_cpu_context *ctxt);
> diff --git a/arch/arm/kvm/hyp/timer-sr.c b/arch/arm/kvm/hyp/timer-sr.c
> new file mode 100644
> index 0000000..d7535fd
> --- /dev/null
> +++ b/arch/arm/kvm/hyp/timer-sr.c
> @@ -0,0 +1,71 @@
> +/*
> + * Copyright (C) 2012-2015 - ARM Ltd
> + * Author: Marc Zyngier <marc.zyngier@arm.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program. If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <clocksource/arm_arch_timer.h>
> +#include <linux/compiler.h>
> +#include <linux/kvm_host.h>
> +
> +#include <asm/kvm_mmu.h>
> +
> +#include "hyp.h"
> +
> +/* vcpu is already in the HYP VA space */
> +void __hyp_text __timer_save_state(struct kvm_vcpu *vcpu)
> +{
> + struct kvm *kvm = kern_hyp_va(vcpu->kvm);
> + struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
> + u64 val;
> +
> + if (kvm->arch.timer.enabled) {
> + timer->cntv_ctl = read_sysreg(CNTV_CTL);
> + timer->cntv_cval = read_sysreg(CNTV_CVAL);
> + }
> +
> + /* Disable the virtual timer */
> + write_sysreg(0, CNTV_CTL);
> +
> + /* Allow physical timer/counter access for the host */
> + val = read_sysreg(CNTHCTL);
> + val |= CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN;
> + write_sysreg(val, CNTHCTL);
> +
> + /* Clear cntvoff for the host */
> + write_sysreg(0, CNTVOFF);
in the asm version we only did this if the timer was enabled, probably
the theory being that only in that case did we mody the offset. But it
should be safe to just clear the cntvoff in any case, right?
> +}
> +
> +void __hyp_text __timer_restore_state(struct kvm_vcpu *vcpu)
> +{
> + struct kvm *kvm = kern_hyp_va(vcpu->kvm);
> + struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
> + u64 val;
> +
> + /*
> + * Disallow physical timer access for the guest
> + * Physical counter access is allowed
> + */
> + val = read_sysreg(CNTHCTL);
> + val &= ~CNTHCTL_EL1PCEN;
> + val |= CNTHCTL_EL1PCTEN;
> + write_sysreg(val, CNTHCTL);
> +
> + if (kvm->arch.timer.enabled) {
> + write_sysreg(kvm->arch.timer.cntvoff, CNTVOFF);
> + write_sysreg(timer->cntv_cval, CNTV_CVAL);
> + isb();
> + write_sysreg(timer->cntv_ctl, CNTV_CTL);
> + }
> +}
> --
> 2.1.4
>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
* Re: [PATCH v2 11/28] ARM: KVM: Add vgic v2 save/restore
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:42 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:42 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm
On Thu, Feb 04, 2016 at 11:00:28AM +0000, Marc Zyngier wrote:
> This patch shouldn't exist, as we should be able to reuse the
> arm64 version for free. I'll get there eventually, but in the
> meantime I need an interrupt controller.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
* Re: [PATCH v2 12/28] ARM: KVM: Add VFP save/restore
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:42 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:42 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm
On Thu, Feb 04, 2016 at 11:00:29AM +0000, Marc Zyngier wrote:
> This is almost a copy/paste of the existing version, with a couple
> of subtle differences:
> - Only write to FPEXC once on the save path
> - Add an isb when enabling VFP access
>
> The patch also defines a few sysreg accessors and a __vfp_enabled
> predicate that tests the VFP trapping state.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
* Re: [PATCH v2 13/28] ARM: KVM: Add banked registers save/restore
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:42 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:42 UTC (permalink / raw)
To: Marc Zyngier; +Cc: kvm, linux-arm-kernel, kvmarm
On Thu, Feb 04, 2016 at 11:00:30AM +0000, Marc Zyngier wrote:
> Banked registers are one of the many perks of the 32bit architecture,
> and the world switch needs to cope with them.
>
> This requires some "special" accessors, as these are not accessed
> using a standard coprocessor instruction.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
* Re: [PATCH v2 14/28] ARM: KVM: Add guest entry code
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:44 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:44 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm
On Thu, Feb 04, 2016 at 11:00:31AM +0000, Marc Zyngier wrote:
> Add the very minimal piece of code that is now required to jump
> into the guest (and return from it). This code is only concerned
> with saving/restoring the USR registers (r0-r12+lr for the guest,
> r4-r12+lr for the host), as everything else is dealt with in C
> (VFP is another matter though).
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/kvm/hyp/Makefile | 1 +
> arch/arm/kvm/hyp/entry.S | 70 +++++++++++++++++++++++++++++++++++++++++++++++
> arch/arm/kvm/hyp/hyp.h | 2 ++
> 3 files changed, 73 insertions(+)
> create mode 100644 arch/arm/kvm/hyp/entry.S
>
> diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
> index 173bd1d..c779690 100644
> --- a/arch/arm/kvm/hyp/Makefile
> +++ b/arch/arm/kvm/hyp/Makefile
> @@ -8,3 +8,4 @@ obj-$(CONFIG_KVM_ARM_HOST) += timer-sr.o
> obj-$(CONFIG_KVM_ARM_HOST) += vgic-v2-sr.o
> obj-$(CONFIG_KVM_ARM_HOST) += vfp.o
> obj-$(CONFIG_KVM_ARM_HOST) += banked-sr.o
> +obj-$(CONFIG_KVM_ARM_HOST) += entry.o
> diff --git a/arch/arm/kvm/hyp/entry.S b/arch/arm/kvm/hyp/entry.S
> new file mode 100644
> index 0000000..32f79b0
> --- /dev/null
> +++ b/arch/arm/kvm/hyp/entry.S
> @@ -0,0 +1,70 @@
> +/*
> + * Copyright (C) 2016 - ARM Ltd
> + * Author: Marc Zyngier <marc.zyngier@arm.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program. If not, see <http://www.gnu.org/licenses/>.
> +*/
> +
> +#include <linux/linkage.h>
> +#include <asm/asm-offsets.h>
> +#include <asm/kvm_arm.h>
> +
> + .arch_extension virt
> +
> + .text
> + .pushsection .hyp.text, "ax"
> +
> +#define USR_REGS_OFFSET (CPU_CTXT_GP_REGS + GP_REGS_USR)
> +
> +/* int __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host) */
> +ENTRY(__guest_enter)
> + @ Save host registers
> + add r1, r1, #(USR_REGS_OFFSET + S_R4)
> + stm r1!, {r4-r12}
> + str lr, [r1, #4] @ Skip SP_usr (already saved)
> +
> + @ Restore guest registers
> + add r0, r0, #(VCPU_GUEST_CTXT + USR_REGS_OFFSET + S_R0)
this really relies on offsetof(struct pt_regs, ARM_r0) == 0, which I
guess will likely never change, but given there's both a kernel and uapi
version of struct pt_regs, are we sure about this?
> + ldr lr, [r0, #S_LR]
> + ldm r0, {r0-r12}
> +
> + clrex
> + eret
> +ENDPROC(__guest_enter)
> +
> +ENTRY(__guest_exit)
> + /*
> + * return convention:
> + * guest r0, r1, r2 saved on the stack
> + * r0: vcpu pointer
> + * r1: exception code
> + */
> +
> + add r2, r0, #(VCPU_GUEST_CTXT + USR_REGS_OFFSET + S_R3)
> + stm r2!, {r3-r12}
> + str lr, [r2, #4]
> + add r2, r0, #(VCPU_GUEST_CTXT + USR_REGS_OFFSET + S_R0)
> + pop {r3, r4, r5} @ r0, r1, r2
> + stm r2, {r3-r5}
> +
> + ldr r0, [r0, #VCPU_HOST_CTXT]
> + add r0, r0, #(USR_REGS_OFFSET + S_R4)
> + ldm r0!, {r4-r12}
> + ldr lr, [r0, #4]
> +
> + mov r0, r1
> + bx lr
> +ENDPROC(__guest_exit)
> +
> + .popsection
> +
> diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
> index 278eb1f..b3f6ed2 100644
> --- a/arch/arm/kvm/hyp/hyp.h
> +++ b/arch/arm/kvm/hyp/hyp.h
> @@ -110,4 +110,6 @@ static inline bool __vfp_enabled(void)
> void __hyp_text __banked_save_state(struct kvm_cpu_context *ctxt);
> void __hyp_text __banked_restore_state(struct kvm_cpu_context *ctxt);
>
> +int asmlinkage __guest_enter(struct kvm_vcpu *vcpu,
> + struct kvm_cpu_context *host);
> #endif /* __ARM_KVM_HYP_H__ */
> --
> 2.1.4
>
Otherwise:
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
* Re: [PATCH v2 15/28] ARM: KVM: Add VFP lazy save/restore handler
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:44 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:44 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm
On Thu, Feb 04, 2016 at 11:00:32AM +0000, Marc Zyngier wrote:
> Similar to the arm64 version, add the code that deals with VFP traps,
> re-enabling VFP, save/restoring the registers and resuming the guest.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
* Re: [PATCH v2 16/28] ARM: KVM: Add the new world switch implementation
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:44 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:44 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm
On Thu, Feb 04, 2016 at 11:00:33AM +0000, Marc Zyngier wrote:
> The new world switch implementation is modeled after the arm64 one,
> calling the various save/restore functions in turn, and having as
> little state as possible.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/kvm/hyp/Makefile | 1 +
> arch/arm/kvm/hyp/hyp.h | 7 +++
> arch/arm/kvm/hyp/switch.c | 136 ++++++++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 144 insertions(+)
> create mode 100644 arch/arm/kvm/hyp/switch.c
>
> diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
> index c779690..cfab402 100644
> --- a/arch/arm/kvm/hyp/Makefile
> +++ b/arch/arm/kvm/hyp/Makefile
> @@ -9,3 +9,4 @@ obj-$(CONFIG_KVM_ARM_HOST) += vgic-v2-sr.o
> obj-$(CONFIG_KVM_ARM_HOST) += vfp.o
> obj-$(CONFIG_KVM_ARM_HOST) += banked-sr.o
> obj-$(CONFIG_KVM_ARM_HOST) += entry.o
> +obj-$(CONFIG_KVM_ARM_HOST) += switch.o
> diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
> index b3f6ed2..2ca651f 100644
> --- a/arch/arm/kvm/hyp/hyp.h
> +++ b/arch/arm/kvm/hyp/hyp.h
> @@ -60,11 +60,16 @@
> #define CNTV_CVAL __ACCESS_CP15_64(3, c14)
> #define CNTVOFF __ACCESS_CP15_64(4, c14)
>
> +#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
> #define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
> +#define VMIDR __ACCESS_CP15(c0, 4, c0, 0)
Nit: This is called VPIDR in v7 and VPIDR_EL2 in v8 IIUC. Should we
refer to it by one of those names instead?
> #define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
> #define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
> #define CPACR __ACCESS_CP15(c1, 0, c0, 2)
> +#define HCR __ACCESS_CP15(c1, 4, c1, 0)
> +#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
> #define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
> +#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
> #define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
> #define DACR __ACCESS_CP15(c3, 0, c0, 0)
> #define DFSR __ACCESS_CP15(c5, 0, c0, 0)
> @@ -73,6 +78,7 @@
> #define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
> #define DFAR __ACCESS_CP15(c6, 0, c0, 0)
> #define IFAR __ACCESS_CP15(c6, 0, c0, 2)
> +#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
> #define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
> #define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
> #define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
> @@ -85,6 +91,7 @@
> #define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
> #define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
> #define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
> +#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
> #define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
> #define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
> #define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
> diff --git a/arch/arm/kvm/hyp/switch.c b/arch/arm/kvm/hyp/switch.c
> new file mode 100644
> index 0000000..f715b0d
> --- /dev/null
> +++ b/arch/arm/kvm/hyp/switch.c
> @@ -0,0 +1,136 @@
> +/*
> + * Copyright (C) 2015 - ARM Ltd
> + * Author: Marc Zyngier <marc.zyngier@arm.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program. If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <asm/kvm_asm.h>
> +#include "hyp.h"
> +
> +__asm__(".arch_extension virt");
> +
> +static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu, u32 *fpexc)
Nit: can you document the fpexc or name it host_fpexc_preserve or something?
> +{
> + u32 val;
> +
> + /*
> + * We are about to set HCPTR.TCP10/11 to trap all floating point
> + * register accesses to HYP, however, the ARM ARM clearly states that
> + * traps are only taken to HYP if the operation would not otherwise
> + * trap to SVC. Therefore, always make sure that for 32-bit guests,
> + * we set FPEXC.EN to prevent traps to SVC, when setting the TCP bits.
> + */
> + val = read_sysreg(VFP_FPEXC);
> + *fpexc = val;
> + if (!(val & FPEXC_EN)) {
> + write_sysreg(val | FPEXC_EN, VFP_FPEXC);
> + isb();
> + }
> +
> + write_sysreg(vcpu->arch.hcr | vcpu->arch.irq_lines, HCR);
> + /* Trap on AArch32 cp15 c15 accesses (EL1 or EL0) */
> + write_sysreg(HSTR_T(15), HSTR);
> + write_sysreg(HCPTR_TTA | HCPTR_TCP(10) | HCPTR_TCP(11), HCPTR);
> + val = read_sysreg(HDCR);
> + write_sysreg(val | HDCR_TPM | HDCR_TPMCR, HDCR);
> +}
> +
> +static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
> +{
> + u32 val;
> +
> + write_sysreg(0, HCR);
> + write_sysreg(0, HSTR);
> + val = read_sysreg(HDCR);
> + write_sysreg(val & ~(HDCR_TPM | HDCR_TPMCR), HDCR);
> + write_sysreg(0, HCPTR);
> +}
> +
> +static void __hyp_text __activate_vm(struct kvm_vcpu *vcpu)
> +{
> + struct kvm *kvm = kern_hyp_va(vcpu->kvm);
> + write_sysreg(kvm->arch.vttbr, VTTBR);
> + write_sysreg(vcpu->arch.midr, VMIDR);
> +}
> +
> +static void __hyp_text __deactivate_vm(struct kvm_vcpu *vcpu)
> +{
> + write_sysreg(0, VTTBR);
> + write_sysreg(read_sysreg(MIDR), VMIDR);
> +}
> +
> +static void __hyp_text __vgic_save_state(struct kvm_vcpu *vcpu)
> +{
> + __vgic_v2_save_state(vcpu);
> +}
> +
> +static void __hyp_text __vgic_restore_state(struct kvm_vcpu *vcpu)
> +{
> + __vgic_v2_restore_state(vcpu);
> +}
> +
> +static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
> +{
> + struct kvm_cpu_context *host_ctxt;
> + struct kvm_cpu_context *guest_ctxt;
> + bool fp_enabled;
> + u64 exit_code;
> + u32 fpexc;
> +
> + vcpu = kern_hyp_va(vcpu);
> + write_sysreg(vcpu, HTPIDR);
> +
> + host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
> + guest_ctxt = &vcpu->arch.ctxt;
> +
> + __sysreg_save_state(host_ctxt);
> + __banked_save_state(host_ctxt);
> +
> + __activate_traps(vcpu, &fpexc);
> + __activate_vm(vcpu);
> +
> + __vgic_restore_state(vcpu);
> + __timer_restore_state(vcpu);
> +
> + __sysreg_restore_state(guest_ctxt);
> + __banked_restore_state(guest_ctxt);
> +
> + /* Jump in the fire! */
> + exit_code = __guest_enter(vcpu, host_ctxt);
> + /* And we're baaack! */
> +
> + fp_enabled = __vfp_enabled();
> +
> + __banked_save_state(guest_ctxt);
> + __sysreg_save_state(guest_ctxt);
> + __timer_save_state(vcpu);
> + __vgic_save_state(vcpu);
> +
> + __deactivate_traps(vcpu);
> + __deactivate_vm(vcpu);
> +
> + __banked_restore_state(host_ctxt);
> + __sysreg_restore_state(host_ctxt);
> +
> + if (fp_enabled) {
> + __vfp_save_state(&guest_ctxt->vfp);
> + __vfp_restore_state(&host_ctxt->vfp);
> + }
> +
> + write_sysreg(fpexc, VFP_FPEXC);
> +
> + return exit_code;
> +}
> +
> +__alias(__guest_run) int __weak __kvm_vcpu_run(struct kvm_vcpu *vcpu);
> --
> 2.1.4
>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
* Re: [PATCH v2 17/28] ARM: KVM: Add populating of fault data structure
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:44 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:44 UTC (permalink / raw)
To: Marc Zyngier; +Cc: kvm, linux-arm-kernel, kvmarm
On Thu, Feb 04, 2016 at 11:00:34AM +0000, Marc Zyngier wrote:
> On guest exit, we must take care of populating our fault data
> structure so that the host code can handle it. This includes
> resolving the IPA for permission faults, which can result in
> restarting the guest.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
* Re: [PATCH v2 19/28] ARM: KVM: Add panic handling code
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:45 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:45 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm
On Thu, Feb 04, 2016 at 11:00:36AM +0000, Marc Zyngier wrote:
> Instead of spinning forever, let's "properly" handle any unexpected
> exception ("properly" meaning "print a splat on the console and die").
>
> This has proved useful quite a few times...
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/kvm/hyp/hyp-entry.S | 28 +++++++++++++++++++++-------
> arch/arm/kvm/hyp/switch.c | 38 ++++++++++++++++++++++++++++++++++++++
> 2 files changed, 59 insertions(+), 7 deletions(-)
>
> diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
> index 44bc11f..ca412ad 100644
> --- a/arch/arm/kvm/hyp/hyp-entry.S
> +++ b/arch/arm/kvm/hyp/hyp-entry.S
> @@ -75,15 +75,29 @@ __kvm_hyp_vector:
>
> .macro invalid_vector label, cause
> .align
> -\label: b .
> +\label: mov r0, #\cause
> + b __hyp_panic
> .endm
>
> - invalid_vector hyp_reset
> - invalid_vector hyp_undef
> - invalid_vector hyp_svc
> - invalid_vector hyp_pabt
> - invalid_vector hyp_dabt
> - invalid_vector hyp_fiq
> + invalid_vector hyp_reset ARM_EXCEPTION_RESET
> + invalid_vector hyp_undef ARM_EXCEPTION_UNDEFINED
> + invalid_vector hyp_svc ARM_EXCEPTION_SOFTWARE
> + invalid_vector hyp_pabt ARM_EXCEPTION_PREF_ABORT
> + invalid_vector hyp_dabt ARM_EXCEPTION_DATA_ABORT
> + invalid_vector hyp_fiq ARM_EXCEPTION_FIQ
> +
> +ENTRY(__hyp_do_panic)
> + mrs lr, cpsr
> + bic lr, lr, #MODE_MASK
> + orr lr, lr, #SVC_MODE
> +THUMB( orr lr, lr, #PSR_T_BIT )
> + msr spsr_cxsf, lr
> + ldr lr, =panic
> + msr ELR_hyp, lr
> + ldr lr, =kvm_call_hyp
> + clrex
> + eret
> +ENDPROC(__hyp_do_panic)
>
> hyp_hvc:
> /*
> diff --git a/arch/arm/kvm/hyp/switch.c b/arch/arm/kvm/hyp/switch.c
> index 8bfd729..67f3944 100644
> --- a/arch/arm/kvm/hyp/switch.c
> +++ b/arch/arm/kvm/hyp/switch.c
> @@ -188,3 +188,41 @@ again:
> }
>
> __alias(__guest_run) int __weak __kvm_vcpu_run(struct kvm_vcpu *vcpu);
> +
> +static const char * const __hyp_panic_string[] = {
> + [ARM_EXCEPTION_RESET] = "\nHYP panic: RST?? PC:%08x CPSR:%08x",
> + [ARM_EXCEPTION_UNDEFINED] = "\nHYP panic: UNDEF PC:%08x CPSR:%08x",
> + [ARM_EXCEPTION_SOFTWARE] = "\nHYP panic: SVC?? PC:%08x CPSR:%08x",
> + [ARM_EXCEPTION_PREF_ABORT] = "\nHYP panic: PABRT PC:%08x CPSR:%08x",
> + [ARM_EXCEPTION_DATA_ABORT] = "\nHYP panic: DABRT PC:%08x ADDR:%08x",
> + [ARM_EXCEPTION_IRQ] = "\nHYP panic: IRQ?? PC:%08x CPSR:%08x",
> + [ARM_EXCEPTION_FIQ] = "\nHYP panic: FIQ?? PC:%08x CPSR:%08x",
> + [ARM_EXCEPTION_HVC] = "\nHYP panic: HVC?? PC:%08x CPSR:%08x",
Why the question marks?
> +};
> +
> +void __hyp_text __noreturn __hyp_panic(int cause)
> +{
> + u32 elr = read_special(ELR_hyp);
> + u32 val;
> +
> + if (cause == ARM_EXCEPTION_DATA_ABORT)
> + val = read_sysreg(HDFAR);
> + else
> + val = read_special(SPSR);
> +
> + if (read_sysreg(VTTBR)) {
> + struct kvm_vcpu *vcpu;
> + struct kvm_cpu_context *host_ctxt;
> +
> + vcpu = (struct kvm_vcpu *)read_sysreg(HTPIDR);
> + host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
> + __deactivate_traps(vcpu);
> + __deactivate_vm(vcpu);
> + __sysreg_restore_state(host_ctxt);
> + }
> +
> + /* Call panic for real */
> + __hyp_do_panic(__hyp_panic_string[cause], elr, val);
> +
> + unreachable();
> +}
> --
> 2.1.4
>
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
Otherwise:
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
^ permalink raw reply [flat|nested] 138+ messages in thread
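The panic path in this patch is essentially a cause-indexed table lookup feeding a printf-style format. A minimal userspace sketch of that pattern follows; the enum values and names here are hypothetical stand-ins for the kernel's ARM_EXCEPTION_* codes, not the real definitions:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical cause codes standing in for ARM_EXCEPTION_* */
enum { EXC_RESET, EXC_UNDEFINED, EXC_DATA_ABORT, EXC_MAX };

/* Designated initializers keep the table in sync with the cause codes */
static const char *const panic_string[] = {
	[EXC_RESET]      = "HYP panic: RST?? PC:%08x CPSR:%08x",
	[EXC_UNDEFINED]  = "HYP panic: UNDEF PC:%08x CPSR:%08x",
	[EXC_DATA_ABORT] = "HYP panic: DABRT PC:%08x ADDR:%08x",
};

/* Format the panic message for a given cause; returns bytes written.
 * For a data abort the second value is the faulting address (HDFAR in
 * the patch), otherwise the saved CPSR. */
static int format_panic(char *buf, size_t len, int cause,
			unsigned int pc, unsigned int val)
{
	return snprintf(buf, len, panic_string[cause], pc, val);
}
```

Indexing by cause means adding a new vector only requires one new table entry and one new `invalid_vector` line, which is the structure the patch sets up.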
* Re: [PATCH v2 21/28] ARM: KVM: Remove the old world switch
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:45 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:45 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm
On Thu, Feb 04, 2016 at 11:00:38AM +0000, Marc Zyngier wrote:
> As we now have a full reimplementation of the world switch, it is
> time to kiss the old stuff goodbye. I'm not sure we'll miss it.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v2 22/28] ARM: KVM: Switch to C-based stage2 init
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:45 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:45 UTC (permalink / raw)
To: Marc Zyngier; +Cc: kvm, linux-arm-kernel, kvmarm
Missing (sarcastic) commit message?
On Thu, Feb 04, 2016 at 11:00:39AM +0000, Marc Zyngier wrote:
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v2 23/28] ARM: KVM: Remove __weak attributes
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:45 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:45 UTC (permalink / raw)
To: Marc Zyngier; +Cc: kvm, linux-arm-kernel, kvmarm
On Thu, Feb 04, 2016 at 11:00:40AM +0000, Marc Zyngier wrote:
> Now that the old code is long gone, we can remove all the weak
> attributes, as there is only one version of the code.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v2 24/28] ARM: KVM: Turn CP15 defines to an enum
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:45 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:45 UTC (permalink / raw)
To: Marc Zyngier; +Cc: kvm, linux-arm-kernel, kvmarm
On Thu, Feb 04, 2016 at 11:00:41AM +0000, Marc Zyngier wrote:
> Just like on arm64, having the CP15 registers expressed as a set
> of #defines has been very conflict-prone. Let's turn it into an
> enum, which should make it more manageable.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v2 25/28] ARM: KVM: Cleanup asm-offsets.c
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:45 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:45 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm
On Thu, Feb 04, 2016 at 11:00:42AM +0000, Marc Zyngier wrote:
> Since we don't have much assembler left, most of the KVM stuff
> in asm-offsets.c is now superfluous. Let's get rid of it.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v2 00/28] ARM: KVM: Rewrite the world switch in C (mostly)
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-09 18:49 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-09 18:49 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm
On Thu, Feb 04, 2016 at 11:00:17AM +0000, Marc Zyngier wrote:
> Now that the arm64 rewrite is in mainline, I've taken a stab at fixing
> the 32bit code the same way. This is fairly straightforward (once
> you've been through it once...), with a few patches that adapt the
> code to be similar to the 64bit version.
>
> Note that the timer and GIC code should be made common between the two
> architectures, as this is literally the exact same code (I've posted
> some proof of concept for that a while ago, see
> http://www.spinics.net/lists/kvm/msg126775.html).
>
> This has been tested on a Dual A7, and the code is pushed on a branch:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git kvm-arm/wsinc
>
This looks good overall.
I tested it briefly on TC2 and it seems generally happy.
-Christoffer
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v2 22/28] ARM: KVM: Switch to C-based stage2 init
2016-02-09 18:45 ` Christoffer Dall
@ 2016-02-10 7:42 ` Marc Zyngier
-1 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-10 7:42 UTC (permalink / raw)
To: Christoffer Dall; +Cc: linux-arm-kernel, kvm, kvmarm
On Tue, 9 Feb 2016 19:45:13 +0100
Christoffer Dall <christoffer.dall@linaro.org> wrote:
> Missing (sarcastic) commit message?
Ah! I'll try to think of something along the lines of:
"Let's now retire the code that bravely served us for three
years, and enjoy a brand new set of bugs. At least, we can blame the
compiler this time."
> On Thu, Feb 04, 2016 at 11:00:39AM +0000, Marc Zyngier wrote:
> > Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> > ---
>
> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
>
Thanks,
M.
--
Jazz is not dead. It just smells funny.
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v2 22/28] ARM: KVM: Switch to C-based stage2 init
2016-02-10 7:42 ` Marc Zyngier
@ 2016-02-10 8:04 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-10 8:04 UTC (permalink / raw)
To: Marc Zyngier; +Cc: kvm, linux-arm-kernel, kvmarm
On Wed, Feb 10, 2016 at 07:42:47AM +0000, Marc Zyngier wrote:
> On Tue, 9 Feb 2016 19:45:13 +0100
> Christoffer Dall <christoffer.dall@linaro.org> wrote:
>
> > Missing (sarcastic) commit message?
>
> Ah! I'll try to think of something along the lines of:
>
> "Let's now retire the code that bravely served us for three
> years, and enjoy a brand new set of bugs. At least, we can blame the
> compiler this time."
>
You are the master of those ;)
-Christoffer
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v2 08/28] ARM: KVM: Add TLB invalidation code
2016-02-09 18:42 ` Christoffer Dall
@ 2016-02-10 15:32 ` Marc Zyngier
-1 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-10 15:32 UTC (permalink / raw)
To: Christoffer Dall; +Cc: linux-arm-kernel, kvm, kvmarm
On 09/02/16 18:42, Christoffer Dall wrote:
> On Thu, Feb 04, 2016 at 11:00:25AM +0000, Marc Zyngier wrote:
>> Convert the TLB invalidation code to C, hooking it into the
>> build system whilst we're at it.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>> arch/arm/kvm/Makefile | 1 +
>> arch/arm/kvm/hyp/Makefile | 5 ++++
>> arch/arm/kvm/hyp/hyp.h | 5 ++++
>> arch/arm/kvm/hyp/tlb.c | 71 +++++++++++++++++++++++++++++++++++++++++++++++
>> 4 files changed, 82 insertions(+)
>> create mode 100644 arch/arm/kvm/hyp/Makefile
>> create mode 100644 arch/arm/kvm/hyp/tlb.c
>>
>> diff --git a/arch/arm/kvm/Makefile b/arch/arm/kvm/Makefile
>> index c5eef02c..eb1bf43 100644
>> --- a/arch/arm/kvm/Makefile
>> +++ b/arch/arm/kvm/Makefile
>> @@ -17,6 +17,7 @@ AFLAGS_interrupts.o := -Wa,-march=armv7-a$(plus_virt)
>> KVM := ../../../virt/kvm
>> kvm-arm-y = $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o $(KVM)/vfio.o
>>
>> +obj-$(CONFIG_KVM_ARM_HOST) += hyp/
>> obj-y += kvm-arm.o init.o interrupts.o
>> obj-y += arm.o handle_exit.o guest.o mmu.o emulate.o reset.o
>> obj-y += coproc.o coproc_a15.o coproc_a7.o mmio.o psci.o perf.o
>> diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
>> new file mode 100644
>> index 0000000..36c760d
>> --- /dev/null
>> +++ b/arch/arm/kvm/hyp/Makefile
>> @@ -0,0 +1,5 @@
>> +#
>> +# Makefile for Kernel-based Virtual Machine module, HYP part
>> +#
>> +
>> +obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
>> diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
>> index 727089f..5808bbd 100644
>> --- a/arch/arm/kvm/hyp/hyp.h
>> +++ b/arch/arm/kvm/hyp/hyp.h
>> @@ -42,4 +42,9 @@
>> })
>> #define read_sysreg(...) __read_sysreg(__VA_ARGS__)
>>
>> +#define VTTBR __ACCESS_CP15_64(6, c2)
>> +#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
>> +#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
>> +#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
>> +
>> #endif /* __ARM_KVM_HYP_H__ */
>> diff --git a/arch/arm/kvm/hyp/tlb.c b/arch/arm/kvm/hyp/tlb.c
>> new file mode 100644
>> index 0000000..993fe89
>> --- /dev/null
>> +++ b/arch/arm/kvm/hyp/tlb.c
>> @@ -0,0 +1,71 @@
>> +/*
>> + * Original code:
>> + * Copyright (C) 2012 - Virtual Open Systems and Columbia University
>> + * Author: Christoffer Dall <c.dall@virtualopensystems.com>
>> + *
>> + * Mostly rewritten in C by Marc Zyngier <marc.zyngier@arm.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program. If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#include "hyp.h"
>> +
>> +/**
>> + * Flush per-VMID TLBs
>> + *
>> + * __kvm_tlb_flush_vmid(struct kvm *kvm);
>> + *
>> + * We rely on the hardware to broadcast the TLB invalidation to all CPUs
>> + * inside the inner-shareable domain (which is the case for all v7
>> + * implementations). If we come across a non-IS SMP implementation, we'll
>> + * have to use an IPI based mechanism. Until then, we stick to the simple
>> + * hardware assisted version.
>> + *
>> + * As v7 does not support flushing per IPA, just nuke the whole TLB
>> + * instead, ignoring the ipa value.
>> + */
>> +static void __hyp_text __tlb_flush_vmid(struct kvm *kvm)
>> +{
>> + dsb(ishst);
>> +
>> + /* Switch to requested VMID */
>> + kvm = kern_hyp_va(kvm);
>> + write_sysreg(kvm->arch.vttbr, VTTBR);
>> + isb();
>> +
>> + write_sysreg(0, TLBIALLIS);
>> + dsb(ish);
>> + isb();
>> +
>> + write_sysreg(0, VTTBR);
>> +}
>> +
>> +__alias(__tlb_flush_vmid) void __weak __kvm_tlb_flush_vmid(struct kvm *kvm);
>> +
>> +static void __hyp_text __tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
>> +{
>> + __tlb_flush_vmid(kvm);
>> +}
>> +
>> +__alias(__tlb_flush_vmid_ipa) void __weak __kvm_tlb_flush_vmid_ipa(struct kvm *kvm,
>> + phys_addr_t ipa);
>> +
>> +static void __hyp_text __tlb_flush_vm_context(void)
>> +{
>> + dsb(ishst);
>
> do we need this initial dsb?
I'm a copy-paste muppet. I'll drop that.
>> + write_sysreg(0, TLBIALLNSNHIS);
>> + write_sysreg(0, ICIALLUIS);
>> + dsb(ish);
>
> we used to have an isb here, but we got rid of this because it's always
> followed by eret?
Indeed. We were super extra cautious in the old code, and eret does the
right thing.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
^ permalink raw reply [flat|nested] 138+ messages in thread
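The `__alias(...) ... __weak` idiom in this patch exports the static HYP-section function under its public `__kvm_*` name while still allowing another definition to override it. A compiler-level sketch of the mechanism (GCC/Clang on ELF targets; the function body is a trivial stand-in for the real invalidation sequence, and the names are simplified):

```c
/* Static implementation, as in the hyp code; the body here is a
 * placeholder that just echoes its argument so the aliasing is testable. */
static int __tlb_flush_vmid(int vmid)
{
	return vmid;
}

/* Exported weak alias, mirroring the kernel's
 *   __alias(__tlb_flush_vmid) void __weak __kvm_tlb_flush_vmid(...);
 * The alias attribute binds the public symbol to the static one,
 * and weak lets a strong definition elsewhere win at link time. */
int kvm_tlb_flush_vmid(int vmid)
	__attribute__((weak, alias("__tlb_flush_vmid")));
```

This is why the series can later drop the `__weak` attributes wholesale (patch 23/28): once the old world switch is gone, nothing needs to override these symbols any more.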
* Re: [PATCH v2 10/28] ARM: KVM: Add timer save/restore
2016-02-09 18:42 ` Christoffer Dall
@ 2016-02-10 15:36 ` Marc Zyngier
-1 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-10 15:36 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
On 09/02/16 18:42, Christoffer Dall wrote:
> On Thu, Feb 04, 2016 at 11:00:27AM +0000, Marc Zyngier wrote:
>> This patch shouldn't exist, as we should be able to reuse the
>> arm64 version for free. I'll get there eventually, but in the
>> meantime I need a timer ticking.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>> arch/arm/kvm/hyp/Makefile | 1 +
>> arch/arm/kvm/hyp/hyp.h | 8 +++++
>> arch/arm/kvm/hyp/timer-sr.c | 71 +++++++++++++++++++++++++++++++++++++++++++++
>> 3 files changed, 80 insertions(+)
>> create mode 100644 arch/arm/kvm/hyp/timer-sr.c
>>
>> diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
>> index 9f96fcb..9241ae8 100644
>> --- a/arch/arm/kvm/hyp/Makefile
>> +++ b/arch/arm/kvm/hyp/Makefile
>> @@ -4,3 +4,4 @@
>>
>> obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
>> obj-$(CONFIG_KVM_ARM_HOST) += cp15-sr.o
>> +obj-$(CONFIG_KVM_ARM_HOST) += timer-sr.o
>> diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
>> index ab2cb82..4924418 100644
>> --- a/arch/arm/kvm/hyp/hyp.h
>> +++ b/arch/arm/kvm/hyp/hyp.h
>> @@ -46,6 +46,9 @@
>> #define TTBR1 __ACCESS_CP15_64(1, c2)
>> #define VTTBR __ACCESS_CP15_64(6, c2)
>> #define PAR __ACCESS_CP15_64(0, c7)
>> +#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
>> +#define CNTVOFF __ACCESS_CP15_64(4, c14)
>> +
>> #define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
>> #define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
>> #define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
>> @@ -71,6 +74,11 @@
>> #define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
>> #define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
>> #define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
>> +#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
>> +#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
>> +
>> +void __timer_save_state(struct kvm_vcpu *vcpu);
>> +void __timer_restore_state(struct kvm_vcpu *vcpu);
>>
>> void __sysreg_save_state(struct kvm_cpu_context *ctxt);
>> void __sysreg_restore_state(struct kvm_cpu_context *ctxt);
>> diff --git a/arch/arm/kvm/hyp/timer-sr.c b/arch/arm/kvm/hyp/timer-sr.c
>> new file mode 100644
>> index 0000000..d7535fd
>> --- /dev/null
>> +++ b/arch/arm/kvm/hyp/timer-sr.c
>> @@ -0,0 +1,71 @@
>> +/*
>> + * Copyright (C) 2012-2015 - ARM Ltd
>> + * Author: Marc Zyngier <marc.zyngier@arm.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program. If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#include <clocksource/arm_arch_timer.h>
>> +#include <linux/compiler.h>
>> +#include <linux/kvm_host.h>
>> +
>> +#include <asm/kvm_mmu.h>
>> +
>> +#include "hyp.h"
>> +
>> +/* vcpu is already in the HYP VA space */
>> +void __hyp_text __timer_save_state(struct kvm_vcpu *vcpu)
>> +{
>> + struct kvm *kvm = kern_hyp_va(vcpu->kvm);
>> + struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
>> + u64 val;
>> +
>> + if (kvm->arch.timer.enabled) {
>> + timer->cntv_ctl = read_sysreg(CNTV_CTL);
>> + timer->cntv_cval = read_sysreg(CNTV_CVAL);
>> + }
>> +
>> + /* Disable the virtual timer */
>> + write_sysreg(0, CNTV_CTL);
>> +
>> + /* Allow physical timer/counter access for the host */
>> + val = read_sysreg(CNTHCTL);
>> + val |= CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN;
>> + write_sysreg(val, CNTHCTL);
>> +
>> + /* Clear cntvoff for the host */
>> + write_sysreg(0, CNTVOFF);
>
> in the asm version we only did this if the timer was enabled, probably
> the theory being that only in that case did we modify the offset. But it
> should be safe to just clear the cntvoff in any case, right?
It is indeed perfectly safe. I've copied the arm64 code into the 32bit
tree, so we get this cntvoff reset (arm64 requires it since the virtual
counter is used in the vDSO), but it doesn't hurt on 32bit either.
>
>> +}
>> +
>> +void __hyp_text __timer_restore_state(struct kvm_vcpu *vcpu)
>> +{
>> + struct kvm *kvm = kern_hyp_va(vcpu->kvm);
>> + struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
>> + u64 val;
>> +
>> + /*
>> + * Disallow physical timer access for the guest
>> + * Physical counter access is allowed
>> + */
>> + val = read_sysreg(CNTHCTL);
>> + val &= ~CNTHCTL_EL1PCEN;
>> + val |= CNTHCTL_EL1PCTEN;
>> + write_sysreg(val, CNTHCTL);
>> +
>> + if (kvm->arch.timer.enabled) {
>> + write_sysreg(kvm->arch.timer.cntvoff, CNTVOFF);
>> + write_sysreg(timer->cntv_cval, CNTV_CVAL);
>> + isb();
>> + write_sysreg(timer->cntv_ctl, CNTV_CTL);
>> + }
>> +}
>> --
>> 2.1.4
>>
>
> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
>
Thanks,
M.
--
Jazz is not dead. It just smells funny...
^ permalink raw reply [flat|nested] 138+ messages in thread
* [PATCH v2 10/28] ARM: KVM: Add timer save/restore
@ 2016-02-10 15:36 ` Marc Zyngier
0 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-10 15:36 UTC (permalink / raw)
To: linux-arm-kernel
On 09/02/16 18:42, Christoffer Dall wrote:
> On Thu, Feb 04, 2016 at 11:00:27AM +0000, Marc Zyngier wrote:
>> This patch shouldn't exist, as we should be able to reuse the
>> arm64 version for free. I'll get there eventually, but in the
>> meantime I need a timer ticking.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>> arch/arm/kvm/hyp/Makefile | 1 +
>> arch/arm/kvm/hyp/hyp.h | 8 +++++
>> arch/arm/kvm/hyp/timer-sr.c | 71 +++++++++++++++++++++++++++++++++++++++++++++
>> 3 files changed, 80 insertions(+)
>> create mode 100644 arch/arm/kvm/hyp/timer-sr.c
>>
>> diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
>> index 9f96fcb..9241ae8 100644
>> --- a/arch/arm/kvm/hyp/Makefile
>> +++ b/arch/arm/kvm/hyp/Makefile
>> @@ -4,3 +4,4 @@
>>
>> obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
>> obj-$(CONFIG_KVM_ARM_HOST) += cp15-sr.o
>> +obj-$(CONFIG_KVM_ARM_HOST) += timer-sr.o
>> diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
>> index ab2cb82..4924418 100644
>> --- a/arch/arm/kvm/hyp/hyp.h
>> +++ b/arch/arm/kvm/hyp/hyp.h
>> @@ -46,6 +46,9 @@
>> #define TTBR1 __ACCESS_CP15_64(1, c2)
>> #define VTTBR __ACCESS_CP15_64(6, c2)
>> #define PAR __ACCESS_CP15_64(0, c7)
>> +#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
>> +#define CNTVOFF __ACCESS_CP15_64(4, c14)
>> +
>> #define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
>> #define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
>> #define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
>> @@ -71,6 +74,11 @@
>> #define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
>> #define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
>> #define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
>> +#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
>> +#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
>> +
>> +void __timer_save_state(struct kvm_vcpu *vcpu);
>> +void __timer_restore_state(struct kvm_vcpu *vcpu);
>>
>> void __sysreg_save_state(struct kvm_cpu_context *ctxt);
>> void __sysreg_restore_state(struct kvm_cpu_context *ctxt);
>> diff --git a/arch/arm/kvm/hyp/timer-sr.c b/arch/arm/kvm/hyp/timer-sr.c
>> new file mode 100644
>> index 0000000..d7535fd
>> --- /dev/null
>> +++ b/arch/arm/kvm/hyp/timer-sr.c
>> @@ -0,0 +1,71 @@
>> +/*
>> + * Copyright (C) 2012-2015 - ARM Ltd
>> + * Author: Marc Zyngier <marc.zyngier@arm.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program. If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#include <clocksource/arm_arch_timer.h>
>> +#include <linux/compiler.h>
>> +#include <linux/kvm_host.h>
>> +
>> +#include <asm/kvm_mmu.h>
>> +
>> +#include "hyp.h"
>> +
>> +/* vcpu is already in the HYP VA space */
>> +void __hyp_text __timer_save_state(struct kvm_vcpu *vcpu)
>> +{
>> + struct kvm *kvm = kern_hyp_va(vcpu->kvm);
>> + struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
>> + u64 val;
>> +
>> + if (kvm->arch.timer.enabled) {
>> + timer->cntv_ctl = read_sysreg(CNTV_CTL);
>> + timer->cntv_cval = read_sysreg(CNTV_CVAL);
>> + }
>> +
>> + /* Disable the virtual timer */
>> + write_sysreg(0, CNTV_CTL);
>> +
>> + /* Allow physical timer/counter access for the host */
>> + val = read_sysreg(CNTHCTL);
>> + val |= CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN;
>> + write_sysreg(val, CNTHCTL);
>> +
>> + /* Clear cntvoff for the host */
>> + write_sysreg(0, CNTVOFF);
>
> in the asm version we only did this if the timer was enabled, probably
> the theory being that only in that case did we modify the offset. But it
> should be safe to just clear the cntvoff in any case, right?
It is indeed perfectly safe. I've copied the arm64 code into the 32bit
tree, so we get this cntvoff reset (arm64 requires it since the virtual
counter is used in the vDSO), but it doesn't hurt on 32bit either.
>
>> +}
>> +
>> +void __hyp_text __timer_restore_state(struct kvm_vcpu *vcpu)
>> +{
>> + struct kvm *kvm = kern_hyp_va(vcpu->kvm);
>> + struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
>> + u64 val;
>> +
>> + /*
>> + * Disallow physical timer access for the guest
>> + * Physical counter access is allowed
>> + */
>> + val = read_sysreg(CNTHCTL);
>> + val &= ~CNTHCTL_EL1PCEN;
>> + val |= CNTHCTL_EL1PCTEN;
>> + write_sysreg(val, CNTHCTL);
>> +
>> + if (kvm->arch.timer.enabled) {
>> + write_sysreg(kvm->arch.timer.cntvoff, CNTVOFF);
>> + write_sysreg(timer->cntv_cval, CNTV_CVAL);
>> + isb();
>> + write_sysreg(timer->cntv_ctl, CNTV_CTL);
>> + }
>> +}
>> --
>> 2.1.4
>>
>
> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
>
Thanks,
M.
--
Jazz is not dead. It just smells funny...
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v2 14/28] ARM: KVM: Add guest entry code
2016-02-09 18:44 ` Christoffer Dall
@ 2016-02-10 15:48 ` Marc Zyngier
-1 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-10 15:48 UTC (permalink / raw)
To: Christoffer Dall; +Cc: linux-arm-kernel, kvm, kvmarm
On 09/02/16 18:44, Christoffer Dall wrote:
> On Thu, Feb 04, 2016 at 11:00:31AM +0000, Marc Zyngier wrote:
>> Add the very minimal piece of code that is now required to jump
>> into the guest (and return from it). This code is only concerned
>> with saving/restoring the USR registers (r0-r12+lr for the guest,
>> r4-r12+lr for the host), as everything else is dealt with in C
>> (VFP is another matter though).
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>> arch/arm/kvm/hyp/Makefile | 1 +
>> arch/arm/kvm/hyp/entry.S | 70 +++++++++++++++++++++++++++++++++++++++++++++++
>> arch/arm/kvm/hyp/hyp.h | 2 ++
>> 3 files changed, 73 insertions(+)
>> create mode 100644 arch/arm/kvm/hyp/entry.S
>>
>> diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
>> index 173bd1d..c779690 100644
>> --- a/arch/arm/kvm/hyp/Makefile
>> +++ b/arch/arm/kvm/hyp/Makefile
>> @@ -8,3 +8,4 @@ obj-$(CONFIG_KVM_ARM_HOST) += timer-sr.o
>> obj-$(CONFIG_KVM_ARM_HOST) += vgic-v2-sr.o
>> obj-$(CONFIG_KVM_ARM_HOST) += vfp.o
>> obj-$(CONFIG_KVM_ARM_HOST) += banked-sr.o
>> +obj-$(CONFIG_KVM_ARM_HOST) += entry.o
>> diff --git a/arch/arm/kvm/hyp/entry.S b/arch/arm/kvm/hyp/entry.S
>> new file mode 100644
>> index 0000000..32f79b0
>> --- /dev/null
>> +++ b/arch/arm/kvm/hyp/entry.S
>> @@ -0,0 +1,70 @@
>> +/*
>> + * Copyright (C) 2016 - ARM Ltd
>> + * Author: Marc Zyngier <marc.zyngier@arm.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program. If not, see <http://www.gnu.org/licenses/>.
>> +*/
>> +
>> +#include <linux/linkage.h>
>> +#include <asm/asm-offsets.h>
>> +#include <asm/kvm_arm.h>
>> +
>> + .arch_extension virt
>> +
>> + .text
>> + .pushsection .hyp.text, "ax"
>> +
>> +#define USR_REGS_OFFSET (CPU_CTXT_GP_REGS + GP_REGS_USR)
>> +
>> +/* int __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host) */
>> +ENTRY(__guest_enter)
>> + @ Save host registers
>> + add r1, r1, #(USR_REGS_OFFSET + S_R4)
>> + stm r1!, {r4-r12}
>> + str lr, [r1, #4] @ Skip SP_usr (already saved)
>> +
>> + @ Restore guest registers
>> + add r0, r0, #(VCPU_GUEST_CTXT + USR_REGS_OFFSET + S_R0)
>
> this really relies on offsetof(struct pt_regs, ARM_r0) == 0, which I
> guess will likely never change, but given there's both a kernel and uapi
> version of struct pt_regs, are we sure about this?
If they did diverge, a lot of things would just break. arm64 does have
different types between user and kernel, but the userspace version is
guaranteed to be a strict prefix of the kernel one. I believe arm would
have to enforce the same thing if it changed.
>
>> + ldr lr, [r0, #S_LR]
>> + ldm r0, {r0-r12}
>> +
>> + clrex
>> + eret
>> +ENDPROC(__guest_enter)
>> +
>> +ENTRY(__guest_exit)
>> + /*
>> + * return convention:
>> + * guest r0, r1, r2 saved on the stack
>> + * r0: vcpu pointer
>> + * r1: exception code
>> + */
>> +
>> + add r2, r0, #(VCPU_GUEST_CTXT + USR_REGS_OFFSET + S_R3)
>> + stm r2!, {r3-r12}
>> + str lr, [r2, #4]
>> + add r2, r0, #(VCPU_GUEST_CTXT + USR_REGS_OFFSET + S_R0)
>> + pop {r3, r4, r5} @ r0, r1, r2
>> + stm r2, {r3-r5}
>> +
>> + ldr r0, [r0, #VCPU_HOST_CTXT]
>> + add r0, r0, #(USR_REGS_OFFSET + S_R4)
>> + ldm r0!, {r4-r12}
>> + ldr lr, [r0, #4]
>> +
>> + mov r0, r1
>> + bx lr
>> +ENDPROC(__guest_exit)
>> +
>> + .popsection
>> +
>> diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
>> index 278eb1f..b3f6ed2 100644
>> --- a/arch/arm/kvm/hyp/hyp.h
>> +++ b/arch/arm/kvm/hyp/hyp.h
>> @@ -110,4 +110,6 @@ static inline bool __vfp_enabled(void)
>> void __hyp_text __banked_save_state(struct kvm_cpu_context *ctxt);
>> void __hyp_text __banked_restore_state(struct kvm_cpu_context *ctxt);
>>
>> +int asmlinkage __guest_enter(struct kvm_vcpu *vcpu,
>> + struct kvm_cpu_context *host);
>> #endif /* __ARM_KVM_HYP_H__ */
>> --
>> 2.1.4
>>
>
> Otherwise:
> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
>
Thanks,
M.
--
Jazz is not dead. It just smells funny...
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v2 16/28] ARM: KVM: Add the new world switch implementation
2016-02-09 18:44 ` Christoffer Dall
@ 2016-02-10 16:00 ` Marc Zyngier
-1 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-10 16:00 UTC (permalink / raw)
To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm
On 09/02/16 18:44, Christoffer Dall wrote:
> On Thu, Feb 04, 2016 at 11:00:33AM +0000, Marc Zyngier wrote:
>> The new world switch implementation is modeled after the arm64 one,
>> calling the various save/restore functions in turn, and having as
>> little state as possible.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>> arch/arm/kvm/hyp/Makefile | 1 +
>> arch/arm/kvm/hyp/hyp.h | 7 +++
>> arch/arm/kvm/hyp/switch.c | 136 ++++++++++++++++++++++++++++++++++++++++++++++
>> 3 files changed, 144 insertions(+)
>> create mode 100644 arch/arm/kvm/hyp/switch.c
>>
>> diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
>> index c779690..cfab402 100644
>> --- a/arch/arm/kvm/hyp/Makefile
>> +++ b/arch/arm/kvm/hyp/Makefile
>> @@ -9,3 +9,4 @@ obj-$(CONFIG_KVM_ARM_HOST) += vgic-v2-sr.o
>> obj-$(CONFIG_KVM_ARM_HOST) += vfp.o
>> obj-$(CONFIG_KVM_ARM_HOST) += banked-sr.o
>> obj-$(CONFIG_KVM_ARM_HOST) += entry.o
>> +obj-$(CONFIG_KVM_ARM_HOST) += switch.o
>> diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
>> index b3f6ed2..2ca651f 100644
>> --- a/arch/arm/kvm/hyp/hyp.h
>> +++ b/arch/arm/kvm/hyp/hyp.h
>> @@ -60,11 +60,16 @@
>> #define CNTV_CVAL __ACCESS_CP15_64(3, c14)
>> #define CNTVOFF __ACCESS_CP15_64(4, c14)
>>
>> +#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
>> #define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
>> +#define VMIDR __ACCESS_CP15(c0, 4, c0, 0)
>
> Nit: This is called VPIDR in v7 and VMPIDR_EL2 in v8 IIUC. Should we
> refer to it by one of those names instead?
Seems to be VPIDR in all cases, actually (I stupidly made it consistent,
silly me!). I'll definitely fix that, thanks for noticing it!
Cheers,
M.
--
Jazz is not dead. It just smells funny...
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v2 18/28] ARM: KVM: Add HYP mode entry code
2016-02-09 17:00 ` Christoffer Dall
@ 2016-02-10 16:02 ` Marc Zyngier
-1 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-10 16:02 UTC (permalink / raw)
To: Christoffer Dall; +Cc: linux-arm-kernel, kvm, kvmarm
On 09/02/16 17:00, Christoffer Dall wrote:
> On Thu, Feb 04, 2016 at 11:00:35AM +0000, Marc Zyngier wrote:
>> This part is almost entirely borrowed from the existing code, just
>> slightly simplifying the HYP function call (as we now save SPSR_hyp
>> in the world switch).
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>> arch/arm/kvm/hyp/Makefile | 1 +
>> arch/arm/kvm/hyp/hyp-entry.S | 157 +++++++++++++++++++++++++++++++++++++++++++
>> arch/arm/kvm/hyp/hyp.h | 2 +
>> 3 files changed, 160 insertions(+)
>> create mode 100644 arch/arm/kvm/hyp/hyp-entry.S
>>
>> diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
>> index cfab402..a7d3a7e 100644
>> --- a/arch/arm/kvm/hyp/Makefile
>> +++ b/arch/arm/kvm/hyp/Makefile
>> @@ -9,4 +9,5 @@ obj-$(CONFIG_KVM_ARM_HOST) += vgic-v2-sr.o
>> obj-$(CONFIG_KVM_ARM_HOST) += vfp.o
>> obj-$(CONFIG_KVM_ARM_HOST) += banked-sr.o
>> obj-$(CONFIG_KVM_ARM_HOST) += entry.o
>> +obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o
>> obj-$(CONFIG_KVM_ARM_HOST) += switch.o
>> diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
>> new file mode 100644
>> index 0000000..44bc11f
>> --- /dev/null
>> +++ b/arch/arm/kvm/hyp/hyp-entry.S
>> @@ -0,0 +1,157 @@
>> +/*
>> + * Copyright (C) 2012 - Virtual Open Systems and Columbia University
>> + * Author: Christoffer Dall <c.dall@virtualopensystems.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License, version 2, as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program; if not, write to the Free Software
>> + * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
>> + */
>> +
>> +#include <linux/linkage.h>
>> +#include <asm/kvm_arm.h>
>> +#include <asm/kvm_asm.h>
>> +
>> + .arch_extension virt
>> +
>> + .text
>> + .pushsection .hyp.text, "ax"
>> +
>> +.macro load_vcpu reg
>> + mrc p15, 4, \reg, c13, c0, 2 @ HTPIDR
>> +.endm
>> +
>> +/********************************************************************
>> + * Hypervisor exception vector and handlers
>> + *
>> + *
>> + * The KVM/ARM Hypervisor ABI is defined as follows:
>> + *
>> + * Entry to Hyp mode from the host kernel will happen _only_ when an HVC
>> + * instruction is issued since all traps are disabled when running the host
>> + * kernel as per the Hyp-mode initialization at boot time.
>> + *
>> + * HVC instructions cause a trap to the vector page + offset 0x14 (see hyp_hvc
>> + * below) when the HVC instruction is called from SVC mode (i.e. a guest or the
>> + * host kernel) and they cause a trap to the vector page + offset 0x8 when HVC
>> + * instructions are called from within Hyp-mode.
>> + *
>> + * Hyp-ABI: Calling HYP-mode functions from host (in SVC mode):
>> + * Switching to Hyp mode is done through a simple HVC #0 instruction. The
>> + * exception vector code will check that the HVC comes from VMID==0.
>> + * - r0 contains a pointer to a HYP function
>> + * - r1, r2, and r3 contain arguments to the above function.
>> + * - The HYP function will be called with its arguments in r0, r1 and r2.
>> + * On HYP function return, we return directly to SVC.
>> + *
>> + * Note that the above is used to execute code in Hyp-mode from a host-kernel
>> + * point of view, and is a different concept from performing a world-switch and
>> + * executing guest code SVC mode (with a VMID != 0).
>> + */
>> +
>> + .align 5
>> +__hyp_vector:
>> + .global __hyp_vector
>> +__kvm_hyp_vector:
>> + .weak __kvm_hyp_vector
>> +
>> + @ Hyp-mode exception vector
>> + W(b) hyp_reset
>> + W(b) hyp_undef
>> + W(b) hyp_svc
>> + W(b) hyp_pabt
>> + W(b) hyp_dabt
>> + W(b) hyp_hvc
>> + W(b) hyp_irq
>> + W(b) hyp_fiq
>> +
>> +.macro invalid_vector label, cause
>> + .align
>> +\label: b .
>> +.endm
>> +
>> + invalid_vector hyp_reset
>> + invalid_vector hyp_undef
>> + invalid_vector hyp_svc
>> + invalid_vector hyp_pabt
>> + invalid_vector hyp_dabt
>> + invalid_vector hyp_fiq
>> +
>> +hyp_hvc:
>> + /*
>> + * Getting here is either because of a trap from a guest,
>> + * or from executing HVC from the host kernel, which means
>> + * "do something in Hyp mode".
>> + */
>> + push {r0, r1, r2}
>> +
>> + @ Check syndrome register
>> + mrc p15, 4, r1, c5, c2, 0 @ HSR
>> + lsr r0, r1, #HSR_EC_SHIFT
>> + cmp r0, #HSR_EC_HVC
>> + bne guest_trap @ Not HVC instr.
>> +
>> + /*
>> + * Let's check if the HVC came from VMID 0 and allow simple
>> + * switch to Hyp mode
>> + */
>> + mrrc p15, 6, r0, r2, c2
>> + lsr r2, r2, #16
>> + and r2, r2, #0xff
>> + cmp r2, #0
>> + bne guest_trap @ Guest called HVC
>> +
>> + /*
>> + * Getting here means host called HVC, we shift parameters and branch
>> + * to Hyp function.
>> + */
>> + pop {r0, r1, r2}
>> +
>> + /* Check for __hyp_get_vectors */
>> + cmp r0, #-1
>> + mrceq p15, 4, r0, c12, c0, 0 @ get HVBAR
>> + beq 1f
>> +
>> + push {lr}
>> +
>> + mov lr, r0
>> + mov r0, r1
>> + mov r1, r2
>> + mov r2, r3
>> +
>> +THUMB( orr lr, #1)
>> + blx lr @ Call the HYP function
>> +
>> + pop {lr}
>> +1: eret
>> +
>> +guest_trap:
>> + load_vcpu r0 @ Load VCPU pointer to r0
>> +
>> + @ Check if we need the fault information
>
> nit: this is not about faults at this point, so this comment should
> either go or be reworded to "let's check if we trapped on guest VFP
> access"
>
> and I think the lsr can be moved into the ifdef as well.
Yes, both good points.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
^ permalink raw reply [flat|nested] 138+ messages in thread
* [PATCH v2 18/28] ARM: KVM: Add HYP mode entry code
@ 2016-02-10 16:02 ` Marc Zyngier
0 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-10 16:02 UTC (permalink / raw)
To: linux-arm-kernel
On 09/02/16 17:00, Christoffer Dall wrote:
> On Thu, Feb 04, 2016 at 11:00:35AM +0000, Marc Zyngier wrote:
>> This part is almost entierely borrowed from the existing code, just
>> slightly simplifying the HYP function call (as we now save SPSR_hyp
>> in the world switch).
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>> arch/arm/kvm/hyp/Makefile | 1 +
>> arch/arm/kvm/hyp/hyp-entry.S | 157 +++++++++++++++++++++++++++++++++++++++++++
>> arch/arm/kvm/hyp/hyp.h | 2 +
>> 3 files changed, 160 insertions(+)
>> create mode 100644 arch/arm/kvm/hyp/hyp-entry.S
>>
>> diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
>> index cfab402..a7d3a7e 100644
>> --- a/arch/arm/kvm/hyp/Makefile
>> +++ b/arch/arm/kvm/hyp/Makefile
>> @@ -9,4 +9,5 @@ obj-$(CONFIG_KVM_ARM_HOST) += vgic-v2-sr.o
>> obj-$(CONFIG_KVM_ARM_HOST) += vfp.o
>> obj-$(CONFIG_KVM_ARM_HOST) += banked-sr.o
>> obj-$(CONFIG_KVM_ARM_HOST) += entry.o
>> +obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o
>> obj-$(CONFIG_KVM_ARM_HOST) += switch.o
>> diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
>> new file mode 100644
>> index 0000000..44bc11f
>> --- /dev/null
>> +++ b/arch/arm/kvm/hyp/hyp-entry.S
>> @@ -0,0 +1,157 @@
>> +/*
>> + * Copyright (C) 2012 - Virtual Open Systems and Columbia University
>> + * Author: Christoffer Dall <c.dall@virtualopensystems.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License, version 2, as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program; if not, write to the Free Software
>> + * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
>> + */
>> +
>> +#include <linux/linkage.h>
>> +#include <asm/kvm_arm.h>
>> +#include <asm/kvm_asm.h>
>> +
>> + .arch_extension virt
>> +
>> + .text
>> + .pushsection .hyp.text, "ax"
>> +
>> +.macro load_vcpu reg
>> + mrc p15, 4, \reg, c13, c0, 2 @ HTPIDR
>> +.endm
>> +
>> +/********************************************************************
>> + * Hypervisor exception vector and handlers
>> + *
>> + *
>> + * The KVM/ARM Hypervisor ABI is defined as follows:
>> + *
>> + * Entry to Hyp mode from the host kernel will happen _only_ when an HVC
>> + * instruction is issued since all traps are disabled when running the host
>> + * kernel as per the Hyp-mode initialization at boot time.
>> + *
>> + * HVC instructions cause a trap to the vector page + offset 0x14 (see hyp_hvc
>> + * below) when the HVC instruction is called from SVC mode (i.e. a guest or the
>> + * host kernel) and they cause a trap to the vector page + offset 0x8 when HVC
>> + * instructions are called from within Hyp-mode.
>> + *
>> + * Hyp-ABI: Calling HYP-mode functions from host (in SVC mode):
>> + * Switching to Hyp mode is done through a simple HVC #0 instruction. The
>> + * exception vector code will check that the HVC comes from VMID==0.
>> + * - r0 contains a pointer to a HYP function
>> + * - r1, r2, and r3 contain arguments to the above function.
>> + * - The HYP function will be called with its arguments in r0, r1 and r2.
>> + * On HYP function return, we return directly to SVC.
>> + *
>> + * Note that the above is used to execute code in Hyp-mode from a host-kernel
>> + * point of view, and is a different concept from performing a world-switch and
>> + * executing guest code SVC mode (with a VMID != 0).
>> + */
>> +
>> + .align 5
>> +__hyp_vector:
>> + .global __hyp_vector
>> +__kvm_hyp_vector:
>> + .weak __kvm_hyp_vector
>> +
>> + @ Hyp-mode exception vector
>> + W(b) hyp_reset
>> + W(b) hyp_undef
>> + W(b) hyp_svc
>> + W(b) hyp_pabt
>> + W(b) hyp_dabt
>> + W(b) hyp_hvc
>> + W(b) hyp_irq
>> + W(b) hyp_fiq
>> +
>> +.macro invalid_vector label, cause
>> + .align
>> +\label: b .
>> +.endm
>> +
>> + invalid_vector hyp_reset
>> + invalid_vector hyp_undef
>> + invalid_vector hyp_svc
>> + invalid_vector hyp_pabt
>> + invalid_vector hyp_dabt
>> + invalid_vector hyp_fiq
>> +
>> +hyp_hvc:
>> + /*
>> + * Getting here is either because of a trap from a guest,
>> + * or from executing HVC from the host kernel, which means
>> + * "do something in Hyp mode".
>> + */
>> + push {r0, r1, r2}
>> +
>> + @ Check syndrome register
>> + mrc p15, 4, r1, c5, c2, 0 @ HSR
>> + lsr r0, r1, #HSR_EC_SHIFT
>> + cmp r0, #HSR_EC_HVC
>> + bne guest_trap @ Not HVC instr.
>> +
>> + /*
>> + * Let's check if the HVC came from VMID 0 and allow simple
>> + * switch to Hyp mode
>> + */
>> + mrrc p15, 6, r0, r2, c2
>> + lsr r2, r2, #16
>> + and r2, r2, #0xff
>> + cmp r2, #0
>> + bne guest_trap @ Guest called HVC
>> +
>> + /*
>> + * Getting here means host called HVC, we shift parameters and branch
>> + * to Hyp function.
>> + */
>> + pop {r0, r1, r2}
>> +
>> + /* Check for __hyp_get_vectors */
>> + cmp r0, #-1
>> + mrceq p15, 4, r0, c12, c0, 0 @ get HVBAR
>> + beq 1f
>> +
>> + push {lr}
>> +
>> + mov lr, r0
>> + mov r0, r1
>> + mov r1, r2
>> + mov r2, r3
>> +
>> +THUMB( orr lr, #1)
>> + blx lr @ Call the HYP function
>> +
>> + pop {lr}
>> +1: eret
>> +
>> +guest_trap:
>> + load_vcpu r0 @ Load VCPU pointer to r0
>> +
>> + @ Check if we need the fault information
>
> nit: this is not about faults at this point, so this comment should
> either go or be reworded to "let's check if we trapped on guest VFP
> access"
>
> and I think the lsr can be moved into the ifdef as well.
Yes, both good points.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
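The dispatch at the top of hyp_hvc boils down to two checks: was the trap really an HVC, and did it come from VMID 0 (the host)? A stand-alone C sketch of that decision, using the ARMv7 layouts the assembly relies on (HSR.EC in bits [31:26], VTTBR.VMID in bits [55:48]); helper names here are illustrative, not the kernel's:

```c
#include <stdint.h>
#include <stdbool.h>

#define HSR_EC_SHIFT  26
#define HSR_EC_HVC    0x12   /* EC value for an HVC taken from a lower mode */

/* VMID lives in VTTBR[55:48] - bits [23:16] of the high word the
 * assembly reads into r2 with mrrc before the lsr #16 / and #0xff */
static uint8_t vttbr_vmid(uint64_t vttbr)
{
	return (uint8_t)((vttbr >> 48) & 0xff);
}

/*
 * Mirror of the hyp_hvc decision: the trap is a host HYP call only if
 * it really was an HVC *and* it came from VMID 0. Anything else falls
 * through to guest_trap and the world switch.
 */
static bool is_host_hyp_call(uint32_t hsr, uint64_t vttbr)
{
	if ((hsr >> HSR_EC_SHIFT) != HSR_EC_HVC)
		return false;              /* not an HVC instruction */
	return vttbr_vmid(vttbr) == 0;     /* VMID != 0 => guest called HVC */
}
```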
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v2 19/28] ARM: KVM: Add panic handling code
2016-02-09 18:45 ` Christoffer Dall
@ 2016-02-10 16:03 ` Marc Zyngier
-1 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-10 16:03 UTC (permalink / raw)
To: Christoffer Dall; +Cc: linux-arm-kernel, kvm, kvmarm
On 09/02/16 18:45, Christoffer Dall wrote:
> On Thu, Feb 04, 2016 at 11:00:36AM +0000, Marc Zyngier wrote:
>> Instead of spinning forever, let's "properly" handle any unexpected
>> exception ("properly" meaning "print a splat on the console and die").
>>
>> This has proved useful quite a few times...
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>> arch/arm/kvm/hyp/hyp-entry.S | 28 +++++++++++++++++++++-------
>> arch/arm/kvm/hyp/switch.c | 38 ++++++++++++++++++++++++++++++++++++++
>> 2 files changed, 59 insertions(+), 7 deletions(-)
>>
>> diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
>> index 44bc11f..ca412ad 100644
>> --- a/arch/arm/kvm/hyp/hyp-entry.S
>> +++ b/arch/arm/kvm/hyp/hyp-entry.S
>> @@ -75,15 +75,29 @@ __kvm_hyp_vector:
>>
>> .macro invalid_vector label, cause
>> .align
>> -\label: b .
>> +\label: mov r0, #\cause
>> + b __hyp_panic
>> .endm
>>
>> - invalid_vector hyp_reset
>> - invalid_vector hyp_undef
>> - invalid_vector hyp_svc
>> - invalid_vector hyp_pabt
>> - invalid_vector hyp_dabt
>> - invalid_vector hyp_fiq
>> + invalid_vector hyp_reset ARM_EXCEPTION_RESET
>> + invalid_vector hyp_undef ARM_EXCEPTION_UNDEFINED
>> + invalid_vector hyp_svc ARM_EXCEPTION_SOFTWARE
>> + invalid_vector hyp_pabt ARM_EXCEPTION_PREF_ABORT
>> + invalid_vector hyp_dabt ARM_EXCEPTION_DATA_ABORT
>> + invalid_vector hyp_fiq ARM_EXCEPTION_FIQ
>> +
>> +ENTRY(__hyp_do_panic)
>> + mrs lr, cpsr
>> + bic lr, lr, #MODE_MASK
>> + orr lr, lr, #SVC_MODE
>> +THUMB( orr lr, lr, #PSR_T_BIT )
>> + msr spsr_cxsf, lr
>> + ldr lr, =panic
>> + msr ELR_hyp, lr
>> + ldr lr, =kvm_call_hyp
>> + clrex
>> + eret
>> +ENDPROC(__hyp_do_panic)
>>
>> hyp_hvc:
>> /*
>> diff --git a/arch/arm/kvm/hyp/switch.c b/arch/arm/kvm/hyp/switch.c
>> index 8bfd729..67f3944 100644
>> --- a/arch/arm/kvm/hyp/switch.c
>> +++ b/arch/arm/kvm/hyp/switch.c
>> @@ -188,3 +188,41 @@ again:
>> }
>>
>> __alias(__guest_run) int __weak __kvm_vcpu_run(struct kvm_vcpu *vcpu);
>> +
>> +static const char * const __hyp_panic_string[] = {
>> + [ARM_EXCEPTION_RESET] = "\nHYP panic: RST?? PC:%08x CPSR:%08x",
>> + [ARM_EXCEPTION_UNDEFINED] = "\nHYP panic: UNDEF PC:%08x CPSR:%08x",
>> + [ARM_EXCEPTION_SOFTWARE] = "\nHYP panic: SVC?? PC:%08x CPSR:%08x",
>> + [ARM_EXCEPTION_PREF_ABORT] = "\nHYP panic: PABRT PC:%08x CPSR:%08x",
>> + [ARM_EXCEPTION_DATA_ABORT] = "\nHYP panic: DABRT PC:%08x ADDR:%08x",
>> + [ARM_EXCEPTION_IRQ] = "\nHYP panic: IRQ?? PC:%08x CPSR:%08x",
>> + [ARM_EXCEPTION_FIQ] = "\nHYP panic: FIQ?? PC:%08x CPSR:%08x",
>> + [ARM_EXCEPTION_HVC] = "\nHYP panic: HVC?? PC:%08x CPSR:%08x",
>
> Why the question marks?
Ah, that was me wondering how we'd get those! I'll remove them.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
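The string table under discussion is a C99 designated-initializer array indexed by exception code, which keeps the entries readable and order-independent. The same pattern in a self-contained sketch (exception codes and format strings here are illustrative, not the kernel's):

```c
#include <stdio.h>

/* Illustrative exception codes; the real ones live in asm/kvm_asm.h */
enum { EXC_RESET, EXC_UNDEF, EXC_DABT, NR_EXC };

/* Designated initializers pair each index with its format explicitly,
 * so reordering the enum cannot silently shift the strings */
static const char *const panic_fmt[NR_EXC] = {
	[EXC_RESET] = "\nHYP panic: RST   PC:%08x CPSR:%08x",
	[EXC_UNDEF] = "\nHYP panic: UNDEF PC:%08x CPSR:%08x",
	[EXC_DABT]  = "\nHYP panic: DABRT PC:%08x ADDR:%08x",
};

static void hyp_panic_print(int cause, unsigned pc, unsigned aux)
{
	/* index straight into the table with the exception code */
	printf(panic_fmt[cause], pc, aux);
	putchar('\n');
}
```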
* Re: [PATCH v2 20/28] ARM: KVM: Change kvm_call_hyp return type to unsigned long
2016-02-09 18:28 ` Christoffer Dall
@ 2016-02-10 16:07 ` Marc Zyngier
-1 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-10 16:07 UTC (permalink / raw)
To: Christoffer Dall; +Cc: linux-arm-kernel, kvm, kvmarm
On 09/02/16 18:28, Christoffer Dall wrote:
> On Thu, Feb 04, 2016 at 11:00:37AM +0000, Marc Zyngier wrote:
>> Having u64 as the kvm_call_hyp return type is problematic, as
>> it forces all kind of tricks for the return values from HYP
>> to be promoted to 64bit (LE has the LSB in r0, and BE has them
>> in r1).
>>
>> Since the only user of the return value is perfectly happy with
>> a 32bit value, let's make kvm_call_hyp return an unsigned long,
>> which is 32bit on ARM.
>
> I wonder why I ever did this as a u64...
Probably to cater for the largest possible return value, before we
started looking at BE... ;-)
> should the arm64 counterpart be modified to an unsigned long as well?
That'd be a sensible change.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
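The r0/r1 asymmetry mentioned above comes from the 32-bit AAPCS: a u64 return value occupies the r0/r1 pair "as if loaded with LDM", so which register carries the least-significant word follows memory endianness. A host-side illustration of the split (assumes a little-endian build of this sketch):

```c
#include <stdint.h>

/*
 * Models r0/r1 after the AAPCS LDM-style load of a u64 return value:
 * two consecutive 32-bit words taken straight from memory. On LE the
 * low word (w0 / "r0") holds the LSBs; on BE it would hold the MSBs -
 * the ambiguity the patch sidesteps by returning unsigned long, which
 * is 32-bit on ARM and always travels in r0 alone.
 */
struct regpair { uint32_t w0, w1; };

static struct regpair split_u64(uint64_t v)
{
	union { uint64_t v; struct regpair r; } u = { .v = v };
	return u.r;
}
```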
* Re: [PATCH v2 18/28] ARM: KVM: Add HYP mode entry code
2016-02-10 16:02 ` Marc Zyngier
@ 2016-02-10 17:23 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-10 17:23 UTC (permalink / raw)
To: Marc Zyngier; +Cc: kvm, linux-arm-kernel, kvmarm
On Wed, Feb 10, 2016 at 04:02:14PM +0000, Marc Zyngier wrote:
> On 09/02/16 17:00, Christoffer Dall wrote:
> > On Thu, Feb 04, 2016 at 11:00:35AM +0000, Marc Zyngier wrote:
> >> This part is almost entirely borrowed from the existing code, just
> >> slightly simplifying the HYP function call (as we now save SPSR_hyp
> >> in the world switch).
> >>
> >> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> >> ---
> >> arch/arm/kvm/hyp/Makefile | 1 +
> >> arch/arm/kvm/hyp/hyp-entry.S | 157 +++++++++++++++++++++++++++++++++++++++++++
> >> arch/arm/kvm/hyp/hyp.h | 2 +
> >> 3 files changed, 160 insertions(+)
> >> create mode 100644 arch/arm/kvm/hyp/hyp-entry.S
> >>
> >> diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
> >> index cfab402..a7d3a7e 100644
> >> --- a/arch/arm/kvm/hyp/Makefile
> >> +++ b/arch/arm/kvm/hyp/Makefile
> >> @@ -9,4 +9,5 @@ obj-$(CONFIG_KVM_ARM_HOST) += vgic-v2-sr.o
> >> obj-$(CONFIG_KVM_ARM_HOST) += vfp.o
> >> obj-$(CONFIG_KVM_ARM_HOST) += banked-sr.o
> >> obj-$(CONFIG_KVM_ARM_HOST) += entry.o
> >> +obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o
> >> obj-$(CONFIG_KVM_ARM_HOST) += switch.o
> >> diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
> >> new file mode 100644
> >> index 0000000..44bc11f
> >> --- /dev/null
> >> +++ b/arch/arm/kvm/hyp/hyp-entry.S
> >> @@ -0,0 +1,157 @@
> >> +/*
> >> + * Copyright (C) 2012 - Virtual Open Systems and Columbia University
> >> + * Author: Christoffer Dall <c.dall@virtualopensystems.com>
> >> + *
> >> + * This program is free software; you can redistribute it and/or modify
> >> + * it under the terms of the GNU General Public License, version 2, as
> >> + * published by the Free Software Foundation.
> >> + *
> >> + * This program is distributed in the hope that it will be useful,
> >> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> >> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> >> + * GNU General Public License for more details.
> >> + *
> >> + * You should have received a copy of the GNU General Public License
> >> + * along with this program; if not, write to the Free Software
> >> + * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
> >> + */
> >> +
> >> +#include <linux/linkage.h>
> >> +#include <asm/kvm_arm.h>
> >> +#include <asm/kvm_asm.h>
> >> +
> >> + .arch_extension virt
> >> +
> >> + .text
> >> + .pushsection .hyp.text, "ax"
> >> +
> >> +.macro load_vcpu reg
> >> + mrc p15, 4, \reg, c13, c0, 2 @ HTPIDR
> >> +.endm
> >> +
> >> +/********************************************************************
> >> + * Hypervisor exception vector and handlers
> >> + *
> >> + *
> >> + * The KVM/ARM Hypervisor ABI is defined as follows:
> >> + *
> >> + * Entry to Hyp mode from the host kernel will happen _only_ when an HVC
> >> + * instruction is issued since all traps are disabled when running the host
> >> + * kernel as per the Hyp-mode initialization at boot time.
> >> + *
> >> + * HVC instructions cause a trap to the vector page + offset 0x14 (see hyp_hvc
> >> + * below) when the HVC instruction is called from SVC mode (i.e. a guest or the
> >> + * host kernel) and they cause a trap to the vector page + offset 0x8 when HVC
> >> + * instructions are called from within Hyp-mode.
> >> + *
> >> + * Hyp-ABI: Calling HYP-mode functions from host (in SVC mode):
> >> + * Switching to Hyp mode is done through a simple HVC #0 instruction. The
> >> + * exception vector code will check that the HVC comes from VMID==0.
> >> + * - r0 contains a pointer to a HYP function
> >> + * - r1, r2, and r3 contain arguments to the above function.
> >> + * - The HYP function will be called with its arguments in r0, r1 and r2.
> >> + * On HYP function return, we return directly to SVC.
> >> + *
> >> + * Note that the above is used to execute code in Hyp-mode from a host-kernel
> >> + * point of view, and is a different concept from performing a world-switch and
> >> + * executing guest code in SVC mode (with a VMID != 0).
> >> + */
> >> +
> >> + .align 5
> >> +__hyp_vector:
> >> + .global __hyp_vector
> >> +__kvm_hyp_vector:
> >> + .weak __kvm_hyp_vector
> >> +
> >> + @ Hyp-mode exception vector
> >> + W(b) hyp_reset
> >> + W(b) hyp_undef
> >> + W(b) hyp_svc
> >> + W(b) hyp_pabt
> >> + W(b) hyp_dabt
> >> + W(b) hyp_hvc
> >> + W(b) hyp_irq
> >> + W(b) hyp_fiq
> >> +
> >> +.macro invalid_vector label, cause
> >> + .align
> >> +\label: b .
> >> +.endm
> >> +
> >> + invalid_vector hyp_reset
> >> + invalid_vector hyp_undef
> >> + invalid_vector hyp_svc
> >> + invalid_vector hyp_pabt
> >> + invalid_vector hyp_dabt
> >> + invalid_vector hyp_fiq
> >> +
> >> +hyp_hvc:
> >> + /*
> >> + * Getting here is either because of a trap from a guest,
> >> + * or from executing HVC from the host kernel, which means
> >> + * "do something in Hyp mode".
> >> + */
> >> + push {r0, r1, r2}
> >> +
> >> + @ Check syndrome register
> >> + mrc p15, 4, r1, c5, c2, 0 @ HSR
> >> + lsr r0, r1, #HSR_EC_SHIFT
> >> + cmp r0, #HSR_EC_HVC
> >> + bne guest_trap @ Not HVC instr.
> >> +
> >> + /*
> >> + * Let's check if the HVC came from VMID 0 and allow simple
> >> + * switch to Hyp mode
> >> + */
> >> + mrrc p15, 6, r0, r2, c2
> >> + lsr r2, r2, #16
> >> + and r2, r2, #0xff
> >> + cmp r2, #0
> >> + bne guest_trap @ Guest called HVC
> >> +
> >> + /*
> >> + * Getting here means host called HVC, we shift parameters and branch
> >> + * to Hyp function.
> >> + */
> >> + pop {r0, r1, r2}
> >> +
> >> + /* Check for __hyp_get_vectors */
> >> + cmp r0, #-1
> >> + mrceq p15, 4, r0, c12, c0, 0 @ get HVBAR
> >> + beq 1f
> >> +
> >> + push {lr}
> >> +
> >> + mov lr, r0
> >> + mov r0, r1
> >> + mov r1, r2
> >> + mov r2, r3
> >> +
> >> +THUMB( orr lr, #1)
> >> + blx lr @ Call the HYP function
> >> +
> >> + pop {lr}
> >> +1: eret
> >> +
> >> +guest_trap:
> >> + load_vcpu r0 @ Load VCPU pointer to r0
> >> +
> >> + @ Check if we need the fault information
> >
> > nit: this is not about faults at this point, so this comment should
> > either go or be reworded to "let's check if we trapped on guest VFP
> > access"
> >
> > and I think the lsr can be moved into the ifdef as well.
>
> Yes, both good points.
>
fixing that, you can also have my
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
if I didn't give it already.
-Christoffer
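The Hyp-ABI spelled out in the comment block (r0 carries a pointer to a HYP function, r1-r3 its arguments, which the trampoline shifts down before the blx) can be modelled in plain C; the types and names below are illustrative, not the kernel's, and the pointer/integer cast assumes the usual flat address space:

```c
typedef unsigned long (*hyp_fn_t)(unsigned long, unsigned long,
				  unsigned long);

/* Sample HYP-side function, standing in for e.g. a TLB flush helper */
static unsigned long hyp_add(unsigned long a, unsigned long b,
			     unsigned long c)
{
	return a + b + c;
}

/*
 * Model of the host->HYP call: r0 holds the target, r1..r3 the
 * arguments; the hyp_hvc trampoline shifts them so the callee sees
 * them in r0..r2, then branches via blx and erets back to SVC.
 */
static unsigned long do_hyp_call(unsigned long r0, unsigned long r1,
				 unsigned long r2, unsigned long r3)
{
	hyp_fn_t fn = (hyp_fn_t)r0;   /* r0 carries the function pointer */
	return fn(r1, r2, r3);        /* args shifted into r0..r2 */
}
```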
* Re: [PATCH v2 07/28] ARM: KVM: Add system register accessor macros
2016-02-04 11:00 ` Marc Zyngier
@ 2016-02-10 17:25 ` Christoffer Dall
-1 siblings, 0 replies; 138+ messages in thread
From: Christoffer Dall @ 2016-02-10 17:25 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm
On Thu, Feb 04, 2016 at 11:00:24AM +0000, Marc Zyngier wrote:
> In order to move system register (CP15, mostly) access to C code,
> add a few macros to facilitate this, and minimize the difference
> between 32 and 64bit CP15 registers.
>
> This will get heavily used in the following patches.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/kvm/hyp/hyp.h | 15 +++++++++++++++
> 1 file changed, 15 insertions(+)
>
> diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
> index c723870..727089f 100644
> --- a/arch/arm/kvm/hyp/hyp.h
> +++ b/arch/arm/kvm/hyp/hyp.h
> @@ -27,4 +27,19 @@
> #define kern_hyp_va(v) (v)
> #define hyp_kern_va(v) (v)
>
> +#define __ACCESS_CP15(CRn, Op1, CRm, Op2) \
> + "mrc", "mcr", __stringify(p15, Op1, %0, CRn, CRm, Op2), u32
> +#define __ACCESS_CP15_64(Op1, CRm) \
> + "mrrc", "mcrr", __stringify(p15, Op1, %Q0, %R0, CRm), u64
> +
> +#define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
> +#define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
> +
> +#define __read_sysreg(r, w, c, t) ({ \
> + t __val; \
> + asm volatile(r " " c : "=r" (__val)); \
> + __val; \
> +})
> +#define read_sysreg(...) __read_sysreg(__VA_ARGS__)
> +
> #endif /* __ARM_KVM_HYP_H__ */
> --
> 2.1.4
>
I sort of figured that a reviewed-by tag on patches that actually use
these macros would be an implicit review of this code.
not feeling comfortable enough that I read this gibberish perfectly in
isolation, but given that the stuff compiles and works, I'll just ack
it:
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
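The macros rely on two preprocessor tricks: a per-register macro that expands to a comma-separated bundle (read insn, write insn, operand string, type), and variadic wrappers whose extra indirection re-expands that bundle into separate arguments. A host-runnable sketch of the same mechanism that builds the instruction text instead of emitting asm volatile (the VMPIDR encoding is shown for illustration):

```c
#include <string.h>

#define __stringify_1(...) #__VA_ARGS__
#define __stringify(...)   __stringify_1(__VA_ARGS__)

/* Same shape as __ACCESS_CP15: read insn, write insn, operand string,
 * access type, all bundled into one comma-separated expansion */
#define ACCESS_CP15(CRn, Op1, CRm, Op2) \
	"mrc", "mcr", __stringify(p15, Op1, %0, CRn, CRm, Op2), unsigned

/* The variadic indirection: show_read() forces VMPIDR to expand first,
 * so __show_read() receives four distinct arguments, exactly like
 * read_sysreg()/__read_sysreg() in the patch */
#define __show_read(r, w, c, t) (r " " c)
#define show_read(...) __show_read(__VA_ARGS__)

#define VMPIDR ACCESS_CP15(c0, 4, c0, 5)  /* illustrative encoding */
```

With this, `show_read(VMPIDR)` expands to the string `"mrc p15, 4, %0, c0, c0, 5"` - the exact text the real macro would hand to the inline assembler.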
* Re: [PATCH v2 07/28] ARM: KVM: Add system register accessor macros
2016-02-10 17:25 ` Christoffer Dall
@ 2016-02-10 17:32 ` Marc Zyngier
-1 siblings, 0 replies; 138+ messages in thread
From: Marc Zyngier @ 2016-02-10 17:32 UTC (permalink / raw)
To: Christoffer Dall; +Cc: linux-arm-kernel, kvm, kvmarm
On 10/02/16 17:25, Christoffer Dall wrote:
> On Thu, Feb 04, 2016 at 11:00:24AM +0000, Marc Zyngier wrote:
>> In order to move system register (CP15, mostly) access to C code,
>> add a few macros to facilitate this, and minimize the difference
>> between 32 and 64bit CP15 registers.
>>
>> This will get heavily used in the following patches.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>> arch/arm/kvm/hyp/hyp.h | 15 +++++++++++++++
>> 1 file changed, 15 insertions(+)
>>
>> diff --git a/arch/arm/kvm/hyp/hyp.h b/arch/arm/kvm/hyp/hyp.h
>> index c723870..727089f 100644
>> --- a/arch/arm/kvm/hyp/hyp.h
>> +++ b/arch/arm/kvm/hyp/hyp.h
>> @@ -27,4 +27,19 @@
>> #define kern_hyp_va(v) (v)
>> #define hyp_kern_va(v) (v)
>>
>> +#define __ACCESS_CP15(CRn, Op1, CRm, Op2) \
>> + "mrc", "mcr", __stringify(p15, Op1, %0, CRn, CRm, Op2), u32
>> +#define __ACCESS_CP15_64(Op1, CRm) \
>> + "mrrc", "mcrr", __stringify(p15, Op1, %Q0, %R0, CRm), u64
>> +
>> +#define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
>> +#define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
>> +
>> +#define __read_sysreg(r, w, c, t) ({ \
>> + t __val; \
>> + asm volatile(r " " c : "=r" (__val)); \
>> + __val; \
>> +})
>> +#define read_sysreg(...) __read_sysreg(__VA_ARGS__)
>> +
>> #endif /* __ARM_KVM_HYP_H__ */
>> --
>> 2.1.4
>>
>
> I sort of figured that a reviewed-by tag on patches that actually use
> these macros would be an implicit review of this code.
>
> not feeling comfortable enough that I read this gibberish perfectly in
> isolation, but given that the stuff compiles and works, I'll just ack
> it:
>
> Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
>
Yeah, this is admittedly cryptic. Despite my efforts, I'm still
considering C (and the preprocessor) as an evolved macro-assembler.
I guess next time, I'll rewrite it in OCaml.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
end of thread, other threads:[~2016-02-10 17:32 UTC | newest]
Thread overview: 138+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-02-04 11:00 [PATCH v2 00/28] ARM: KVM: Rewrite the world switch in C (mostly) Marc Zyngier
2016-02-04 11:00 ` [PATCH v2 01/28] ARM: KVM: Move the HYP code to its own section Marc Zyngier
2016-02-09 18:39   ` Christoffer Dall
2016-02-04 11:00 ` [PATCH v2 02/28] ARM: KVM: Remove __kvm_hyp_code_start/__kvm_hyp_code_end Marc Zyngier
2016-02-09 18:39   ` Christoffer Dall
2016-02-04 11:00 ` [PATCH v2 03/28] ARM: KVM: Move VFP registers to a CPU context structure Marc Zyngier
2016-02-09 18:42   ` Christoffer Dall
2016-02-04 11:00 ` [PATCH v2 04/28] ARM: KVM: Move CP15 array into the CPU context structure Marc Zyngier
2016-02-09 18:42   ` Christoffer Dall
2016-02-04 11:00 ` [PATCH v2 05/28] ARM: KVM: Move GP registers into the CPU context structure Marc Zyngier
2016-02-09 18:42   ` Christoffer Dall
2016-02-04 11:00 ` [PATCH v2 06/28] ARM: KVM: Add a HYP-specific header file Marc Zyngier
2016-02-09 18:42   ` Christoffer Dall
2016-02-04 11:00 ` [PATCH v2 07/28] ARM: KVM: Add system register accessor macros Marc Zyngier
2016-02-10 17:25   ` Christoffer Dall
2016-02-10 17:32     ` Marc Zyngier
2016-02-04 11:00 ` [PATCH v2 08/28] ARM: KVM: Add TLB invalidation code Marc Zyngier
2016-02-09 18:42   ` Christoffer Dall
2016-02-10 15:32     ` Marc Zyngier
2016-02-04 11:00 ` [PATCH v2 09/28] ARM: KVM: Add CP15 save/restore code Marc Zyngier
2016-02-09 18:42   ` Christoffer Dall
2016-02-04 11:00 ` [PATCH v2 10/28] ARM: KVM: Add timer save/restore Marc Zyngier
2016-02-09 18:42   ` Christoffer Dall
2016-02-10 15:36     ` Marc Zyngier
2016-02-04 11:00 ` [PATCH v2 11/28] ARM: KVM: Add vgic v2 save/restore Marc Zyngier
2016-02-09 18:42   ` Christoffer Dall
2016-02-04 11:00 ` [PATCH v2 12/28] ARM: KVM: Add VFP save/restore Marc Zyngier
2016-02-09 18:42   ` Christoffer Dall
2016-02-04 11:00 ` [PATCH v2 13/28] ARM: KVM: Add banked registers save/restore Marc Zyngier
2016-02-09 18:42   ` Christoffer Dall
2016-02-04 11:00 ` [PATCH v2 14/28] ARM: KVM: Add guest entry code Marc Zyngier
2016-02-09 18:44   ` Christoffer Dall
2016-02-10 15:48     ` Marc Zyngier
2016-02-04 11:00 ` [PATCH v2 15/28] ARM: KVM: Add VFP lazy save/restore handler Marc Zyngier
2016-02-09 18:44   ` Christoffer Dall
2016-02-04 11:00 ` [PATCH v2 16/28] ARM: KVM: Add the new world switch implementation Marc Zyngier
2016-02-09 18:44   ` Christoffer Dall
2016-02-10 16:00     ` Marc Zyngier
2016-02-04 11:00 ` [PATCH v2 17/28] ARM: KVM: Add populating of fault data structure Marc Zyngier
2016-02-09 18:44   ` Christoffer Dall
2016-02-04 11:00 ` [PATCH v2 18/28] ARM: KVM: Add HYP mode entry code Marc Zyngier
2016-02-09 17:00   ` Christoffer Dall
2016-02-10 16:02     ` Marc Zyngier
2016-02-10 17:23       ` Christoffer Dall
2016-02-04 11:00 ` [PATCH v2 19/28] ARM: KVM: Add panic handling code Marc Zyngier
2016-02-09 18:45   ` Christoffer Dall
2016-02-10 16:03     ` Marc Zyngier
2016-02-04 11:00 ` [PATCH v2 20/28] ARM: KVM: Change kvm_call_hyp return type to unsigned long Marc Zyngier
2016-02-09 18:28   ` Christoffer Dall
2016-02-10 16:07     ` Marc Zyngier
2016-02-04 11:00 ` [PATCH v2 21/28] ARM: KVM: Remove the old world switch Marc Zyngier
2016-02-09 18:45   ` Christoffer Dall
2016-02-04 11:00 ` [PATCH v2 22/28] ARM: KVM: Switch to C-based stage2 init Marc Zyngier
2016-02-09 18:45   ` Christoffer Dall
2016-02-10  7:42     ` Marc Zyngier
2016-02-10  8:04       ` Christoffer Dall
2016-02-04 11:00 ` [PATCH v2 23/28] ARM: KVM: Remove __weak attributes Marc Zyngier
2016-02-09 18:45   ` Christoffer Dall
2016-02-04 11:00 ` [PATCH v2 24/28] ARM: KVM: Turn CP15 defines to an enum Marc Zyngier
2016-02-09 18:45   ` Christoffer Dall
2016-02-04 11:00 ` [PATCH v2 25/28] ARM: KVM: Cleanup asm-offsets.c Marc Zyngier
2016-02-09 18:45   ` Christoffer Dall
2016-02-04 11:00 ` [PATCH v2 26/28] ARM: KVM: Remove unused hyp_pc field Marc Zyngier
2016-02-09 18:39   ` Christoffer Dall
2016-02-04 11:00 ` [PATCH v2 27/28] ARM: KVM: Remove handling of ARM_EXCEPTION_DATA/PREF_ABORT Marc Zyngier
2016-02-09 18:39   ` Christoffer Dall
2016-02-04 11:00 ` [PATCH v2 28/28] ARM: KVM: Remove __kvm_hyp_exit/__kvm_hyp_exit_end Marc Zyngier
2016-02-09 18:39   ` Christoffer Dall
2016-02-09 18:49 ` [PATCH v2 00/28] ARM: KVM: Rewrite the world switch in C (mostly) Christoffer Dall