* [PATCH 00/29] Port of KVM to arm64
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
This series contains the implementation of KVM for arm64. It depends
on the "pre-arm64 rework" series I posted earlier, as well as on the
tiny perf patch sent just after.
The code is, unsurprisingly, extremely similar to the KVM/arm code, and
a lot of it is actually shared with the 32bit version. Some of the
include files are duplicated, though (I'm definitely willing to fix
that).
In terms of features:
- Support for 4k and 64k pages
- Support for 32bit and 64bit guests
- PSCI support for SMP booting
As we do not have a 64bit QEMU port, it has been tested using kvmtool
(support has already been merged).
Marc Zyngier (29):
arm64: KVM: define HYP and Stage-2 translation page flags
arm64: KVM: HYP mode idmap support
arm64: KVM: EL2 register definitions
arm64: KVM: system register definitions for 64bit guests
arm64: KVM: Basic ESR_EL2 helpers and vcpu register access
arm64: KVM: fault injection into a guest
arm64: KVM: architecture specific MMU backend
arm64: KVM: user space interface
arm64: KVM: system register handling
arm64: KVM: Cortex-A57 specific system registers handling
arm64: KVM: virtual CPU reset
arm64: KVM: kvm_arch and kvm_vcpu_arch definitions
arm64: KVM: MMIO access backend
arm64: KVM: guest one-reg interface
arm64: KVM: hypervisor initialization code
arm64: KVM: HYP mode world switch implementation
arm64: KVM: Exit handling
arm64: KVM: Plug the VGIC
arm64: KVM: Plug the arch timer
arm64: KVM: PSCI implementation
arm64: KVM: Build system integration
arm64: KVM: define 32bit specific registers
arm64: KVM: 32bit GP register access
arm64: KVM: 32bit conditional execution emulation
arm64: KVM: 32bit handling of coprocessor traps
arm64: KVM: 32bit coprocessor access for Cortex-A57
arm64: KVM: 32bit specific register world switch
arm64: KVM: 32bit guest fault injection
arm64: KVM: enable initialization of a 32bit vcpu
arch/arm/kvm/arch_timer.c | 1 +
arch/arm64/Kconfig | 2 +
arch/arm64/Makefile | 2 +-
arch/arm64/include/asm/kvm_arch_timer.h | 58 ++
arch/arm64/include/asm/kvm_arm.h | 243 +++++++
arch/arm64/include/asm/kvm_asm.h | 104 +++
arch/arm64/include/asm/kvm_coproc.h | 56 ++
arch/arm64/include/asm/kvm_emulate.h | 181 +++++
arch/arm64/include/asm/kvm_host.h | 192 ++++++
arch/arm64/include/asm/kvm_mmio.h | 59 ++
arch/arm64/include/asm/kvm_mmu.h | 126 ++++
arch/arm64/include/asm/kvm_psci.h | 23 +
arch/arm64/include/asm/kvm_vgic.h | 156 +++++
arch/arm64/include/asm/pgtable-hwdef.h | 13 +
arch/arm64/include/asm/pgtable.h | 13 +
arch/arm64/include/uapi/asm/kvm.h | 190 ++++++
arch/arm64/kernel/asm-offsets.c | 33 +
arch/arm64/kernel/vmlinux.lds.S | 10 +
arch/arm64/kvm/Kconfig | 59 ++
arch/arm64/kvm/Makefile | 18 +
arch/arm64/kvm/emulate.c | 154 +++++
arch/arm64/kvm/guest.c | 246 +++++++
arch/arm64/kvm/handle_exit.c | 124 ++++
arch/arm64/kvm/hyp-init.S | 89 +++
arch/arm64/kvm/hyp.S | 826 +++++++++++++++++++++++
arch/arm64/kvm/idmap.c | 141 ++++
arch/arm64/kvm/idmap.h | 8 +
arch/arm64/kvm/inject_fault.c | 194 ++++++
arch/arm64/kvm/regmap.c | 168 +++++
arch/arm64/kvm/reset.c | 83 +++
arch/arm64/kvm/sys_regs.c | 1113 +++++++++++++++++++++++++++++++
arch/arm64/kvm/sys_regs.h | 141 ++++
arch/arm64/kvm/sys_regs_a57.c | 118 ++++
arch/arm64/mm/mmu.c | 6 +-
include/uapi/linux/kvm.h | 1 +
35 files changed, 4949 insertions(+), 2 deletions(-)
create mode 100644 arch/arm64/include/asm/kvm_arch_timer.h
create mode 100644 arch/arm64/include/asm/kvm_arm.h
create mode 100644 arch/arm64/include/asm/kvm_asm.h
create mode 100644 arch/arm64/include/asm/kvm_coproc.h
create mode 100644 arch/arm64/include/asm/kvm_emulate.h
create mode 100644 arch/arm64/include/asm/kvm_host.h
create mode 100644 arch/arm64/include/asm/kvm_mmio.h
create mode 100644 arch/arm64/include/asm/kvm_mmu.h
create mode 100644 arch/arm64/include/asm/kvm_psci.h
create mode 100644 arch/arm64/include/asm/kvm_vgic.h
create mode 100644 arch/arm64/include/uapi/asm/kvm.h
create mode 100644 arch/arm64/kvm/Kconfig
create mode 100644 arch/arm64/kvm/Makefile
create mode 100644 arch/arm64/kvm/emulate.c
create mode 100644 arch/arm64/kvm/guest.c
create mode 100644 arch/arm64/kvm/handle_exit.c
create mode 100644 arch/arm64/kvm/hyp-init.S
create mode 100644 arch/arm64/kvm/hyp.S
create mode 100644 arch/arm64/kvm/idmap.c
create mode 100644 arch/arm64/kvm/idmap.h
create mode 100644 arch/arm64/kvm/inject_fault.c
create mode 100644 arch/arm64/kvm/regmap.c
create mode 100644 arch/arm64/kvm/reset.c
create mode 100644 arch/arm64/kvm/sys_regs.c
create mode 100644 arch/arm64/kvm/sys_regs.h
create mode 100644 arch/arm64/kvm/sys_regs_a57.c
--
1.7.12.4
* [PATCH 01/29] arm64: KVM: define HYP and Stage-2 translation page flags
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Add HYP and S2 page flags, for both normal and device memory.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/include/asm/pgtable-hwdef.h | 13 +++++++++++++
arch/arm64/include/asm/pgtable.h | 13 +++++++++++++
arch/arm64/mm/mmu.c | 6 +++++-
3 files changed, 31 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index 75fd13d..acb4ee5 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -35,6 +35,7 @@
/*
* Section
*/
+#define PMD_SECT_USER (_AT(pmdval_t, 1) << 6) /* AP[1] */
#define PMD_SECT_S (_AT(pmdval_t, 3) << 8)
#define PMD_SECT_AF (_AT(pmdval_t, 1) << 10)
#define PMD_SECT_NG (_AT(pmdval_t, 1) << 11)
@@ -68,6 +69,18 @@
#define PTE_ATTRINDX_MASK (_AT(pteval_t, 7) << 2)
/*
+ * 2nd stage PTE definitions
+ */
+#define PTE_S2_RDONLY (_AT(pteval_t, 1) << 6) /* HAP[1] */
+#define PTE_S2_RDWR (_AT(pteval_t, 2) << 6) /* HAP[2:1] */
+
+/*
+ * EL2/HYP PTE/PMD definitions
+ */
+#define PMD_HYP PMD_SECT_USER
+#define PTE_HYP PTE_USER
+
+/*
* 40-bit physical address supported.
*/
#define PHYS_MASK_SHIFT (40)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index e333a24..11c608a 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -60,6 +60,7 @@ extern void __pgd_error(const char *file, int line, unsigned long val);
#define _PAGE_DEFAULT PTE_TYPE_PAGE | PTE_AF
extern pgprot_t pgprot_default;
+extern pgprot_t pgprot_device;
#define __pgprot_modify(prot,mask,bits) \
__pgprot((pgprot_val(prot) & ~(mask)) | (bits))
@@ -76,6 +77,12 @@ extern pgprot_t pgprot_default;
#define PAGE_KERNEL _MOD_PROT(pgprot_default, PTE_PXN | PTE_UXN | PTE_DIRTY)
#define PAGE_KERNEL_EXEC _MOD_PROT(pgprot_default, PTE_UXN | PTE_DIRTY)
+#define PAGE_HYP _MOD_PROT(pgprot_default, PTE_HYP)
+#define PAGE_HYP_DEVICE _MOD_PROT(pgprot_device, PTE_HYP)
+
+#define PAGE_S2 _MOD_PROT(pgprot_default, PTE_USER | PTE_S2_RDONLY)
+#define PAGE_S2_DEVICE _MOD_PROT(pgprot_device, PTE_USER | PTE_S2_RDWR)
+
#define __PAGE_NONE __pgprot(((_PAGE_DEFAULT) & ~PTE_TYPE_MASK) | PTE_PROT_NONE)
#define __PAGE_SHARED __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN)
#define __PAGE_SHARED_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN)
@@ -197,6 +204,12 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
#define pmd_bad(pmd) (!(pmd_val(pmd) & 2))
+#define pmd_table(pmd) ((pmd_val(pmd) & PMD_TYPE_MASK) == \
+ PMD_TYPE_TABLE)
+#define pmd_sect(pmd) ((pmd_val(pmd) & PMD_TYPE_MASK) == \
+ PMD_TYPE_SECT)
+
+
static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
{
*pmdp = pmd;
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 224b44a..df03aea 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -44,6 +44,7 @@ struct page *empty_zero_page;
EXPORT_SYMBOL(empty_zero_page);
pgprot_t pgprot_default;
+pgprot_t pgprot_device;
EXPORT_SYMBOL(pgprot_default);
static pmdval_t prot_sect_kernel;
@@ -127,10 +128,11 @@ early_param("cachepolicy", early_cachepolicy);
*/
static void __init init_mem_pgprot(void)
{
- pteval_t default_pgprot;
+ pteval_t default_pgprot, device_pgprot;
int i;
default_pgprot = PTE_ATTRINDX(MT_NORMAL);
+ device_pgprot = PTE_ATTRINDX(MT_DEVICE_nGnRE) | PTE_PXN | PTE_UXN;
prot_sect_kernel = PMD_TYPE_SECT | PMD_SECT_AF | PMD_ATTRINDX(MT_NORMAL);
#ifdef CONFIG_SMP
@@ -138,6 +140,7 @@ static void __init init_mem_pgprot(void)
* Mark memory with the "shared" attribute for SMP systems
*/
default_pgprot |= PTE_SHARED;
+ device_pgprot |= PTE_SHARED;
prot_sect_kernel |= PMD_SECT_S;
#endif
@@ -147,6 +150,7 @@ static void __init init_mem_pgprot(void)
}
pgprot_default = __pgprot(PTE_TYPE_PAGE | PTE_AF | default_pgprot);
+ pgprot_device = __pgprot(PTE_TYPE_PAGE | PTE_AF | device_pgprot);
}
pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
--
1.7.12.4
* [PATCH 02/29] arm64: KVM: HYP mode idmap support
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Add the necessary infrastructure for identity-mapped HYP page
tables. Identity-mapped code must be placed in the ".hyp.idmap.text"
linker section; the rest of the HYP code ends up in ".hyp.text".
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/kernel/vmlinux.lds.S | 10 +++
arch/arm64/kvm/idmap.c | 141 ++++++++++++++++++++++++++++++++++++++++
arch/arm64/kvm/idmap.h | 8 +++
3 files changed, 159 insertions(+)
create mode 100644 arch/arm64/kvm/idmap.c
create mode 100644 arch/arm64/kvm/idmap.h
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 3fae2be..51b87c3 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -17,6 +17,15 @@ ENTRY(stext)
jiffies = jiffies_64;
+#define HYPERVISOR_TEXT \
+ ALIGN_FUNCTION(); \
+ VMLINUX_SYMBOL(__hyp_idmap_text_start) = .; \
+ *(.hyp.idmap.text) \
+ VMLINUX_SYMBOL(__hyp_idmap_text_end) = .; \
+ VMLINUX_SYMBOL(__hyp_text_start) = .; \
+ *(.hyp.text) \
+ VMLINUX_SYMBOL(__hyp_text_end) = .;
+
SECTIONS
{
/*
@@ -49,6 +58,7 @@ SECTIONS
TEXT_TEXT
SCHED_TEXT
LOCK_TEXT
+ HYPERVISOR_TEXT
*(.fixup)
*(.gnu.warning)
. = ALIGN(16);
diff --git a/arch/arm64/kvm/idmap.c b/arch/arm64/kvm/idmap.c
new file mode 100644
index 0000000..68a55d4
--- /dev/null
+++ b/arch/arm64/kvm/idmap.c
@@ -0,0 +1,141 @@
+/*
+ * Copyright (C) 2012 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+
+#include <asm/cputype.h>
+#include <asm/pgalloc.h>
+#include <asm/pgtable.h>
+#include <asm/sections.h>
+#include <asm/virt.h>
+
+#include "idmap.h"
+
+pgd_t *hyp_pgd;
+
+/*
+ * We always use a 2-level mapping for hyp-idmap:
+ * - Section mapped for 4kB pages
+ * - Page mapped for 64kB pages
+ */
+#ifdef CONFIG_ARM64_64K_PAGES
+static void idmap_add_pte(pmd_t *pmd, unsigned long addr, unsigned long end)
+{
+ struct page *page;
+ pte_t *pte;
+ unsigned long next;
+
+ if (pmd_none(*pmd)) {
+ pte = pte_alloc_one_kernel(NULL, addr);
+ if (!pte) {
+ pr_warning("Failed to allocate identity pte.\n");
+ return;
+ }
+ pmd_populate_kernel(NULL, pmd, pte);
+ }
+
+ pte = pte_offset_kernel(pmd, addr);
+
+ do {
+ page = phys_to_page(addr);
+ next = (addr & PAGE_MASK) + PAGE_SIZE;
+ set_pte(pte, mk_pte(page, PAGE_HYP));
+ } while (pte++, addr = next, addr < end);
+}
+#else
+#define HYP_SECT_PROT (PMD_TYPE_SECT | PMD_SECT_AF | \
+ PMD_ATTRINDX(MT_NORMAL) | PMD_HYP)
+
+/*
+ * For 4kB pages, we use a section to perform the identity mapping,
+ * hence the direct call to __pmd_populate().
+ */
+static void idmap_add_pte(pmd_t *pmd, unsigned long addr, unsigned long end)
+{
+ __pmd_populate(pmd, addr & PMD_MASK, HYP_SECT_PROT);
+}
+#endif
+
+static void idmap_add_pmd(pud_t *pud, unsigned long addr, unsigned long end)
+{
+ pmd_t *pmd;
+ unsigned long next;
+
+ if (pud_none_or_clear_bad(pud)) {
+ pmd = pmd_alloc_one(NULL, addr);
+ if (!pmd) {
+ pr_warning("Failed to allocate identity pmd.\n");
+ return;
+ }
+ pud_populate(NULL, pud, pmd);
+ }
+
+ pmd = pmd_offset(pud, addr);
+
+ do {
+ next = pmd_addr_end(addr, end);
+ idmap_add_pte(pmd, addr, next);
+ } while (pmd++, addr = next, addr != end);
+}
+
+static void idmap_add_pud(pgd_t *pgd, unsigned long addr, unsigned long end)
+{
+ pud_t *pud = pud_offset(pgd, addr);
+ unsigned long next;
+
+ do {
+ next = pud_addr_end(addr, end);
+ idmap_add_pmd(pud, addr, next);
+ } while (pud++, addr = next, addr != end);
+}
+
+extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[];
+
+static int __init hyp_idmap_setup(void)
+{
+ unsigned long addr, end;
+ unsigned long next;
+ pgd_t *pgd;
+
+ if (!is_hyp_mode_available()) {
+ hyp_pgd = NULL;
+ return 0;
+ }
+
+ hyp_pgd = pgd_alloc(NULL);
+ if (!hyp_pgd)
+ return -ENOMEM;
+
+ addr = virt_to_phys(__hyp_idmap_text_start);
+ end = virt_to_phys(__hyp_idmap_text_end);
+
+ pr_info("Setting up static HYP identity map for 0x%lx - 0x%lx\n",
+ addr, end);
+
+ pgd = hyp_pgd + pgd_index(addr);
+ do {
+ next = pgd_addr_end(addr, end);
+ idmap_add_pud(pgd, addr, next);
+ } while (pgd++, addr = next, addr != end);
+
+ dsb();
+
+ return 0;
+}
+early_initcall(hyp_idmap_setup);
diff --git a/arch/arm64/kvm/idmap.h b/arch/arm64/kvm/idmap.h
new file mode 100644
index 0000000..fcb130f
--- /dev/null
+++ b/arch/arm64/kvm/idmap.h
@@ -0,0 +1,8 @@
+#ifndef __KVM_IDMAP_H
+#define __KVM_IDMAP_H
+
+#include <asm/pgtable.h>
+
+extern pgd_t *hyp_pgd;
+
+#endif /* __KVM_IDMAP_H */
--
1.7.12.4
* [PATCH 03/29] arm64: KVM: EL2 register definitions
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Define all the useful bitfields for EL2 registers.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/include/asm/kvm_arm.h | 243 +++++++++++++++++++++++++++++++++++++++
1 file changed, 243 insertions(+)
create mode 100644 arch/arm64/include/asm/kvm_arm.h
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
new file mode 100644
index 0000000..6561507
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -0,0 +1,243 @@
+/*
+ * Copyright (C) 2012 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ARM64_KVM_ARM_H__
+#define __ARM64_KVM_ARM_H__
+
+#include <asm/types.h>
+
+/* Hyp Configuration Register (HCR) bits */
+#define HCR_ID (1UL << 33)
+#define HCR_CD (1UL << 32)
+#define HCR_RW_SHIFT 31
+#define HCR_RW (1UL << HCR_RW_SHIFT)
+#define HCR_TRVM (1 << 30)
+#define HCR_HCD (1 << 29)
+#define HCR_TDZ (1 << 28)
+#define HCR_TGE (1 << 27)
+#define HCR_TVM (1 << 26)
+#define HCR_TTLB (1 << 25)
+#define HCR_TPU (1 << 24)
+#define HCR_TPC (1 << 23)
+#define HCR_TSW (1 << 22)
+#define HCR_TAC (1 << 21)
+#define HCR_TIDCP (1 << 20)
+#define HCR_TSC (1 << 19)
+#define HCR_TID3 (1 << 18)
+#define HCR_TID2 (1 << 17)
+#define HCR_TID1 (1 << 16)
+#define HCR_TID0 (1 << 15)
+#define HCR_TWE (1 << 14)
+#define HCR_TWI (1 << 13)
+#define HCR_DC (1 << 12)
+#define HCR_BSU (3 << 10)
+#define HCR_BSU_IS (1 << 10)
+#define HCR_FB (1 << 9)
+#define HCR_VA (1 << 8)
+#define HCR_VI (1 << 7)
+#define HCR_VF (1 << 6)
+#define HCR_AMO (1 << 5)
+#define HCR_IMO (1 << 4)
+#define HCR_FMO (1 << 3)
+#define HCR_PTW (1 << 2)
+#define HCR_SWIO (1 << 1)
+#define HCR_VM (1)
+
+/*
+ * The bits we set in HCR:
+ * RW: 64bit by default, can be overridden for 32bit VMs
+ * TAC: Trap ACTLR
+ * TSC: Trap SMC
+ * TSW: Trap cache operations by set/way
+ * TWI: Trap WFI
+ * TIDCP: Trap L2CTLR/L2ECTLR
+ * BSU_IS: Upgrade barriers to the inner shareable domain
+ * FB: Force broadcast of all maintenance operations
+ * AMO: Override CPSR.A and enable signaling with VA
+ * IMO: Override CPSR.I and enable signaling with VI
+ * FMO: Override CPSR.F and enable signaling with VF
+ * SWIO: Turn set/way invalidates into set/way clean+invalidate
+ */
+#define HCR_GUEST_FLAGS (HCR_TSC | HCR_TSW | HCR_TWI | HCR_VM | HCR_BSU_IS | \
+ HCR_FB | HCR_TAC | HCR_AMO | HCR_IMO | HCR_FMO | \
+ HCR_SWIO | HCR_TIDCP | HCR_RW)
+#define HCR_VIRT_EXCP_MASK (HCR_VA | HCR_VI | HCR_VF)
+
+/* Hyp System Control Register (SCTLR_EL2) bits */
+#define SCTLR_EL2_EE (1 << 25)
+#define SCTLR_EL2_WXN (1 << 19)
+#define SCTLR_EL2_I (1 << 12)
+#define SCTLR_EL2_SA (1 << 3)
+#define SCTLR_EL2_C (1 << 2)
+#define SCTLR_EL2_A (1 << 1)
+#define SCTLR_EL2_M 1
+#define SCTLR_EL2_FLAGS (SCTLR_EL2_M | SCTLR_EL2_A | SCTLR_EL2_C | \
+ SCTLR_EL2_SA | SCTLR_EL2_I)
+
+/* TCR_EL2 Registers bits */
+#define TCR_EL2_TBI (1 << 20)
+#define TCR_EL2_PS (7 << 16)
+#define TCR_EL2_PS_40B (2 << 16)
+#define TCR_EL2_TG0 (1 << 14)
+#define TCR_EL2_SH0 (3 << 12)
+#define TCR_EL2_ORGN0 (3 << 10)
+#define TCR_EL2_IRGN0 (3 << 8)
+#define TCR_EL2_T0SZ 0x3f
+#define TCR_EL2_MASK (TCR_EL2_TG0 | TCR_EL2_SH0 | \
+ TCR_EL2_ORGN0 | TCR_EL2_IRGN0 | TCR_EL2_T0SZ)
+
+#define TCR_EL2_FLAGS (TCR_EL2_PS_40B)
+
+/* VTCR_EL2 Registers bits */
+#define VTCR_EL2_PS_MASK (7 << 16)
+#define VTCR_EL2_PS_40B (2 << 16)
+#define VTCR_EL2_TG0_MASK (1 << 14)
+#define VTCR_EL2_TG0_4K (0 << 14)
+#define VTCR_EL2_TG0_64K (1 << 14)
+#define VTCR_EL2_SH0_MASK (3 << 12)
+#define VTCR_EL2_SH0_INNER (3 << 12)
+#define VTCR_EL2_ORGN0_MASK (3 << 10)
+#define VTCR_EL2_ORGN0_WBWA (3 << 10)
+#define VTCR_EL2_IRGN0_MASK (3 << 8)
+#define VTCR_EL2_IRGN0_WBWA (3 << 8)
+#define VTCR_EL2_SL0_MASK (3 << 6)
+#define VTCR_EL2_SL0_LVL1 (1 << 6)
+#define VTCR_EL2_T0SZ_MASK 0x3f
+#define VTCR_EL2_T0SZ_40B 24
+
+#ifdef CONFIG_ARM64_64K_PAGES
+/*
+ * Stage2 translation configuration:
+ * 40bits output (PS = 2)
+ * 40bits input (T0SZ = 24)
+ * 64kB pages (TG0 = 1)
+ * 2 level page tables (SL = 1)
+ */
+#define VTCR_EL2_FLAGS (VTCR_EL2_PS_40B | VTCR_EL2_TG0_64K | \
+ VTCR_EL2_SH0_INNER | VTCR_EL2_ORGN0_WBWA | \
+ VTCR_EL2_IRGN0_WBWA | VTCR_EL2_SL0_LVL1 | \
+ VTCR_EL2_T0SZ_40B)
+#define VTTBR_X (38 - VTCR_EL2_T0SZ_40B)
+#else
+/*
+ * Stage2 translation configuration:
+ * 40bits output (PS = 2)
+ * 40bits input (T0SZ = 24)
+ * 4kB pages (TG0 = 0)
+ * 3 level page tables (SL = 1)
+ */
+#define VTCR_EL2_FLAGS (VTCR_EL2_PS_40B | VTCR_EL2_TG0_4K | \
+ VTCR_EL2_SH0_INNER | VTCR_EL2_ORGN0_WBWA | \
+ VTCR_EL2_IRGN0_WBWA | VTCR_EL2_SL0_LVL1 | \
+ VTCR_EL2_T0SZ_40B)
+#define VTTBR_X (37 - VTCR_EL2_T0SZ_40B)
+#endif
+
+#define VTTBR_BADDR_SHIFT (VTTBR_X - 1)
+#define VTTBR_BADDR_MASK (((1LLU << (40 - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
+#define VTTBR_VMID_SHIFT (48LLU)
+#define VTTBR_VMID_MASK (0xffLLU << VTTBR_VMID_SHIFT)
+
+/* Hyp System Trap Register */
+#define HSTR_EL2_TTEE (1 << 16)
+#define HSTR_EL2_T(x) (1 << (x))
+
+/* Hyp Coprocessor Trap Register */
+#define CPTR_EL2_TCPAC (1U << 31)
+#define CPTR_EL2_TTA (1 << 20)
+#define CPTR_EL2_TFP (1 << 10)
+
+/* Hyp Debug Configuration Register bits */
+#define MDCR_EL2_TDRA (1 << 11)
+#define MDCR_EL2_TDOSA (1 << 10)
+#define MDCR_EL2_TDA (1 << 9)
+#define MDCR_EL2_TDE (1 << 8)
+#define MDCR_EL2_HPME (1 << 7)
+#define MDCR_EL2_TPM (1 << 6)
+#define MDCR_EL2_TPMCR (1 << 5)
+#define MDCR_EL2_HPMN_MASK (0x1F)
+
+/* Hyp Syndrome Register (HSR) bits */
+#define ESR_EL2_EC_SHIFT (26)
+#define ESR_EL2_EC (0x3fU << ESR_EL2_EC_SHIFT)
+#define ESR_EL2_IL (1U << 25)
+#define ESR_EL2_ISS (ESR_EL2_IL - 1)
+#define ESR_EL2_ISV_SHIFT (24)
+#define ESR_EL2_ISV (1U << ESR_EL2_ISV_SHIFT)
+#define ESR_EL2_SAS_SHIFT (22)
+#define ESR_EL2_SAS (3U << ESR_EL2_SAS_SHIFT)
+#define ESR_EL2_SSE (1 << 21)
+#define ESR_EL2_SRT_SHIFT (16)
+#define ESR_EL2_SRT_MASK (0x1f << ESR_EL2_SRT_SHIFT)
+#define ESR_EL2_SF (1 << 15)
+#define ESR_EL2_AR (1 << 14)
+#define ESR_EL2_EA (1 << 9)
+#define ESR_EL2_CM (1 << 8)
+#define ESR_EL2_S1PTW (1 << 7)
+#define ESR_EL2_WNR (1 << 6)
+#define ESR_EL2_FSC (0x3f)
+#define ESR_EL2_FSC_TYPE (0x3c)
+
+#define ESR_EL2_CV_SHIFT (24)
+#define ESR_EL2_CV (1U << ESR_EL2_CV_SHIFT)
+#define ESR_EL2_COND_SHIFT (20)
+#define ESR_EL2_COND (0xfU << ESR_EL2_COND_SHIFT)
+
+
+#define FSC_FAULT (0x04)
+#define FSC_PERM (0x0c)
+
+/* Hyp Prefetch Fault Address Register (HPFAR_EL2) */
+#define HPFAR_MASK (~0xFUL)
+
+#define ESR_EL2_EC_UNKNOWN (0x00)
+#define ESR_EL2_EC_WFI (0x01)
+#define ESR_EL2_EC_CP15_32 (0x03)
+#define ESR_EL2_EC_CP15_64 (0x04)
+#define ESR_EL2_EC_CP14_MR (0x05)
+#define ESR_EL2_EC_CP14_LS (0x06)
+#define ESR_EL2_EC_SIMD_FP (0x07)
+#define ESR_EL2_EC_CP10_ID (0x08)
+#define ESR_EL2_EC_CP14_64 (0x0C)
+#define ESR_EL2_EC_ILL_ISS (0x0E)
+#define ESR_EL2_EC_SVC32 (0x11)
+#define ESR_EL2_EC_HVC32 (0x12)
+#define ESR_EL2_EC_SMC32 (0x13)
+#define ESR_EL2_EC_SVC64 (0x14)
+#define ESR_EL2_EC_HVC64 (0x16)
+#define ESR_EL2_EC_SMC64 (0x17)
+#define ESR_EL2_EC_SYS64 (0x18)
+#define ESR_EL2_EC_IABT (0x20)
+#define ESR_EL2_EC_IABT_HYP (0x21)
+#define ESR_EL2_EC_PC_ALIGN (0x22)
+#define ESR_EL2_EC_DABT (0x24)
+#define ESR_EL2_EC_DABT_HYP (0x25)
+#define ESR_EL2_EC_SP_ALIGN (0x26)
+#define ESR_EL2_EC_FP32 (0x28)
+#define ESR_EL2_EC_FP64 (0x2C)
+#define ESR_EL2_EC_SERROR (0x2F)
+#define ESR_EL2_EC_BREAKPT (0x30)
+#define ESR_EL2_EC_BREAKPT_HYP (0x31)
+#define ESR_EL2_EC_SOFTSTP (0x32)
+#define ESR_EL2_EC_SOFTSTP_HYP (0x33)
+#define ESR_EL2_EC_WATCHPT (0x34)
+#define ESR_EL2_EC_WATCHPT_HYP (0x35)
+#define ESR_EL2_EC_BKPT32 (0x38)
+#define ESR_EL2_EC_VECTOR32 (0x3A)
+#define ESR_EL2_EC_BKPT64 (0x3C)
+
+#endif /* __ARM64_KVM_ARM_H__ */
--
1.7.12.4
* [PATCH 04/29] arm64: KVM: system register definitions for 64bit guests
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
-1 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Define the saved/restored registers for 64bit guests.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/include/asm/kvm_asm.h | 68 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 68 insertions(+)
create mode 100644 arch/arm64/include/asm/kvm_asm.h
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
new file mode 100644
index 0000000..851fee5
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -0,0 +1,68 @@
+/*
+ * Copyright (C) 2012 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ARM_KVM_ASM_H__
+#define __ARM_KVM_ASM_H__
+
+/*
+ * 0 is reserved as an invalid value.
+ * Order *must* be kept in sync with the hyp switch code.
+ */
+#define MPIDR_EL1 1 /* MultiProcessor Affinity Register */
+#define CSSELR_EL1 2 /* Cache Size Selection Register */
+#define SCTLR_EL1 3 /* System Control Register */
+#define ACTLR_EL1 4 /* Auxiliary Control Register */
+#define CPACR_EL1 5 /* Coprocessor Access Control */
+#define TTBR0_EL1 6 /* Translation Table Base Register 0 */
+#define TTBR1_EL1 7 /* Translation Table Base Register 1 */
+#define TCR_EL1 8 /* Translation Control Register */
+#define ESR_EL1 9 /* Exception Syndrome Register */
+#define AFSR0_EL1 10 /* Auxiliary Fault Status Register 0 */
+#define AFSR1_EL1 11 /* Auxiliary Fault Status Register 1 */
+#define FAR_EL1 12 /* Fault Address Register */
+#define MAIR_EL1 13 /* Memory Attribute Indirection Register */
+#define VBAR_EL1 14 /* Vector Base Address Register */
+#define CONTEXTIDR_EL1 15 /* Context ID Register */
+#define TPIDR_EL0 16 /* Thread ID, User R/W */
+#define TPIDRRO_EL0 17 /* Thread ID, User R/O */
+#define TPIDR_EL1 18 /* Thread ID, Privileged */
+#define AMAIR_EL1 19 /* Aux Memory Attribute Indirection Register */
+#define CNTKCTL_EL1 20 /* Timer Control Register (EL1) */
+#define NR_SYS_REGS 21
+
+#define ARM_EXCEPTION_IRQ 0
+#define ARM_EXCEPTION_TRAP 1
+
+#ifndef __ASSEMBLY__
+struct kvm;
+struct kvm_vcpu;
+
+extern char __kvm_hyp_init[];
+extern char __kvm_hyp_init_end[];
+
+extern char __kvm_hyp_vector[];
+
+extern char __kvm_hyp_code_start[];
+extern char __kvm_hyp_code_end[];
+
+extern void __kvm_flush_vm_context(void);
+extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
+
+extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
+#endif
+
+#endif /* __ARM_KVM_ASM_H__ */
--
1.7.12.4
* [PATCH 05/29] arm64: KVM: Basic ESR_EL2 helpers and vcpu register access
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
-1 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Implements helpers for dealing with the EL2 syndrome register as
well as accessing the vcpu registers.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/include/asm/kvm_emulate.h | 159 +++++++++++++++++++++++++++++++++++
1 file changed, 159 insertions(+)
create mode 100644 arch/arm64/include/asm/kvm_emulate.h
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
new file mode 100644
index 0000000..16a343b
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -0,0 +1,159 @@
+/*
+ * Copyright (C) 2012 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * Derived from arch/arm/include/kvm_emulate.h
+ * Copyright (C) 2012 - Virtual Open Systems and Columbia University
+ * Author: Christoffer Dall <c.dall@virtualopensystems.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ARM64_KVM_EMULATE_H__
+#define __ARM64_KVM_EMULATE_H__
+
+#include <linux/kvm_host.h>
+#include <asm/kvm_asm.h>
+#include <asm/kvm_arm.h>
+#include <asm/kvm_mmio.h>
+#include <asm/ptrace.h>
+
+void kvm_inject_undefined(struct kvm_vcpu *vcpu);
+void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
+void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
+
+static inline unsigned long *vcpu_pc(struct kvm_vcpu *vcpu)
+{
+ return (unsigned long *)&vcpu->arch.regs.regs.pc;
+}
+
+static inline unsigned long *vcpu_cpsr(struct kvm_vcpu *vcpu)
+{
+ return (unsigned long *)&vcpu->arch.regs.regs.pstate;
+}
+
+static inline bool vcpu_mode_is_32bit(struct kvm_vcpu *vcpu)
+{
+ return false; /* 32bit? Bahhh... */
+}
+
+static inline bool kvm_condition_valid(struct kvm_vcpu *vcpu)
+{
+ return true; /* No conditionals on arm64 */
+}
+
+static inline void kvm_skip_instr(struct kvm_vcpu *vcpu, bool is_wide_instr)
+{
+ *vcpu_pc(vcpu) += 4;
+}
+
+static inline void vcpu_set_thumb(struct kvm_vcpu *vcpu)
+{
+}
+
+static inline unsigned long *vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num)
+{
+ return (unsigned long *)&vcpu->arch.regs.regs.regs[reg_num];
+
+}
+
+/* Get vcpu SPSR for current mode */
+static inline unsigned long *vcpu_spsr(struct kvm_vcpu *vcpu)
+{
+ return &vcpu->arch.regs.spsr[KVM_SPSR_EL1];
+}
+
+static inline bool kvm_vcpu_reg_is_pc(struct kvm_vcpu *vcpu, int reg)
+{
+ return false;
+}
+
+static inline bool vcpu_mode_priv(struct kvm_vcpu *vcpu)
+{
+ u32 mode = *vcpu_cpsr(vcpu) & PSR_MODE_MASK;
+
+ return mode != PSR_MODE_EL0t;
+}
+
+static inline u32 kvm_vcpu_get_hsr(struct kvm_vcpu *vcpu)
+{
+ return vcpu->arch.fault.esr_el2;
+}
+
+static inline unsigned long kvm_vcpu_get_hfar(struct kvm_vcpu *vcpu)
+{
+ return vcpu->arch.fault.far_el2;
+}
+
+static inline phys_addr_t kvm_vcpu_get_fault_ipa(struct kvm_vcpu *vcpu)
+{
+ return ((phys_addr_t)vcpu->arch.fault.hpfar_el2 & HPFAR_MASK) << 8;
+}
+
+static inline bool kvm_vcpu_dabt_isvalid(struct kvm_vcpu *vcpu)
+{
+ return !!(kvm_vcpu_get_hsr(vcpu) & ESR_EL2_ISV);
+}
+
+static inline bool kvm_vcpu_dabt_iswrite(struct kvm_vcpu *vcpu)
+{
+ return !!(kvm_vcpu_get_hsr(vcpu) & ESR_EL2_WNR);
+}
+
+static inline bool kvm_vcpu_dabt_issext(struct kvm_vcpu *vcpu)
+{
+ return !!(kvm_vcpu_get_hsr(vcpu) & ESR_EL2_SSE);
+}
+
+static inline int kvm_vcpu_dabt_get_rd(struct kvm_vcpu *vcpu)
+{
+ return (kvm_vcpu_get_hsr(vcpu) & ESR_EL2_SRT_MASK) >> ESR_EL2_SRT_SHIFT;
+}
+
+static inline bool kvm_vcpu_dabt_isextabt(struct kvm_vcpu *vcpu)
+{
+ return !!(kvm_vcpu_get_hsr(vcpu) & ESR_EL2_EA);
+}
+
+static inline bool kvm_vcpu_dabt_iss1tw(struct kvm_vcpu *vcpu)
+{
+ return !!(kvm_vcpu_get_hsr(vcpu) & ESR_EL2_S1PTW);
+}
+
+static inline int kvm_vcpu_dabt_get_as(struct kvm_vcpu *vcpu)
+{
+ return 1 << ((kvm_vcpu_get_hsr(vcpu) & ESR_EL2_SAS) >> ESR_EL2_SAS_SHIFT);
+}
+
+/* This one is not specific to Data Abort */
+static inline bool kvm_vcpu_trap_il_is32bit(struct kvm_vcpu *vcpu)
+{
+ return !!(kvm_vcpu_get_hsr(vcpu) & ESR_EL2_IL);
+}
+
+static inline u8 kvm_vcpu_trap_get_class(struct kvm_vcpu *vcpu)
+{
+ return kvm_vcpu_get_hsr(vcpu) >> ESR_EL2_EC_SHIFT;
+}
+
+static inline bool kvm_vcpu_trap_is_iabt(struct kvm_vcpu *vcpu)
+{
+ return kvm_vcpu_trap_get_class(vcpu) == ESR_EL2_EC_IABT;
+}
+
+static inline u8 kvm_vcpu_trap_get_fault(struct kvm_vcpu *vcpu)
+{
+ return kvm_vcpu_get_hsr(vcpu) & ESR_EL2_FSC_TYPE;
+}
+
+#endif /* __ARM64_KVM_EMULATE_H__ */
--
1.7.12.4
* [PATCH 06/29] arm64: KVM: fault injection into a guest
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
-1 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Implement the injection of a fault (undefined, data abort or
prefetch abort) into a 64bit guest.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/kvm/inject_fault.c | 117 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 117 insertions(+)
create mode 100644 arch/arm64/kvm/inject_fault.c
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
new file mode 100644
index 0000000..80b245f
--- /dev/null
+++ b/arch/arm64/kvm/inject_fault.c
@@ -0,0 +1,117 @@
+/*
+ * Fault injection for 64bit guests.
+ *
+ * Copyright (C) 2012 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * Based on arch/arm/kvm/emulate.c
+ * Copyright (C) 2012 - Virtual Open Systems and Columbia University
+ * Author: Christoffer Dall <c.dall@virtualopensystems.com>
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/kvm_host.h>
+#include <asm/kvm_emulate.h>
+
+static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr)
+{
+ unsigned long cpsr = *vcpu_cpsr(vcpu);
+ int is_aarch32;
+ u32 esr = 0;
+
+ is_aarch32 = vcpu_mode_is_32bit(vcpu);
+
+ *vcpu_spsr(vcpu) = cpsr;
+ vcpu->arch.regs.elr_el1 = *vcpu_pc(vcpu);
+
+ *vcpu_cpsr(vcpu) = PSR_MODE_EL1h | PSR_A_BIT | PSR_F_BIT | PSR_I_BIT;
+ *vcpu_pc(vcpu) = vcpu->arch.sys_regs[VBAR_EL1] + 0x200;
+
+ vcpu->arch.sys_regs[FAR_EL1] = addr;
+
+ /*
+ * Build an {i,d}abort, depending on the level and the
+ * instruction set. Report an external synchronous abort.
+ */
+ if (kvm_vcpu_trap_il_is32bit(vcpu))
+ esr |= (1 << 25);
+
+ if (is_aarch32 || (cpsr & PSR_MODE_MASK) == PSR_MODE_EL0t)
+ esr |= (0x20 << 26);
+ else
+ esr |= (0x21 << 26);
+
+ if (!is_iabt)
+ esr |= (1 << 28);
+
+ vcpu->arch.sys_regs[ESR_EL1] = esr | 0x10;
+}
+
+static void inject_undef64(struct kvm_vcpu *vcpu)
+{
+ unsigned long cpsr = *vcpu_cpsr(vcpu);
+ u32 esr = 0;
+
+ *vcpu_spsr(vcpu) = cpsr;
+ vcpu->arch.regs.elr_el1 = *vcpu_pc(vcpu);
+
+ *vcpu_cpsr(vcpu) = PSR_MODE_EL1h | PSR_F_BIT | PSR_I_BIT;
+ *vcpu_pc(vcpu) = vcpu->arch.sys_regs[VBAR_EL1] + 0x200;
+
+ /*
+ * Build an unknown exception, depending on the instruction
+ * set.
+ */
+ if (kvm_vcpu_trap_il_is32bit(vcpu))
+ esr |= (1 << 25);
+
+ vcpu->arch.sys_regs[ESR_EL1] = esr;
+}
+
+/**
+ * kvm_inject_dabt - inject a data abort into the guest
+ * @vcpu: The VCPU to receive the data abort
+ * @addr: The address to report in FAR_EL1
+ *
+ * It is assumed that this code is called from the VCPU thread and that the
+ * VCPU therefore is not currently executing guest code.
+ */
+void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr)
+{
+ inject_abt64(vcpu, false, addr);
+}
+
+/**
+ * kvm_inject_pabt - inject a prefetch abort into the guest
+ * @vcpu: The VCPU to receive the prefetch abort
+ * @addr: The address to report in FAR_EL1
+ *
+ * It is assumed that this code is called from the VCPU thread and that the
+ * VCPU therefore is not currently executing guest code.
+ */
+void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr)
+{
+ inject_abt64(vcpu, true, addr);
+}
+
+/**
+ * kvm_inject_undefined - inject an undefined instruction into the guest
+ *
+ * It is assumed that this code is called from the VCPU thread and that the
+ * VCPU therefore is not currently executing guest code.
+ */
+void kvm_inject_undefined(struct kvm_vcpu *vcpu)
+{
+ inject_undef64(vcpu);
+}
--
1.7.12.4
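The syndrome that inject_abt64() writes into ESR_EL1 is built up bit by bit: the IL bit, an instruction-abort exception class chosen by originating EL, a flip to the data-abort class when needed (setting bit 28 turns EC 0x20/0x21 into 0x24/0x25), and finally FSC 0x10 for a synchronous external abort. A minimal sketch of that construction, under the assumption that the bit positions match the ARMv8 ESR_EL1 layout:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static uint32_t build_abort_esr(bool is_iabt, bool from_lower_el, bool il32)
{
	uint32_t esr = 0;

	if (il32)
		esr |= 1U << 25;	/* IL: 32-bit instruction length */

	if (from_lower_el)
		esr |= 0x20U << 26;	/* EC: instruction abort, lower EL */
	else
		esr |= 0x21U << 26;	/* EC: instruction abort, same EL */

	if (!is_iabt)
		esr |= 1U << 28;	/* flip EC to the data abort class */

	return esr | 0x10;		/* FSC: synchronous external abort */
}
```

A data abort from a lower EL thus ends up with EC 0x24, while an instruction abort taken at the same EL keeps EC 0x21.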
* [PATCH 07/29] arm64: KVM: architecture specific MMU backend
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
0 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Define the arm64 specific MMU backend:
- HYP/kernel VA offset
- S2 4/64kB definitions
- S2 page table populating and flushing
- icache cleaning
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/include/asm/kvm_mmu.h | 126 +++++++++++++++++++++++++++++++++++++++
1 file changed, 126 insertions(+)
create mode 100644 arch/arm64/include/asm/kvm_mmu.h
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
new file mode 100644
index 0000000..2975627
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -0,0 +1,126 @@
+/*
+ * Copyright (C) 2012 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ARM64_KVM_MMU_H__
+#define __ARM64_KVM_MMU_H__
+
+#include <asm/page.h>
+#include <asm/memory.h>
+
+/*
+ * As we only have the TTBR0_EL2 register, we cannot express
+ * "negative" addresses. This makes it impossible to directly share
+ * mappings with the kernel.
+ *
+ * Instead, give the HYP mode its own VA region at a fixed offset from
+ * the kernel by just masking the top bits (which are all ones for a
+ * kernel address).
+ */
+#define HYP_PAGE_OFFSET_SHIFT VA_BITS
+#define HYP_PAGE_OFFSET_MASK ((UL(1) << HYP_PAGE_OFFSET_SHIFT) - 1)
+#define HYP_PAGE_OFFSET (PAGE_OFFSET & HYP_PAGE_OFFSET_MASK)
+
+#ifdef __ASSEMBLY__
+
+/*
+ * Convert a kernel VA into a HYP VA.
+ * reg: VA to be converted.
+ */
+.macro kern_hyp_va reg
+ and \reg, \reg, #HYP_PAGE_OFFSET_MASK
+.endm
+
+#else
+
+#include <asm/cacheflush.h>
+#include "idmap.h"
+
+#define KERN_TO_HYP(kva) ((unsigned long)kva - PAGE_OFFSET + HYP_PAGE_OFFSET)
+
+/*
+ * Align KVM with the kernel's view of physical memory. Should be
+ * 40bit IPA, with PGD being 8kB aligned.
+ */
+#define KVM_PHYS_SHIFT PHYS_MASK_SHIFT
+#define KVM_PHYS_SIZE (1UL << KVM_PHYS_SHIFT)
+#define KVM_PHYS_MASK (KVM_PHYS_SIZE - 1UL)
+
+#ifdef CONFIG_ARM64_64K_PAGES
+#define PAGE_LEVELS 2
+#define BITS_PER_LEVEL 13
+#else /* 4kB pages */
+#define PAGE_LEVELS 3
+#define BITS_PER_LEVEL 9
+#endif
+
+/* Make sure we get the right size, and thus the right alignment */
+#define BITS_PER_S2_PGD (KVM_PHYS_SHIFT - (PAGE_LEVELS - 1) * BITS_PER_LEVEL - PAGE_SHIFT)
+#define PTRS_PER_S2_PGD (1 << max(BITS_PER_LEVEL, BITS_PER_S2_PGD))
+#define S2_PGD_ORDER get_order(PTRS_PER_S2_PGD * sizeof(pgd_t))
+#define S2_PGD_SIZE (1 << S2_PGD_ORDER)
+
+int create_hyp_mappings(void *from, void *to);
+int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
+void free_hyp_pmds(void);
+
+int kvm_alloc_stage2_pgd(struct kvm *kvm);
+void kvm_free_stage2_pgd(struct kvm *kvm);
+int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
+ phys_addr_t pa, unsigned long size);
+
+int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run);
+
+void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
+
+phys_addr_t kvm_mmu_get_httbr(void);
+int kvm_mmu_init(void);
+void kvm_clear_hyp_idmap(void);
+
+#define kvm_set_pte(ptep, pte) set_pte(ptep, pte)
+
+static inline bool kvm_is_write_fault(unsigned long esr)
+{
+ unsigned long esr_ec = esr >> ESR_EL2_EC_SHIFT;
+
+ if (esr_ec == ESR_EL2_EC_IABT)
+ return false;
+
+ if ((esr & ESR_EL2_ISV) && !(esr & ESR_EL2_WNR))
+ return false;
+
+ return true;
+}
+
+static inline void kvm_clean_pgd(pgd_t *pgd) {}
+static inline void kvm_clean_pmd_entry(pmd_t *pmd) {}
+static inline void kvm_clean_pte(pte_t *pte) {}
+
+static inline void kvm_set_s2pte_writable(pte_t *pte)
+{
+ pte_val(*pte) |= PTE_S2_RDWR;
+}
+
+struct kvm;
+
+static inline void coherent_icache_guest_page(struct kvm *kvm, gfn_t gfn)
+{
+ unsigned long hva = gfn_to_hva(kvm, gfn);
+ flush_icache_range(hva, hva + PAGE_SIZE);
+}
+
+#endif /* __ASSEMBLY__ */
+#endif /* __ARM64_KVM_MMU_H__ */
--
1.7.12.4
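The kern_hyp_va macro and KERN_TO_HYP above rely on kernel VAs having all ones in the top bits, so masking with HYP_PAGE_OFFSET_MASK yields the fixed-offset HYP alias. A sketch of the C side of that conversion, assuming VA_BITS = 39 and the corresponding PAGE_OFFSET of 0xffffffc000000000 (a kernel configuration choice, not fixed by this patch):

```c
#include <assert.h>
#include <stdint.h>

#define VA_BITS			39	/* assumed kernel configuration */
#define HYP_PAGE_OFFSET_MASK	((UINT64_C(1) << VA_BITS) - 1)

/* Kernel VAs have all ones in bits [63:VA_BITS-1]; masking them off
 * yields the HYP-mode alias of the same mapping. */
static uint64_t kern_to_hyp(uint64_t kva)
{
	return kva & HYP_PAGE_OFFSET_MASK;
}
```

With these assumptions, a kernel address such as 0xffffffc000001000 maps to the HYP address 0x4000001000, and an address already below the mask is left unchanged.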
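The BITS_PER_S2_PGD/PTRS_PER_S2_PGD arithmetic in the patch sizes the Stage-2 PGD so that the top-level table covers the whole IPA space, padded up to at least one full translation-table level. The sketch below redoes that arithmetic for a 40-bit IPA (the value the patch's comment assumes); the page-geometry numbers are the usual arm64 ones for 4kB and 64kB granules:

```c
#include <assert.h>
#include <stdint.h>

#define KVM_PHYS_SHIFT	40	/* assumed 40-bit IPA, per the patch comment */

struct s2_layout {
	int page_shift;		/* 12 for 4kB pages, 16 for 64kB */
	int page_levels;	/* 3 for 4kB pages, 2 for 64kB */
	int bits_per_level;	/* 9 for 4kB pages, 13 for 64kB */
};

static int max_int(int a, int b)
{
	return a > b ? a : b;
}

/* Number of pgd_t entries in the Stage-2 PGD: the bits of IPA space
 * left over after the lower levels, but never less than a full level. */
static long s2_pgd_entries(const struct s2_layout *l)
{
	int bits = KVM_PHYS_SHIFT
		 - (l->page_levels - 1) * l->bits_per_level
		 - l->page_shift;

	return 1L << max_int(l->bits_per_level, bits);
}
```

For 4kB pages this gives 1024 entries, i.e. an 8kB PGD, matching the "PGD being 8kB aligned" remark in the header; for 64kB pages the full-level floor wins and the PGD holds 8192 entries.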
* [PATCH 08/29] arm64: KVM: user space interface
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
0 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Provide the kvm.h file that defines the user space visible
interface.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/include/uapi/asm/kvm.h | 112 ++++++++++++++++++++++++++++++++++++++
1 file changed, 112 insertions(+)
create mode 100644 arch/arm64/include/uapi/asm/kvm.h
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
new file mode 100644
index 0000000..f5525f1
--- /dev/null
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -0,0 +1,112 @@
+/*
+ * Copyright (C) 2012 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * Derived from arch/arm/include/uapi/asm/kvm.h:
+ * Copyright (C) 2012 - Virtual Open Systems and Columbia University
+ * Author: Christoffer Dall <c.dall@virtualopensystems.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ARM_KVM_H__
+#define __ARM_KVM_H__
+
+#define KVM_SPSR_EL1 0
+#define KVM_NR_SPSR 1
+
+#ifndef __ASSEMBLY__
+#include <asm/types.h>
+#include <asm/ptrace.h>
+
+#define __KVM_HAVE_GUEST_DEBUG
+#define __KVM_HAVE_IRQ_LINE
+
+#define KVM_REG_SIZE(id) \
+ (1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
+
+struct kvm_regs {
+ struct user_pt_regs regs; /* sp = sp_el0 */
+
+ unsigned long sp_el1;
+ unsigned long elr_el1;
+
+ unsigned long spsr[KVM_NR_SPSR];
+};
+
+/* Supported Processor Types */
+#define KVM_ARM_TARGET_CORTEX_A57 0
+#define KVM_ARM_NUM_TARGETS 1
+
+/* KVM_ARM_SET_DEVICE_ADDR ioctl id encoding */
+#define KVM_ARM_DEVICE_TYPE_SHIFT 0
+#define KVM_ARM_DEVICE_TYPE_MASK (0xffff << KVM_ARM_DEVICE_TYPE_SHIFT)
+#define KVM_ARM_DEVICE_ID_SHIFT 16
+#define KVM_ARM_DEVICE_ID_MASK (0xffff << KVM_ARM_DEVICE_ID_SHIFT)
+
+/* Supported device IDs */
+#define KVM_ARM_DEVICE_VGIC_V2 0
+
+/* Supported VGIC address types */
+#define KVM_VGIC_V2_ADDR_TYPE_DIST 0
+#define KVM_VGIC_V2_ADDR_TYPE_CPU 1
+
+#define KVM_VGIC_V2_DIST_SIZE 0x1000
+#define KVM_VGIC_V2_CPU_SIZE 0x2000
+
+struct kvm_vcpu_init {
+ __u32 target;
+ __u32 features[7];
+};
+
+struct kvm_sregs {
+};
+
+struct kvm_fpu {
+};
+
+struct kvm_guest_debug_arch {
+};
+
+struct kvm_debug_exit_arch {
+};
+
+struct kvm_sync_regs {
+};
+
+struct kvm_arch_memory_slot {
+};
+
+/* KVM_IRQ_LINE irq field index values */
+#define KVM_ARM_IRQ_TYPE_SHIFT 24
+#define KVM_ARM_IRQ_TYPE_MASK 0xff
+#define KVM_ARM_IRQ_VCPU_SHIFT 16
+#define KVM_ARM_IRQ_VCPU_MASK 0xff
+#define KVM_ARM_IRQ_NUM_SHIFT 0
+#define KVM_ARM_IRQ_NUM_MASK 0xffff
+
+/* irq_type field */
+#define KVM_ARM_IRQ_TYPE_CPU 0
+#define KVM_ARM_IRQ_TYPE_SPI 1
+#define KVM_ARM_IRQ_TYPE_PPI 2
+
+/* out-of-kernel GIC cpu interrupt injection irq_number field */
+#define KVM_ARM_IRQ_CPU_IRQ 0
+#define KVM_ARM_IRQ_CPU_FIQ 1
+
+/* Highest supported SPI, from VGIC_NR_IRQS */
+#define KVM_ARM_IRQ_GIC_MAX 127
+
+#endif
+
+#endif /* __ARM_KVM_H__ */
--
1.7.12.4
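Two of the user-space encodings defined in this header are simple packed fields: the KVM_ARM_SET_DEVICE_ADDR id (device id in bits [31:16], address type in [15:0]) and the KVM_IRQ_LINE irq field (type in [31:24], vcpu index in [23:16], irq number in [15:0]). A sketch of how a user-space caller might pack them, using only the shift values from the patch:

```c
#include <assert.h>
#include <stdint.h>

#define KVM_ARM_DEVICE_TYPE_SHIFT	0
#define KVM_ARM_DEVICE_ID_SHIFT		16

/* Pack (device id, address type) as the KVM_ARM_SET_DEVICE_ADDR id. */
static uint32_t kvm_device_addr_id(uint16_t dev_id, uint16_t addr_type)
{
	return ((uint32_t)dev_id << KVM_ARM_DEVICE_ID_SHIFT) |
	       ((uint32_t)addr_type << KVM_ARM_DEVICE_TYPE_SHIFT);
}

/* Pack the KVM_IRQ_LINE irq field from its three components. */
static uint32_t kvm_irq_line_field(uint8_t type, uint8_t vcpu, uint16_t num)
{
	return ((uint32_t)type << 24) | ((uint32_t)vcpu << 16) | num;
}
```

For example, the VGIC_V2 device (id 0) with the CPU-interface address type (1) encodes as 0x00000001, and an SPI (type 1) number 32 on vcpu 0 encodes as 0x01000020.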
* [PATCH 09/29] arm64: KVM: system register handling
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
0 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Provide 64bit system register handling, modeled after the cp15
handling for ARM.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/include/asm/kvm_coproc.h | 51 ++
arch/arm64/include/uapi/asm/kvm.h | 56 +++
arch/arm64/kvm/sys_regs.c | 962 ++++++++++++++++++++++++++++++++++++
arch/arm64/kvm/sys_regs.h | 141 ++++++
include/uapi/linux/kvm.h | 1 +
5 files changed, 1211 insertions(+)
create mode 100644 arch/arm64/include/asm/kvm_coproc.h
create mode 100644 arch/arm64/kvm/sys_regs.c
create mode 100644 arch/arm64/kvm/sys_regs.h
diff --git a/arch/arm64/include/asm/kvm_coproc.h b/arch/arm64/include/asm/kvm_coproc.h
new file mode 100644
index 0000000..e791894
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_coproc.h
@@ -0,0 +1,51 @@
+/*
+ * Copyright (C) 2012 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * Derived from arch/arm/include/asm/kvm_coproc.h
+ * Copyright (C) 2012 Rusty Russell IBM Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ARM64_KVM_COPROC_H__
+#define __ARM64_KVM_COPROC_H__
+
+#include <linux/kvm_host.h>
+
+void kvm_reset_sys_regs(struct kvm_vcpu *vcpu);
+
+struct kvm_sys_reg_table {
+ const struct sys_reg_desc *table;
+ size_t num;
+};
+
+struct kvm_sys_reg_target_table {
+ unsigned target;
+ struct kvm_sys_reg_table table64;
+};
+
+void kvm_register_target_sys_reg_table(struct kvm_sys_reg_target_table *table);
+
+int kvm_handle_sys_reg(struct kvm_vcpu *vcpu, struct kvm_run *run);
+
+#define kvm_coproc_table_init kvm_sys_reg_table_init
+void kvm_sys_reg_table_init(void);
+
+struct kvm_one_reg;
+int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
+int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *);
+int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *);
+unsigned long kvm_arm_num_sys_reg_descs(struct kvm_vcpu *vcpu);
+
+#endif /* __ARM64_KVM_COPROC_H__ */
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index f5525f1..fffeb11 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -87,6 +87,62 @@ struct kvm_sync_regs {
struct kvm_arch_memory_slot {
};
+/* If you need to interpret the index values, here is the key: */
+#define KVM_REG_ARM_COPROC_MASK 0x000000000FFF0000
+#define KVM_REG_ARM_COPROC_SHIFT 16
+#define KVM_REG_ARM_32_OPC2_MASK 0x0000000000000007
+#define KVM_REG_ARM_32_OPC2_SHIFT 0
+#define KVM_REG_ARM_OPC1_MASK 0x0000000000000078
+#define KVM_REG_ARM_OPC1_SHIFT 3
+#define KVM_REG_ARM_CRM_MASK 0x0000000000000780
+#define KVM_REG_ARM_CRM_SHIFT 7
+#define KVM_REG_ARM_32_CRN_MASK 0x0000000000007800
+#define KVM_REG_ARM_32_CRN_SHIFT 11
+
+/* Normal registers are mapped as coprocessor 16. */
+#define KVM_REG_ARM_CORE (0x0010 << KVM_REG_ARM_COPROC_SHIFT)
+#define KVM_REG_ARM_CORE_REG(name) (offsetof(struct kvm_regs, name) / sizeof(unsigned long))
+
+/* Some registers need more space to represent values. */
+#define KVM_REG_ARM_DEMUX (0x0011 << KVM_REG_ARM_COPROC_SHIFT)
+#define KVM_REG_ARM_DEMUX_ID_MASK 0x000000000000FF00
+#define KVM_REG_ARM_DEMUX_ID_SHIFT 8
+#define KVM_REG_ARM_DEMUX_ID_CCSIDR (0x00 << KVM_REG_ARM_DEMUX_ID_SHIFT)
+#define KVM_REG_ARM_DEMUX_VAL_MASK 0x00000000000000FF
+#define KVM_REG_ARM_DEMUX_VAL_SHIFT 0
+
+/* VFP registers: we could overload CP10 like ARM does, but that's ugly. */
+#define KVM_REG_ARM_VFP (0x0012 << KVM_REG_ARM_COPROC_SHIFT)
+#define KVM_REG_ARM_VFP_MASK 0x000000000000FFFF
+#define KVM_REG_ARM_VFP_BASE_REG 0x0
+#define KVM_REG_ARM_VFP_FPSID 0x1000
+#define KVM_REG_ARM_VFP_FPSCR 0x1001
+#define KVM_REG_ARM_VFP_MVFR1 0x1006
+#define KVM_REG_ARM_VFP_MVFR0 0x1007
+#define KVM_REG_ARM_VFP_FPEXC 0x1008
+#define KVM_REG_ARM_VFP_FPINST 0x1009
+#define KVM_REG_ARM_VFP_FPINST2 0x100A
+
+/* AArch64 system registers */
+#define KVM_REG_ARM64_SYSREG (0x0013 << KVM_REG_ARM_COPROC_SHIFT)
+#define KVM_REG_ARM64_SYSREG_OP0_MASK 0x000000000000c000
+#define KVM_REG_ARM64_SYSREG_OP0_SHIFT 14
+#define KVM_REG_ARM64_SYSREG_OP1_MASK 0x0000000000003800
+#define KVM_REG_ARM64_SYSREG_OP1_SHIFT 11
+#define KVM_REG_ARM64_SYSREG_CRN_MASK 0x0000000000000780
+#define KVM_REG_ARM64_SYSREG_CRN_SHIFT 7
+#define KVM_REG_ARM64_SYSREG_CRM_MASK 0x0000000000000078
+#define KVM_REG_ARM64_SYSREG_CRM_SHIFT 3
+#define KVM_REG_ARM64_SYSREG_OP2_MASK 0x0000000000000007
+#define KVM_REG_ARM64_SYSREG_OP2_SHIFT 0
+
+/* FP-SIMD registers */
+#define KVM_REG_ARM64_FP_SIMD (0x0014 << KVM_REG_ARM_COPROC_SHIFT)
+#define KVM_REG_ARM64_FP_SIMD_MASK 0x000000000000FFFF
+#define KVM_REG_ARM64_FP_SIMD_BASE_REG 0x0
+#define KVM_REG_ARM64_FP_SIMD_FPSR 0x1000
+#define KVM_REG_ARM64_FP_SIMD_FPCR 0x1001
+
/* KVM_IRQ_LINE irq field index values */
#define KVM_ARM_IRQ_TYPE_SHIFT 24
#define KVM_ARM_IRQ_TYPE_MASK 0xff
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
new file mode 100644
index 0000000..9fc8c17
--- /dev/null
+++ b/arch/arm64/kvm/sys_regs.c
@@ -0,0 +1,962 @@
+/*
+ * Copyright (C) 2012 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * Derived from arch/arm/kvm/coproc.c:
+ * Copyright (C) 2012 - Virtual Open Systems and Columbia University
+ * Authors: Rusty Russell <rusty@rustcorp.com.au>
+ * Christoffer Dall <c.dall@virtualopensystems.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/mm.h>
+#include <linux/kvm_host.h>
+#include <linux/uaccess.h>
+#include <asm/kvm_arm.h>
+#include <asm/kvm_host.h>
+#include <asm/kvm_emulate.h>
+#include <asm/kvm_coproc.h>
+#include <asm/cacheflush.h>
+#include <asm/cputype.h>
+#include <trace/events/kvm.h>
+
+#include "sys_regs.h"
+
+/*
+ * All of this file is extremely similar to the ARM coproc.c, but the
+ * types are different. My gut feeling is that it should be pretty
+ * easy to merge, but that would be an ABI breakage -- again. VFP
+ * would also need to be abstracted.
+ */
+
+/* 3 bits per cache level, as per CLIDR, but non-existent caches always 0 */
+static u32 cache_levels;
+
+/* CSSELR values; used to index KVM_REG_ARM_DEMUX_ID_CCSIDR */
+#define CSSELR_MAX 12
+
+/* Which cache CCSIDR represents depends on CSSELR value. */
+static u32 get_ccsidr(u32 csselr)
+{
+ u32 ccsidr;
+
+ /* Make sure no one else changes CSSELR during this! */
+ local_irq_disable();
+ /* Put value into CSSELR */
+ asm volatile("msr csselr_el1, %x0" : : "r" (csselr));
+ /* Read result out of CCSIDR */
+ asm volatile("mrs %0, ccsidr_el1" : "=r" (ccsidr));
+ local_irq_enable();
+
+ return ccsidr;
+}
+
+static void do_dc_cisw(u32 val)
+{
+ asm volatile("dc cisw, %x0" : : "r" (val));
+}
+
+static void do_dc_csw(u32 val)
+{
+ asm volatile("dc csw, %x0" : : "r" (val));
+}
+
+/* See note at ARM ARM B1.14.4 */
+static bool access_dcsw(struct kvm_vcpu *vcpu,
+ const struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ unsigned long val;
+ int cpu;
+
+ if (!p->is_write)
+ return read_from_write_only(vcpu, p);
+
+ cpu = get_cpu();
+
+ cpumask_setall(&vcpu->arch.require_dcache_flush);
+ cpumask_clear_cpu(cpu, &vcpu->arch.require_dcache_flush);
+
+ /* If we were already preempted, take the long way around */
+ if (cpu != vcpu->arch.last_pcpu) {
+ flush_cache_all();
+ goto done;
+ }
+
+ val = *vcpu_reg(vcpu, p->Rt);
+
+ switch (p->CRm) {
+ case 6: /* Upgrade DCISW to DCCISW, as per HCR.SWIO */
+ case 14: /* DCCISW */
+ do_dc_cisw(val);
+ break;
+
+ case 10: /* DCCSW */
+ do_dc_csw(val);
+ break;
+ }
+
+done:
+ put_cpu();
+
+ return true;
+}
+
+/*
+ * We could trap ID_DFR0 and tell the guest we don't support performance
+ * monitoring. Unfortunately the patch to make the kernel check ID_DFR0 was
+ * NAKed, so it will read the PMCR anyway.
+ *
+ * Therefore we tell the guest we have 0 counters. We must still
+ * support PMCCNTR (the cycle counter), so we just RAZ/WI all PM
+ * registers, which at least doesn't crash the guest kernel.
+ */
+static bool pm_fake(struct kvm_vcpu *vcpu,
+ const struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ if (p->is_write)
+ return ignore_write(vcpu, p);
+ else
+ return read_zero(vcpu, p);
+}
+
+static void reset_amair_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+ u64 amair;
+
+ asm volatile("mrs %0, amair_el1\n" : "=r" (amair));
+ vcpu->arch.sys_regs[AMAIR_EL1] = amair;
+}
+
+/*
+ * Architected system registers.
+ * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
+ */
+static const struct sys_reg_desc sys_reg_descs[] = {
+ /* DC ISW */
+ { Op0(0b01), Op1(0b000), CRn(0b0111), CRm(0b0110), Op2(0b010),
+ access_dcsw },
+ /* DC CSW */
+ { Op0(0b01), Op1(0b000), CRn(0b0111), CRm(0b1010), Op2(0b010),
+ access_dcsw },
+ /* DC CISW */
+ { Op0(0b01), Op1(0b000), CRn(0b0111), CRm(0b1110), Op2(0b010),
+ access_dcsw },
+
+ /* TTBR0_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b000),
+ NULL, reset_unknown, TTBR0_EL1 },
+ /* TTBR1_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b001),
+ NULL, reset_unknown, TTBR1_EL1 },
+ /* TCR_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b010),
+ NULL, reset_val, TCR_EL1, 0 },
+
+ /* AFSR0_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0001), Op2(0b000),
+ NULL, reset_unknown, AFSR0_EL1 },
+ /* AFSR1_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0001), Op2(0b001),
+ NULL, reset_unknown, AFSR1_EL1 },
+ /* ESR_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0010), Op2(0b000),
+ NULL, reset_unknown, ESR_EL1 },
+ /* FAR_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b0110), CRm(0b0000), Op2(0b000),
+ NULL, reset_unknown, FAR_EL1 },
+
+ /* PMINTENSET_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b1001), CRm(0b1110), Op2(0b001),
+ pm_fake },
+ /* PMINTENCLR_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b1001), CRm(0b1110), Op2(0b010),
+ pm_fake },
+
+ /* MAIR_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b1010), CRm(0b0010), Op2(0b000),
+ NULL, reset_unknown, MAIR_EL1 },
+ /* AMAIR_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b1010), CRm(0b0011), Op2(0b000),
+ NULL, reset_amair_el1, AMAIR_EL1 },
+
+ /* VBAR_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b1100), CRm(0b0000), Op2(0b000),
+ NULL, reset_val, VBAR_EL1, 0 },
+ /* CONTEXTIDR_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b1101), CRm(0b0000), Op2(0b001),
+ NULL, reset_val, CONTEXTIDR_EL1, 0 },
+ /* TPIDR_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b1101), CRm(0b0000), Op2(0b100),
+ NULL, reset_unknown, TPIDR_EL1 },
+
+ /* CNTKCTL_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b1110), CRm(0b0001), Op2(0b000),
 NULL, reset_val, CNTKCTL_EL1, 0 },
+
+ /* CSSELR_EL1 */
+ { Op0(0b11), Op1(0b010), CRn(0b0000), CRm(0b0000), Op2(0b000),
+ NULL, reset_unknown, CSSELR_EL1 },
+
+ /* PMCR_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b000),
+ pm_fake },
+ /* PMCNTENSET_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b001),
+ pm_fake },
+ /* PMCNTENCLR_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b010),
+ pm_fake },
+ /* PMOVSCLR_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b011),
+ pm_fake },
+ /* PMSWINC_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b100),
+ pm_fake },
+ /* PMSELR_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b101),
+ pm_fake },
+ /* PMCEID0_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b110),
+ pm_fake },
+ /* PMCEID1_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b111),
+ pm_fake },
+ /* PMCCNTR_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b000),
+ pm_fake },
+ /* PMXEVTYPER_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b001),
+ pm_fake },
+ /* PMXEVCNTR_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b010),
+ pm_fake },
+ /* PMUSERENR_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b000),
+ pm_fake },
+ /* PMOVSSET_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b011),
+ pm_fake },
+
+ /* TPIDR_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b010),
+ NULL, reset_unknown, TPIDR_EL0 },
+ /* TPIDRRO_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b011),
+ NULL, reset_unknown, TPIDRRO_EL0 },
+};
+
+/* Target specific emulation tables */
+static struct kvm_sys_reg_target_table *target_tables[KVM_ARM_NUM_TARGETS];
+
+void kvm_register_target_sys_reg_table(struct kvm_sys_reg_target_table *table)
+{
+ target_tables[table->target] = table;
+}
+
+/* Get specific register table for this target. */
+static const struct sys_reg_desc *get_target_table(unsigned target, size_t *num)
+{
+ struct kvm_sys_reg_target_table *table;
+
+ table = target_tables[target];
+ *num = table->table64.num;
+ return table->table64.table;
+}
+
+static const struct sys_reg_desc *find_reg(const struct sys_reg_params *params,
+ const struct sys_reg_desc table[],
+ unsigned int num)
+{
+ unsigned int i;
+
+ for (i = 0; i < num; i++) {
+ const struct sys_reg_desc *r = &table[i];
+
+ if (params->Op0 != r->Op0)
+ continue;
+ if (params->Op1 != r->Op1)
+ continue;
+ if (params->CRn != r->CRn)
+ continue;
+ if (params->CRm != r->CRm)
+ continue;
+ if (params->Op2 != r->Op2)
+ continue;
+
+ return r;
+ }
+ return NULL;
+}
+
+static int emulate_sys_reg(struct kvm_vcpu *vcpu,
+ const struct sys_reg_params *params)
+{
+ size_t num;
+ const struct sys_reg_desc *table, *r;
+
+ table = get_target_table(vcpu->arch.target, &num);
+
+ /* Search target-specific then generic table. */
+ r = find_reg(params, table, num);
+ if (!r)
+ r = find_reg(params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+
+ if (likely(r)) {
+ /* If we don't have an accessor, we should never get here! */
+ BUG_ON(!r->access);
+
+ if (likely(r->access(vcpu, params, r))) {
+ /* Skip instruction, since it was emulated */
+ kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
+ return 1;
+ }
+ /* If access function fails, it should complain. */
+ } else {
+ kvm_err("Unsupported guest sys_reg access at: %lx\n",
+ *vcpu_pc(vcpu));
+ print_sys_reg_instr(params);
+ }
+ kvm_inject_undefined(vcpu);
+ return 1;
+}
+
+static void reset_sys_reg_descs(struct kvm_vcpu *vcpu,
+ const struct sys_reg_desc *table, size_t num)
+{
+ unsigned long i;
+
+ for (i = 0; i < num; i++)
+ if (table[i].reset)
+ table[i].reset(vcpu, &table[i]);
+}
+
+/**
+ * kvm_handle_sys_reg -- handles an MRS/MSR trap on a guest sys_reg access
+ * @vcpu: The VCPU pointer
+ * @run: The kvm_run struct
+ */
+int kvm_handle_sys_reg(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+ struct sys_reg_params params;
+ unsigned long esr = kvm_vcpu_get_hsr(vcpu);
+
+ params.Op0 = (esr >> 20) & 3;
+ params.Op1 = (esr >> 14) & 0x7;
+ params.CRn = (esr >> 10) & 0xf;
+ params.CRm = (esr >> 1) & 0xf;
+ params.Op2 = (esr >> 17) & 0x7;
+ params.Rt = (esr >> 5) & 0x1f;
+ params.is_write = !(esr & 1);
+
+ return emulate_sys_reg(vcpu, &params);
+}
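The bit positions assumed by the shifts above can be exercised outside the kernel. The following is a host-side sketch (the struct and helper names are local stand-ins, not kernel API): it packs the trap-syndrome fields at the offsets the decoder reads from and round-trips them through the same shift/mask logic.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct sys_reg_params {
	uint8_t Op0, Op1, CRn, CRm, Op2, Rt;
	bool is_write;
};

/* Pack the ISS fields at the bit positions the decoder above reads:
 * Op0[21:20], Op2[19:17], Op1[16:14], CRn[13:10], Rt[9:5], CRm[4:1],
 * direction bit 0 (1 = read/MRS, 0 = write/MSR). */
static uint32_t encode_iss(uint8_t op0, uint8_t op2, uint8_t op1,
			   uint8_t crn, uint8_t rt, uint8_t crm, bool is_read)
{
	return ((uint32_t)op0 << 20) | ((uint32_t)op2 << 17) |
	       ((uint32_t)op1 << 14) | ((uint32_t)crn << 10) |
	       ((uint32_t)rt << 5) | ((uint32_t)crm << 1) | is_read;
}

/* Same field extraction as kvm_handle_sys_reg(). */
static void decode_iss(uint32_t esr, struct sys_reg_params *p)
{
	p->Op0 = (esr >> 20) & 3;
	p->Op1 = (esr >> 14) & 0x7;
	p->CRn = (esr >> 10) & 0xf;
	p->CRm = (esr >> 1) & 0xf;
	p->Op2 = (esr >> 17) & 0x7;
	p->Rt  = (esr >> 5) & 0x1f;
	p->is_write = !(esr & 1);
}
```

For example, a trapped `MSR TTBR0_EL1, x3` (Op0=3, Op1=0, CRn=2, CRm=0, Op2=0) decodes back to exactly those fields with `is_write` set.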
+
+/******************************************************************************
+ * Userspace API
+ *****************************************************************************/
+
+static bool index_to_params(u64 id, struct sys_reg_params *params)
+{
+ switch (id & KVM_REG_SIZE_MASK) {
+ case KVM_REG_SIZE_U64:
+ /* Any unused index bits means it's not valid. */
+ if (id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK
+ | KVM_REG_ARM_COPROC_MASK
+ | KVM_REG_ARM64_SYSREG_OP0_MASK
+ | KVM_REG_ARM64_SYSREG_OP1_MASK
+ | KVM_REG_ARM64_SYSREG_CRN_MASK
+ | KVM_REG_ARM64_SYSREG_CRM_MASK
+ | KVM_REG_ARM64_SYSREG_OP2_MASK))
+ return false;
+ params->Op0 = ((id & KVM_REG_ARM64_SYSREG_OP0_MASK)
+ >> KVM_REG_ARM64_SYSREG_OP0_SHIFT);
+ params->Op1 = ((id & KVM_REG_ARM64_SYSREG_OP1_MASK)
+ >> KVM_REG_ARM64_SYSREG_OP1_SHIFT);
+ params->CRn = ((id & KVM_REG_ARM64_SYSREG_CRN_MASK)
+ >> KVM_REG_ARM64_SYSREG_CRN_SHIFT);
+ params->CRm = ((id & KVM_REG_ARM64_SYSREG_CRM_MASK)
+ >> KVM_REG_ARM64_SYSREG_CRM_SHIFT);
+ params->Op2 = ((id & KVM_REG_ARM64_SYSREG_OP2_MASK)
+ >> KVM_REG_ARM64_SYSREG_OP2_SHIFT);
+ return true;
+ default:
+ return false;
+ }
+}
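The packing this function inverts is the one sys_reg_to_index() builds later in the file. Here is a small round-trip sketch; the OP0/OP1 shift values come from the uapi header in this series, while the CRn/CRm/Op2 shifts are assumed to continue the same contiguous packing (they are not all visible in this excerpt).

```c
#include <assert.h>
#include <stdint.h>

/* OP0/OP1 shifts match the uapi masks (0xc000 and 0x3800); the
 * remaining three are assumed to pack the fields contiguously. */
enum {
	OP0_SHIFT = 14,
	OP1_SHIFT = 11,
	CRN_SHIFT = 7,
	CRM_SHIFT = 3,
	OP2_SHIFT = 0,
};

/* Build the low 16 bits of a sysreg index, as sys_reg_to_index() does. */
static uint16_t encode_sysreg(uint8_t op0, uint8_t op1, uint8_t crn,
			      uint8_t crm, uint8_t op2)
{
	return (op0 << OP0_SHIFT) | (op1 << OP1_SHIFT) |
	       (crn << CRN_SHIFT) | (crm << CRM_SHIFT) |
	       (op2 << OP2_SHIFT);
}

/* Recover the fields, as index_to_params() does. */
static void decode_sysreg(uint16_t id, uint8_t out[5])
{
	out[0] = (id >> OP0_SHIFT) & 0x3;
	out[1] = (id >> OP1_SHIFT) & 0x7;
	out[2] = (id >> CRN_SHIFT) & 0xf;
	out[3] = (id >> CRM_SHIFT) & 0xf;
	out[4] = (id >> OP2_SHIFT) & 0x7;
}
```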
+
+/* Decode an index value, and find the sys_reg_desc entry. */
+static const struct sys_reg_desc *index_to_sys_reg_desc(struct kvm_vcpu *vcpu,
+ u64 id)
+{
+ size_t num;
+ const struct sys_reg_desc *table, *r;
+ struct sys_reg_params params;
+
+ /* We only do sys_reg for now. */
+ if ((id & KVM_REG_ARM_COPROC_MASK) != KVM_REG_ARM64_SYSREG)
+ return NULL;
+
+ if (!index_to_params(id, &params))
+ return NULL;
+
+ table = get_target_table(vcpu->arch.target, &num);
+ r = find_reg(&params, table, num);
+ if (!r)
+ r = find_reg(&params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+
+ /* Not saved in the sys_reg array? */
+ if (r && !r->reg)
+ r = NULL;
+
+ return r;
+}
+
+/*
+ * These are the invariant sys_reg registers: we let the guest see the
+ * host versions of these, so they're part of the guest state.
+ *
+ * A future CPU may provide a mechanism to present different values to
+ * the guest, or a future kvm may trap them.
+ */
+
+#define FUNCTION_INVARIANT(reg) \
+ static void get_##reg(struct kvm_vcpu *v, \
+ const struct sys_reg_desc *r) \
+ { \
+ u64 val; \
+ \
+ asm volatile("mrs %0, " __stringify(reg) "\n" \
+ : "=r" (val)); \
+ ((struct sys_reg_desc *)r)->val = val; \
+ }
+
+FUNCTION_INVARIANT(midr_el1)
+FUNCTION_INVARIANT(ctr_el0)
+FUNCTION_INVARIANT(revidr_el1)
+FUNCTION_INVARIANT(id_pfr0_el1)
+FUNCTION_INVARIANT(id_pfr1_el1)
+FUNCTION_INVARIANT(id_dfr0_el1)
+FUNCTION_INVARIANT(id_afr0_el1)
+FUNCTION_INVARIANT(id_mmfr0_el1)
+FUNCTION_INVARIANT(id_mmfr1_el1)
+FUNCTION_INVARIANT(id_mmfr2_el1)
+FUNCTION_INVARIANT(id_mmfr3_el1)
+FUNCTION_INVARIANT(id_isar0_el1)
+FUNCTION_INVARIANT(id_isar1_el1)
+FUNCTION_INVARIANT(id_isar2_el1)
+FUNCTION_INVARIANT(id_isar3_el1)
+FUNCTION_INVARIANT(id_isar4_el1)
+FUNCTION_INVARIANT(id_isar5_el1)
+FUNCTION_INVARIANT(clidr_el1)
+FUNCTION_INVARIANT(aidr_el1)
+
+/* ->val is filled in by kvm_invariant_sys_reg_table_init() */
+static struct sys_reg_desc invariant_sys_regs[] = {
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b000),
+ NULL, get_midr_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b110),
+ NULL, get_revidr_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
+ NULL, get_id_pfr0_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b001),
+ NULL, get_id_pfr1_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b010),
+ NULL, get_id_dfr0_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b011),
+ NULL, get_id_afr0_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b100),
+ NULL, get_id_mmfr0_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b101),
+ NULL, get_id_mmfr1_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b110),
+ NULL, get_id_mmfr2_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b111),
+ NULL, get_id_mmfr3_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
+ NULL, get_id_isar0_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b001),
+ NULL, get_id_isar1_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
+ NULL, get_id_isar2_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b011),
+ NULL, get_id_isar3_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b100),
+ NULL, get_id_isar4_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b101),
+ NULL, get_id_isar5_el1 },
+ { Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b001),
+ NULL, get_clidr_el1 },
+ { Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b111),
+ NULL, get_aidr_el1 },
+ { Op0(0b11), Op1(0b011), CRn(0b0000), CRm(0b0000), Op2(0b001),
+ NULL, get_ctr_el0 },
+};
+
+static int reg_from_user(void *val, const void __user *uaddr, u64 id)
+{
+ /* This Just Works because we are little endian. */
+ if (copy_from_user(val, uaddr, KVM_REG_SIZE(id)) != 0)
+ return -EFAULT;
+ return 0;
+}
+
+static int reg_to_user(void __user *uaddr, const void *val, u64 id)
+{
+ /* This Just Works because we are little endian. */
+ if (copy_to_user(uaddr, val, KVM_REG_SIZE(id)) != 0)
+ return -EFAULT;
+ return 0;
+}
+
+static int get_invariant_sys_reg(u64 id, void __user *uaddr)
+{
+ struct sys_reg_params params;
+ const struct sys_reg_desc *r;
+
+ if (!index_to_params(id, &params))
+ return -ENOENT;
+
+ r = find_reg(&params, invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs));
+ if (!r)
+ return -ENOENT;
+
+ return reg_to_user(uaddr, &r->val, id);
+}
+
+static int set_invariant_sys_reg(u64 id, void __user *uaddr)
+{
+ struct sys_reg_params params;
+ const struct sys_reg_desc *r;
+ int err;
+ u64 val = 0; /* Make sure high bits are 0 for 32-bit regs */
+
+ if (!index_to_params(id, &params))
+ return -ENOENT;
+ r = find_reg(&params, invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs));
+ if (!r)
+ return -ENOENT;
+
+ err = reg_from_user(&val, uaddr, id);
+ if (err)
+ return err;
+
+ /* This is what we mean by invariant: you can't change it. */
+ if (r->val != val)
+ return -EINVAL;
+
+ return 0;
+}
+
+static bool is_valid_cache(u32 val)
+{
+ u32 level, ctype;
+
+ if (val >= CSSELR_MAX)
+ return false;
+
+ /* Bottom bit is Instruction or Data bit. Next 3 bits are level. */
+ level = (val >> 1);
+ ctype = (cache_levels >> (level * 3)) & 7;
+
+ switch (ctype) {
+ case 0: /* No cache */
+ return false;
+ case 1: /* Instruction cache only */
+ return (val & 1);
+ case 2: /* Data cache only */
+ case 4: /* Unified cache */
+ return !(val & 1);
+ case 3: /* Separate instruction and data caches */
+ return true;
+ default: /* Reserved: we can't know instruction or data. */
+ return false;
+ }
+}
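As a worked example of the decision table above (a user-space sketch with `cache_levels` passed as a parameter): a CPU with separate L1 I/D caches and a unified L2 has Ctype1 = 0b011 and Ctype2 = 0b100, so CSSELR values 0 (L1 D), 1 (L1 I) and 2 (L2) are valid, while 3 (an I-side view of the unified L2) is not.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Mirror of is_valid_cache(), minus the static cache_levels. */
static bool valid_cache(uint32_t cache_levels, uint32_t csselr)
{
	uint32_t level = csselr >> 1;		/* next 3 bits: cache level */
	uint32_t ctype = (cache_levels >> (level * 3)) & 7;

	switch (ctype) {
	case 0:					/* no cache at this level */
		return false;
	case 1:					/* instruction cache only */
		return csselr & 1;
	case 2:					/* data cache only */
	case 4:					/* unified cache */
		return !(csselr & 1);
	case 3:					/* separate I and D caches */
		return true;
	default:				/* reserved encoding */
		return false;
	}
}
```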
+
+static int demux_c15_get(u64 id, void __user *uaddr)
+{
+ u32 val;
+ u32 __user *uval = uaddr;
+
+ /* Fail if we have unknown bits set. */
+ if (id & ~(KVM_REG_ARCH_MASK|KVM_REG_SIZE_MASK|KVM_REG_ARM_COPROC_MASK
+ | ((1 << KVM_REG_ARM_COPROC_SHIFT)-1)))
+ return -ENOENT;
+
+ switch (id & KVM_REG_ARM_DEMUX_ID_MASK) {
+ case KVM_REG_ARM_DEMUX_ID_CCSIDR:
+ if (KVM_REG_SIZE(id) != 4)
+ return -ENOENT;
+ val = (id & KVM_REG_ARM_DEMUX_VAL_MASK)
+ >> KVM_REG_ARM_DEMUX_VAL_SHIFT;
+ if (!is_valid_cache(val))
+ return -ENOENT;
+
+ return put_user(get_ccsidr(val), uval);
+ default:
+ return -ENOENT;
+ }
+}
+
+static int demux_c15_set(u64 id, void __user *uaddr)
+{
+ u32 val, newval;
+ u32 __user *uval = uaddr;
+
+ /* Fail if we have unknown bits set. */
+ if (id & ~(KVM_REG_ARCH_MASK|KVM_REG_SIZE_MASK|KVM_REG_ARM_COPROC_MASK
+ | ((1 << KVM_REG_ARM_COPROC_SHIFT)-1)))
+ return -ENOENT;
+
+ switch (id & KVM_REG_ARM_DEMUX_ID_MASK) {
+ case KVM_REG_ARM_DEMUX_ID_CCSIDR:
+ if (KVM_REG_SIZE(id) != 4)
+ return -ENOENT;
+ val = (id & KVM_REG_ARM_DEMUX_VAL_MASK)
+ >> KVM_REG_ARM_DEMUX_VAL_SHIFT;
+ if (!is_valid_cache(val))
+ return -ENOENT;
+
+ if (get_user(newval, uval))
+ return -EFAULT;
+
+ /* This is also invariant: you can't change it. */
+ if (newval != get_ccsidr(val))
+ return -EINVAL;
+ return 0;
+ default:
+ return -ENOENT;
+ }
+}
+
+static const int fpsimd_sysregs[] = {
+ KVM_REG_ARM64_FP_SIMD_FPSR,
+ KVM_REG_ARM64_FP_SIMD_FPCR,
+};
+
+static int num_fpsimd_data_regs(void)
+{
+ return 32;
+}
+
+static int num_fpsimd_regs(void)
+{
+ return num_fpsimd_data_regs() + ARRAY_SIZE(fpsimd_sysregs);
+}
+
+static int copy_fpsimd_regids(u64 __user *uindices)
+{
+ unsigned int i;
+ const u64 u32reg = KVM_REG_ARM64 | KVM_REG_SIZE_U32 | KVM_REG_ARM64_FP_SIMD;
+ const u64 u128reg = KVM_REG_ARM64 | KVM_REG_SIZE_U128 | KVM_REG_ARM64_FP_SIMD;
+
+ for (i = 0; i < num_fpsimd_data_regs(); i++) {
+ if (put_user((u128reg | KVM_REG_ARM64_FP_SIMD_BASE_REG) + i,
+ uindices))
+ return -EFAULT;
+ uindices++;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(fpsimd_sysregs); i++) {
+ if (put_user(u32reg | fpsimd_sysregs[i], uindices))
+ return -EFAULT;
+ uindices++;
+ }
+
+ return num_fpsimd_regs();
+}
+
+static int fpsimd_get_reg(const struct kvm_vcpu *vcpu, u64 id, void __user *uaddr)
+{
+ u32 regid = (id & KVM_REG_ARM64_FP_SIMD_MASK);
+
+ /* Fail if we have unknown bits set. */
+ if (id & ~(KVM_REG_ARCH_MASK|KVM_REG_SIZE_MASK|KVM_REG_ARM_COPROC_MASK
+ | ((1 << KVM_REG_ARM_COPROC_SHIFT)-1)))
+ return -ENOENT;
+
+ if (regid < num_fpsimd_data_regs()) {
+ if (KVM_REG_SIZE(id) != 16)
+ return -ENOENT;
+ return reg_to_user(uaddr, &vcpu->arch.vfp_guest.vregs[regid],
+ id);
+ }
+
+ /* FP control registers are all 32 bit. */
+ if (KVM_REG_SIZE(id) != 4)
+ return -ENOENT;
+
+ switch (regid) {
+ case KVM_REG_ARM64_FP_SIMD_FPSR:
+ return reg_to_user(uaddr, &vcpu->arch.vfp_guest.fpsr, id);
+ case KVM_REG_ARM64_FP_SIMD_FPCR:
+ return reg_to_user(uaddr, &vcpu->arch.vfp_guest.fpcr, id);
+ default:
+ return -ENOENT;
+ }
+}
+
+static int fpsimd_set_reg(struct kvm_vcpu *vcpu, u64 id, const void __user *uaddr)
+{
+ u32 regid = (id & KVM_REG_ARM64_FP_SIMD_MASK);
+
+ /* Fail if we have unknown bits set. */
+ if (id & ~(KVM_REG_ARCH_MASK|KVM_REG_SIZE_MASK|KVM_REG_ARM_COPROC_MASK
+ | ((1 << KVM_REG_ARM_COPROC_SHIFT)-1)))
+ return -ENOENT;
+
+ if (regid < num_fpsimd_data_regs()) {
+ if (KVM_REG_SIZE(id) != 16)
+ return -ENOENT;
+ return reg_from_user(&vcpu->arch.vfp_guest.vregs[regid],
+ uaddr, id);
+ }
+
+ /* FP control registers are all 32 bit. */
+ if (KVM_REG_SIZE(id) != 4)
+ return -ENOENT;
+
+ switch (regid) {
+ case KVM_REG_ARM64_FP_SIMD_FPSR:
+ return reg_from_user(&vcpu->arch.vfp_guest.fpsr, uaddr, id);
+ case KVM_REG_ARM64_FP_SIMD_FPCR:
+ return reg_from_user(&vcpu->arch.vfp_guest.fpcr, uaddr, id);
+ default:
+ return -ENOENT;
+ }
+}
+
+int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+ const struct sys_reg_desc *r;
+ void __user *uaddr = (void __user *)(unsigned long)reg->addr;
+
+ if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_DEMUX)
+ return demux_c15_get(reg->id, uaddr);
+
+ if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM64_FP_SIMD)
+ return fpsimd_get_reg(vcpu, reg->id, uaddr);
+
+ r = index_to_sys_reg_desc(vcpu, reg->id);
+ if (!r)
+ return get_invariant_sys_reg(reg->id, uaddr);
+
+ /* Note: copies two regs if size is 64 bit. */
+ return reg_to_user(uaddr, &vcpu->arch.sys_regs[r->reg], reg->id);
+}
+
+int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+ const struct sys_reg_desc *r;
+ void __user *uaddr = (void __user *)(unsigned long)reg->addr;
+
+ if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_DEMUX)
+ return demux_c15_set(reg->id, uaddr);
+
+ if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM64_FP_SIMD)
+ return fpsimd_set_reg(vcpu, reg->id, uaddr);
+
+ r = index_to_sys_reg_desc(vcpu, reg->id);
+ if (!r)
+ return set_invariant_sys_reg(reg->id, uaddr);
+
+ /* Note: copies two regs if size is 64 bit */
+ return reg_from_user(&vcpu->arch.sys_regs[r->reg], uaddr, reg->id);
+}
+
+static unsigned int num_demux_regs(void)
+{
+ unsigned int i, count = 0;
+
+ for (i = 0; i < CSSELR_MAX; i++)
+ if (is_valid_cache(i))
+ count++;
+
+ return count;
+}
+
+static int write_demux_regids(u64 __user *uindices)
+{
+ u64 val = KVM_REG_ARM | KVM_REG_SIZE_U32 | KVM_REG_ARM_DEMUX;
+ unsigned int i;
+
+ val |= KVM_REG_ARM_DEMUX_ID_CCSIDR;
+ for (i = 0; i < CSSELR_MAX; i++) {
+ if (!is_valid_cache(i))
+ continue;
+ if (put_user(val | i, uindices))
+ return -EFAULT;
+ uindices++;
+ }
+ return 0;
+}
+
+static u64 sys_reg_to_index(const struct sys_reg_desc *reg)
+{
+ return (KVM_REG_ARM64 | KVM_REG_SIZE_U64 |
+ KVM_REG_ARM64_SYSREG |
+ (reg->Op0 << KVM_REG_ARM64_SYSREG_OP0_SHIFT) |
+ (reg->Op1 << KVM_REG_ARM64_SYSREG_OP1_SHIFT) |
+ (reg->CRn << KVM_REG_ARM64_SYSREG_CRN_SHIFT) |
+ (reg->CRm << KVM_REG_ARM64_SYSREG_CRM_SHIFT) |
+ (reg->Op2 << KVM_REG_ARM64_SYSREG_OP2_SHIFT));
+}
+
+static bool copy_reg_to_user(const struct sys_reg_desc *reg, u64 __user **uind)
+{
+ if (!*uind)
+ return true;
+
+ if (put_user(sys_reg_to_index(reg), *uind))
+ return false;
+
+ (*uind)++;
+ return true;
+}
+
+/* Assumed ordered tables, see kvm_sys_reg_table_init. */
+static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind)
+{
+ const struct sys_reg_desc *i1, *i2, *end1, *end2;
+ unsigned int total = 0;
+ size_t num;
+
+ /* We check for duplicates here, to allow arch-specific overrides. */
+ i1 = get_target_table(vcpu->arch.target, &num);
+ end1 = i1 + num;
+ i2 = sys_reg_descs;
+ end2 = sys_reg_descs + ARRAY_SIZE(sys_reg_descs);
+
+ BUG_ON(i1 == end1 || i2 == end2);
+
+ /* Walk carefully, as both tables may refer to the same register. */
+ while (i1 || i2) {
+ int cmp = cmp_sys_reg(i1, i2);
+ /* target-specific overrides generic entry. */
+ if (cmp <= 0) {
+ /* Ignore registers we trap but don't save. */
+ if (i1->reg) {
+ if (!copy_reg_to_user(i1, &uind))
+ return -EFAULT;
+ total++;
+ }
+ } else {
+ /* Ignore registers we trap but don't save. */
+ if (i2->reg) {
+ if (!copy_reg_to_user(i2, &uind))
+ return -EFAULT;
+ total++;
+ }
+ }
+
+ if (cmp <= 0 && ++i1 == end1)
+ i1 = NULL;
+ if (cmp >= 0 && ++i2 == end2)
+ i2 = NULL;
+ }
+ return total;
+}
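The walk above is a standard merge of two sorted sequences in which the target-specific table wins ties against the generic one. A minimal sketch with integer keys (the struct and names are made up for illustration, and array indices replace the NULL-terminated pointer walk):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct entry { int key; const char *src; };

/* Merge two key-sorted tables; on equal keys, emit only the entry
 * from t1 (the "target-specific" table), mirroring walk_sys_regs(). */
static size_t merge_walk(const struct entry *t1, size_t n1,
			 const struct entry *t2, size_t n2,
			 const struct entry **out)
{
	size_t i = 0, j = 0, total = 0;

	while (i < n1 || j < n2) {
		int cmp;

		if (i == n1)
			cmp = 1;	/* t1 exhausted: take from t2 */
		else if (j == n2)
			cmp = -1;	/* t2 exhausted: take from t1 */
		else
			cmp = t1[i].key - t2[j].key;

		out[total++] = (cmp <= 0) ? &t1[i] : &t2[j];

		if (cmp <= 0)
			i++;
		if (cmp >= 0)
			j++;	/* on a tie, also skip the shadowed entry */
	}
	return total;
}
```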
+
+unsigned long kvm_arm_num_sys_reg_descs(struct kvm_vcpu *vcpu)
+{
+ return ARRAY_SIZE(invariant_sys_regs)
+ + num_demux_regs()
+ + num_fpsimd_regs()
+ + walk_sys_regs(vcpu, (u64 __user *)NULL);
+}
+
+int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
+{
+ unsigned int i;
+ int err;
+
+ /* Then give them all the invariant registers' indices. */
+ for (i = 0; i < ARRAY_SIZE(invariant_sys_regs); i++) {
+ if (put_user(sys_reg_to_index(&invariant_sys_regs[i]), uindices))
+ return -EFAULT;
+ uindices++;
+ }
+
+ err = walk_sys_regs(vcpu, uindices);
+ if (err < 0)
+ return err;
+ uindices += err;
+
+ err = copy_fpsimd_regids(uindices);
+ if (err < 0)
+ return err;
+ uindices += err;
+
+ return write_demux_regids(uindices);
+}
+
+void kvm_sys_reg_table_init(void)
+{
+ unsigned int i;
+ struct sys_reg_desc clidr;
+
+ /* Make sure tables are unique and in order. */
+ for (i = 1; i < ARRAY_SIZE(sys_reg_descs); i++)
+ BUG_ON(cmp_sys_reg(&sys_reg_descs[i-1], &sys_reg_descs[i]) >= 0);
+
+ /* We abuse the reset function to overwrite the table itself. */
+ for (i = 0; i < ARRAY_SIZE(invariant_sys_regs); i++)
+ invariant_sys_regs[i].reset(NULL, &invariant_sys_regs[i]);
+
+ /*
+ * CLIDR format is awkward, so clean it up. See ARM B4.1.20:
+ *
+ * If software reads the Cache Type fields from Ctype1
+ * upwards, once it has seen a value of 0b000, no caches
+ * exist at further-out levels of the hierarchy. So, for
+ * example, if Ctype3 is the first Cache Type field with a
+ * value of 0b000, the values of Ctype4 to Ctype7 must be
+ * ignored.
+ */
+ get_clidr_el1(NULL, &clidr); /* Ugly... */
+ cache_levels = clidr.val;
+ for (i = 0; i < 7; i++)
+ if (((cache_levels >> (i*3)) & 7) == 0)
+ break;
+ /* Clear all higher bits. */
+ cache_levels &= (1 << (i*3))-1;
+}
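The truncation loop at the end can be checked in isolation. For example, with Ctype fields {Ctype1 = 0b011, Ctype2 = 0b100, Ctype3 = 0b000, Ctype4 = 0b010} — the value beyond the first zero being exactly the junk the architecture tells us to ignore — everything from Ctype3 up is cleared:

```c
#include <assert.h>
#include <stdint.h>

/* Clear every Ctype field at and above the first 0b000, as the
 * loop at the end of kvm_sys_reg_table_init() does. */
static uint32_t trim_cache_levels(uint32_t clidr)
{
	unsigned int i;

	for (i = 0; i < 7; i++)
		if (((clidr >> (i * 3)) & 7) == 0)
			break;
	return clidr & ((1u << (i * 3)) - 1);
}
```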
+
+/**
+ * kvm_reset_sys_regs - sets system registers to reset value
+ * @vcpu: The VCPU pointer
+ *
+ * This function finds the right table above and sets the registers on the
+ * virtual CPU struct to their architecturally defined reset values.
+ */
+void kvm_reset_sys_regs(struct kvm_vcpu *vcpu)
+{
+ size_t num;
+ const struct sys_reg_desc *table;
+
+ /* Catch someone adding a register without putting in a reset entry. */
+ memset(vcpu->arch.sys_regs, 0x42, sizeof(vcpu->arch.sys_regs));
+
+ /* Generic chip reset first (so target could override). */
+ reset_sys_reg_descs(vcpu, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+
+ table = get_target_table(vcpu->arch.target, &num);
+ reset_sys_reg_descs(vcpu, table, num);
+
+ for (num = 1; num < NR_SYS_REGS; num++)
+ if (vcpu->arch.sys_regs[num] == 0x42424242)
+ panic("Didn't reset vcpu->arch.sys_regs[%zi]", num);
+}
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
new file mode 100644
index 0000000..c0ac420
--- /dev/null
+++ b/arch/arm64/kvm/sys_regs.h
@@ -0,0 +1,141 @@
+/*
+ * Copyright (C) 2012 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * Derived from arch/arm/kvm/coproc.h
+ * Copyright (C) 2012 - Virtual Open Systems and Columbia University
+ * Authors: Christoffer Dall <c.dall@virtualopensystems.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ARM64_KVM_SYS_REGS_LOCAL_H__
+#define __ARM64_KVM_SYS_REGS_LOCAL_H__
+
+struct sys_reg_params {
+ u8 Op0;
+ u8 Op1;
+ u8 CRn;
+ u8 CRm;
+ u8 Op2;
+ u8 Rt;
+ bool is_write;
+};
+
+struct sys_reg_desc {
+ /* MRS/MSR instruction which accesses it. */
+ u8 Op0;
+ u8 Op1;
+ u8 CRn;
+ u8 CRm;
+ u8 Op2;
+
+ /* Trapped access from guest, if non-NULL. */
+ bool (*access)(struct kvm_vcpu *,
+ const struct sys_reg_params *,
+ const struct sys_reg_desc *);
+
+ /* Initialization for vcpu. */
+ void (*reset)(struct kvm_vcpu *, const struct sys_reg_desc *);
+
+ /*
+ * Index into vcpu->arch.sys_regs[], or 0 if we don't need to
+ * save it.
+ */
+ int reg;
+
+ /* Value (usually reset value) */
+ u64 val;
+};
+
+static inline void print_sys_reg_instr(const struct sys_reg_params *p)
+{
+ /* Look, we even formatted it for you to paste into the table! */
+ kvm_pr_unimpl(" { Op0(%2u), Op1(%2u), CRn(%2u), CRm(%2u), Op2(%2u), func_%s },\n",
+ p->Op0, p->Op1, p->CRn, p->CRm, p->Op2, p->is_write ? "write" : "read");
+}
+
+static inline bool ignore_write(struct kvm_vcpu *vcpu,
+ const struct sys_reg_params *p)
+{
+ return true;
+}
+
+static inline bool read_zero(struct kvm_vcpu *vcpu,
+ const struct sys_reg_params *p)
+{
+ *vcpu_reg(vcpu, p->Rt) = 0;
+ return true;
+}
+
+static inline bool write_to_read_only(struct kvm_vcpu *vcpu,
+ const struct sys_reg_params *params)
+{
+ kvm_debug("sys_reg write to read-only register at: %lx\n",
+ *vcpu_pc(vcpu));
+ print_sys_reg_instr(params);
+ return false;
+}
+
+static inline bool read_from_write_only(struct kvm_vcpu *vcpu,
+ const struct sys_reg_params *params)
+{
+ kvm_debug("sys_reg read from write-only register at: %lx\n",
+ *vcpu_pc(vcpu));
+ print_sys_reg_instr(params);
+ return false;
+}
+
+/* Reset functions */
+static inline void reset_unknown(struct kvm_vcpu *vcpu,
+ const struct sys_reg_desc *r)
+{
+ BUG_ON(!r->reg);
+ BUG_ON(r->reg >= ARRAY_SIZE(vcpu->arch.sys_regs));
+ vcpu->arch.sys_regs[r->reg] = 0x1de7ec7edbadc0de;
+}
+
+static inline void reset_val(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+ BUG_ON(!r->reg);
+ BUG_ON(r->reg >= ARRAY_SIZE(vcpu->arch.sys_regs));
+ vcpu->arch.sys_regs[r->reg] = r->val;
+}
+
+static inline int cmp_sys_reg(const struct sys_reg_desc *i1,
+ const struct sys_reg_desc *i2)
+{
+ BUG_ON(i1 == i2);
+ if (!i1)
+ return 1;
+ else if (!i2)
+ return -1;
+ if (i1->Op0 != i2->Op0)
+ return i1->Op0 - i2->Op0;
+ if (i1->Op1 != i2->Op1)
+ return i1->Op1 - i2->Op1;
+ if (i1->CRn != i2->CRn)
+ return i1->CRn - i2->CRn;
+ if (i1->CRm != i2->CRm)
+ return i1->CRm - i2->CRm;
+ return i1->Op2 - i2->Op2;
+}
+
+
+#define Op0(_x) .Op0 = _x
+#define Op1(_x) .Op1 = _x
+#define CRn(_x) .CRn = _x
+#define CRm(_x) .CRm = _x
+#define Op2(_x) .Op2 = _x
+
+#endif /* __ARM64_KVM_SYS_REGS_LOCAL_H__ */
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 3c56ba3..2bf42b0 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -782,6 +782,7 @@ struct kvm_dirty_tlb {
#define KVM_REG_IA64 0x3000000000000000ULL
#define KVM_REG_ARM 0x4000000000000000ULL
#define KVM_REG_S390 0x5000000000000000ULL
+#define KVM_REG_ARM64 0x6000000000000000ULL
#define KVM_REG_SIZE_SHIFT 52
#define KVM_REG_SIZE_MASK 0x00f0000000000000ULL
--
1.7.12.4
* [PATCH 09/29] arm64: KVM: system register handling
@ 2013-03-05 3:47 ` Marc Zyngier
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel
Provide 64bit system register handling, modeled after the cp15
handling for ARM.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/include/asm/kvm_coproc.h | 51 ++
arch/arm64/include/uapi/asm/kvm.h | 56 +++
arch/arm64/kvm/sys_regs.c | 962 ++++++++++++++++++++++++++++++++++++
arch/arm64/kvm/sys_regs.h | 141 ++++++
include/uapi/linux/kvm.h | 1 +
5 files changed, 1211 insertions(+)
create mode 100644 arch/arm64/include/asm/kvm_coproc.h
create mode 100644 arch/arm64/kvm/sys_regs.c
create mode 100644 arch/arm64/kvm/sys_regs.h
diff --git a/arch/arm64/include/asm/kvm_coproc.h b/arch/arm64/include/asm/kvm_coproc.h
new file mode 100644
index 0000000..e791894
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_coproc.h
@@ -0,0 +1,51 @@
+/*
+ * Copyright (C) 2012 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * Derived from arch/arm/include/asm/kvm_coproc.h
+ * Copyright (C) 2012 Rusty Russell IBM Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ARM64_KVM_COPROC_H__
+#define __ARM64_KVM_COPROC_H__
+
+#include <linux/kvm_host.h>
+
+void kvm_reset_sys_regs(struct kvm_vcpu *vcpu);
+
+struct kvm_sys_reg_table {
+ const struct sys_reg_desc *table;
+ size_t num;
+};
+
+struct kvm_sys_reg_target_table {
+ unsigned target;
+ struct kvm_sys_reg_table table64;
+};
+
+void kvm_register_target_sys_reg_table(struct kvm_sys_reg_target_table *table);
+
+int kvm_handle_sys_reg(struct kvm_vcpu *vcpu, struct kvm_run *run);
+
+#define kvm_coproc_table_init kvm_sys_reg_table_init
+void kvm_sys_reg_table_init(void);
+
+struct kvm_one_reg;
+int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
+int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *);
+int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *);
+unsigned long kvm_arm_num_sys_reg_descs(struct kvm_vcpu *vcpu);
+
+#endif /* __ARM64_KVM_COPROC_H__ */
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index f5525f1..fffeb11 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -87,6 +87,62 @@ struct kvm_sync_regs {
struct kvm_arch_memory_slot {
};
+/* If you need to interpret the index values, here is the key: */
+#define KVM_REG_ARM_COPROC_MASK 0x000000000FFF0000
+#define KVM_REG_ARM_COPROC_SHIFT 16
+#define KVM_REG_ARM_32_OPC2_MASK 0x0000000000000007
+#define KVM_REG_ARM_32_OPC2_SHIFT 0
+#define KVM_REG_ARM_OPC1_MASK 0x0000000000000078
+#define KVM_REG_ARM_OPC1_SHIFT 3
+#define KVM_REG_ARM_CRM_MASK 0x0000000000000780
+#define KVM_REG_ARM_CRM_SHIFT 7
+#define KVM_REG_ARM_32_CRN_MASK 0x0000000000007800
+#define KVM_REG_ARM_32_CRN_SHIFT 11
+
+/* Normal registers are mapped as coprocessor 16. */
+#define KVM_REG_ARM_CORE (0x0010 << KVM_REG_ARM_COPROC_SHIFT)
+#define KVM_REG_ARM_CORE_REG(name) (offsetof(struct kvm_regs, name) / sizeof(unsigned long))
+
+/* Some registers need more space to represent values. */
+#define KVM_REG_ARM_DEMUX (0x0011 << KVM_REG_ARM_COPROC_SHIFT)
+#define KVM_REG_ARM_DEMUX_ID_MASK 0x000000000000FF00
+#define KVM_REG_ARM_DEMUX_ID_SHIFT 8
+#define KVM_REG_ARM_DEMUX_ID_CCSIDR (0x00 << KVM_REG_ARM_DEMUX_ID_SHIFT)
+#define KVM_REG_ARM_DEMUX_VAL_MASK 0x00000000000000FF
+#define KVM_REG_ARM_DEMUX_VAL_SHIFT 0
+
+/* VFP registers: we could overload CP10 like ARM does, but that's ugly. */
+#define KVM_REG_ARM_VFP (0x0012 << KVM_REG_ARM_COPROC_SHIFT)
+#define KVM_REG_ARM_VFP_MASK 0x000000000000FFFF
+#define KVM_REG_ARM_VFP_BASE_REG 0x0
+#define KVM_REG_ARM_VFP_FPSID 0x1000
+#define KVM_REG_ARM_VFP_FPSCR 0x1001
+#define KVM_REG_ARM_VFP_MVFR1 0x1006
+#define KVM_REG_ARM_VFP_MVFR0 0x1007
+#define KVM_REG_ARM_VFP_FPEXC 0x1008
+#define KVM_REG_ARM_VFP_FPINST 0x1009
+#define KVM_REG_ARM_VFP_FPINST2 0x100A
+
+/* AArch64 system registers */
+#define KVM_REG_ARM64_SYSREG (0x0013 << KVM_REG_ARM_COPROC_SHIFT)
+#define KVM_REG_ARM64_SYSREG_OP0_MASK 0x000000000000c000
+#define KVM_REG_ARM64_SYSREG_OP0_SHIFT 14
+#define KVM_REG_ARM64_SYSREG_OP1_MASK 0x0000000000003800
+#define KVM_REG_ARM64_SYSREG_OP1_SHIFT 11
+#define KVM_REG_ARM64_SYSREG_CRN_MASK 0x0000000000000780
+#define KVM_REG_ARM64_SYSREG_CRN_SHIFT 7
+#define KVM_REG_ARM64_SYSREG_CRM_MASK 0x0000000000000078
+#define KVM_REG_ARM64_SYSREG_CRM_SHIFT 3
+#define KVM_REG_ARM64_SYSREG_OP2_MASK 0x0000000000000007
+#define KVM_REG_ARM64_SYSREG_OP2_SHIFT 0
+
+/* FP-SIMD registers */
+#define KVM_REG_ARM64_FP_SIMD (0x0014 << KVM_REG_ARM_COPROC_SHIFT)
+#define KVM_REG_ARM64_FP_SIMD_MASK 0x000000000000FFFF
+#define KVM_REG_ARM64_FP_SIMD_BASE_REG 0x0
+#define KVM_REG_ARM64_FP_SIMD_FPSR 0x1000
+#define KVM_REG_ARM64_FP_SIMD_FPCR 0x1001
+
/* KVM_IRQ_LINE irq field index values */
#define KVM_ARM_IRQ_TYPE_SHIFT 24
#define KVM_ARM_IRQ_TYPE_MASK 0xff
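As a reading aid for the new ABI: a userspace index for an AArch64 system register is just the (Op0, Op1, CRn, CRm, Op2) tuple packed into the low bits with the shifts above, OR-ed with the architecture, size and coproc fields. A minimal sketch of the encoding follows; the KVM_REG_ARM64 and KVM_REG_SIZE_U64 values are taken from the generic include/uapi/linux/kvm.h, not from this hunk, and the helper name is made up for illustration:

```c
#include <stdint.h>

/* From the generic uapi header (assumed, not part of this patch). */
#define KVM_REG_ARM64           0x6000000000000000ULL
#define KVM_REG_SIZE_U64        0x0030000000000000ULL
/* From the hunk above: coprocessor 0x0013, shifted by 16. */
#define KVM_REG_ARM64_SYSREG    (0x0013ULL << 16)

/* Pack an Op0/Op1/CRn/CRm/Op2 tuple into a KVM_GET/SET_ONE_REG index. */
static uint64_t arm64_sysreg_index(uint64_t op0, uint64_t op1, uint64_t crn,
				   uint64_t crm, uint64_t op2)
{
	return KVM_REG_ARM64 | KVM_REG_SIZE_U64 | KVM_REG_ARM64_SYSREG |
	       (op0 << 14) | (op1 << 11) | (crn << 7) | (crm << 3) | op2;
}
```

For example, MAIR_EL1 (Op0=3, Op1=0, CRn=10, CRm=2, Op2=0, per the table in sys_regs.c) packs to 0x603000000013C510.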
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
new file mode 100644
index 0000000..9fc8c17
--- /dev/null
+++ b/arch/arm64/kvm/sys_regs.c
@@ -0,0 +1,962 @@
+/*
+ * Copyright (C) 2012 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * Derived from arch/arm/kvm/coproc.c:
+ * Copyright (C) 2012 - Virtual Open Systems and Columbia University
+ * Authors: Rusty Russell <rusty@rustcorp.com.au>
+ * Christoffer Dall <c.dall@virtualopensystems.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/mm.h>
+#include <linux/kvm_host.h>
+#include <linux/uaccess.h>
+#include <asm/kvm_arm.h>
+#include <asm/kvm_host.h>
+#include <asm/kvm_emulate.h>
+#include <asm/kvm_coproc.h>
+#include <asm/cacheflush.h>
+#include <asm/cputype.h>
+#include <trace/events/kvm.h>
+
+#include "sys_regs.h"
+
+/*
+ * All of this file is extremely similar to the ARM coproc.c, but the
+ * types are different. My gut feeling is that it should be pretty
+ * easy to merge, but that would be an ABI breakage -- again. VFP
+ * would also need to be abstracted.
+ */
+
+/* 3 bits per cache level, as per CLIDR, but non-existent caches always 0 */
+static u32 cache_levels;
+
+/* CSSELR values; used to index KVM_REG_ARM_DEMUX_ID_CCSIDR */
+#define CSSELR_MAX 12
+
+/* Which cache CCSIDR represents depends on CSSELR value. */
+static u32 get_ccsidr(u32 csselr)
+{
+ u32 ccsidr;
+
+ /* Make sure no one else changes CSSELR during this! */
+ local_irq_disable();
+ /* Put value into CSSELR */
+ asm volatile("msr csselr_el1, %x0" : : "r" (csselr));
+ /* Read result out of CCSIDR */
+ asm volatile("mrs %0, ccsidr_el1" : "=r" (ccsidr));
+ local_irq_enable();
+
+ return ccsidr;
+}
+
+static void do_dc_cisw(u32 val)
+{
+ asm volatile("dc cisw, %x0" : : "r" (val));
+}
+
+static void do_dc_csw(u32 val)
+{
+ asm volatile("dc csw, %x0" : : "r" (val));
+}
+
+/* See note at ARM ARM B1.14.4 */
+static bool access_dcsw(struct kvm_vcpu *vcpu,
+ const struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ unsigned long val;
+ int cpu;
+
+ if (!p->is_write)
+ return read_from_write_only(vcpu, p);
+
+ cpu = get_cpu();
+
+ cpumask_setall(&vcpu->arch.require_dcache_flush);
+ cpumask_clear_cpu(cpu, &vcpu->arch.require_dcache_flush);
+
+ /* If we were already preempted, take the long way around */
+ if (cpu != vcpu->arch.last_pcpu) {
+ flush_cache_all();
+ goto done;
+ }
+
+ val = *vcpu_reg(vcpu, p->Rt);
+
+ switch (p->CRm) {
+ case 6: /* Upgrade DCISW to DCCISW, as per HCR.SWIO */
+ case 14: /* DCCISW */
+ do_dc_cisw(val);
+ break;
+
+ case 10: /* DCCSW */
+ do_dc_csw(val);
+ break;
+ }
+
+done:
+ put_cpu();
+
+ return true;
+}
+
+/*
+ * We could trap ID_DFR0 and tell the guest we don't support performance
+ * monitoring. Unfortunately the patch to make the kernel check ID_DFR0 was
+ * NAKed, so it will read the PMCR anyway.
+ *
+ * Therefore we tell the guest we have 0 counters. Unfortunately, we
+ * must always support PMCCNTR (the cycle counter): we just RAZ/WI for
+ * all PM registers, which doesn't crash the guest kernel at least.
+ */
+static bool pm_fake(struct kvm_vcpu *vcpu,
+ const struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ if (p->is_write)
+ return ignore_write(vcpu, p);
+ else
+ return read_zero(vcpu, p);
+}
+
+static void reset_amair_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+ u64 amair;
+
+ asm volatile("mrs %0, amair_el1\n" : "=r" (amair));
+ vcpu->arch.sys_regs[AMAIR_EL1] = amair;
+}
+
+/*
+ * Architected system registers.
+ * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
+ */
+static const struct sys_reg_desc sys_reg_descs[] = {
+ /* DC ISW */
+ { Op0(0b01), Op1(0b000), CRn(0b0111), CRm(0b0110), Op2(0b010),
+ access_dcsw },
+ /* DC CSW */
+ { Op0(0b01), Op1(0b000), CRn(0b0111), CRm(0b1010), Op2(0b010),
+ access_dcsw },
+ /* DC CISW */
+ { Op0(0b01), Op1(0b000), CRn(0b0111), CRm(0b1110), Op2(0b010),
+ access_dcsw },
+
+ /* TTBR0_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b000),
+ NULL, reset_unknown, TTBR0_EL1 },
+ /* TTBR1_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b001),
+ NULL, reset_unknown, TTBR1_EL1 },
+ /* TCR_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b010),
+ NULL, reset_val, TCR_EL1, 0 },
+
+ /* AFSR0_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0001), Op2(0b000),
+ NULL, reset_unknown, AFSR0_EL1 },
+ /* AFSR1_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0001), Op2(0b001),
+ NULL, reset_unknown, AFSR1_EL1 },
+ /* ESR_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0010), Op2(0b000),
+ NULL, reset_unknown, ESR_EL1 },
+ /* FAR_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b0110), CRm(0b0000), Op2(0b000),
+ NULL, reset_unknown, FAR_EL1 },
+
+ /* PMINTENSET_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b1001), CRm(0b1110), Op2(0b001),
+ pm_fake },
+ /* PMINTENCLR_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b1001), CRm(0b1110), Op2(0b010),
+ pm_fake },
+
+ /* MAIR_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b1010), CRm(0b0010), Op2(0b000),
+ NULL, reset_unknown, MAIR_EL1 },
+ /* AMAIR_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b1010), CRm(0b0011), Op2(0b000),
+ NULL, reset_amair_el1, AMAIR_EL1 },
+
+ /* VBAR_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b1100), CRm(0b0000), Op2(0b000),
+ NULL, reset_val, VBAR_EL1, 0 },
+ /* CONTEXTIDR_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b1101), CRm(0b0000), Op2(0b001),
+ NULL, reset_val, CONTEXTIDR_EL1, 0 },
+ /* TPIDR_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b1101), CRm(0b0000), Op2(0b100),
+ NULL, reset_unknown, TPIDR_EL1 },
+
+ /* CNTKCTL_EL1 */
+ { Op0(0b11), Op1(0b000), CRn(0b1110), CRm(0b0001), Op2(0b000),
+ NULL, reset_val, CNTKCTL_EL1, 0},
+
+ /* CSSELR_EL1 */
+ { Op0(0b11), Op1(0b010), CRn(0b0000), CRm(0b0000), Op2(0b000),
+ NULL, reset_unknown, CSSELR_EL1 },
+
+ /* PMCR_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b000),
+ pm_fake },
+ /* PMCNTENSET_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b001),
+ pm_fake },
+ /* PMCNTENCLR_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b010),
+ pm_fake },
+ /* PMOVSCLR_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b011),
+ pm_fake },
+ /* PMSWINC_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b100),
+ pm_fake },
+ /* PMSELR_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b101),
+ pm_fake },
+ /* PMCEID0_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b110),
+ pm_fake },
+ /* PMCEID1_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b111),
+ pm_fake },
+ /* PMCCNTR_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b000),
+ pm_fake },
+ /* PMXEVTYPER_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b001),
+ pm_fake },
+ /* PMXEVCNTR_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b010),
+ pm_fake },
+ /* PMUSERENR_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b000),
+ pm_fake },
+ /* PMOVSSET_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b011),
+ pm_fake },
+
+ /* TPIDR_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b010),
+ NULL, reset_unknown, TPIDR_EL0 },
+ /* TPIDRRO_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b011),
+ NULL, reset_unknown, TPIDRRO_EL0 },
+};
+
+/* Target specific emulation tables */
+static struct kvm_sys_reg_target_table *target_tables[KVM_ARM_NUM_TARGETS];
+
+void kvm_register_target_sys_reg_table(struct kvm_sys_reg_target_table *table)
+{
+ target_tables[table->target] = table;
+}
+
+/* Get specific register table for this target. */
+static const struct sys_reg_desc *get_target_table(unsigned target, size_t *num)
+{
+ struct kvm_sys_reg_target_table *table;
+
+ table = target_tables[target];
+ *num = table->table64.num;
+ return table->table64.table;
+}
+
+static const struct sys_reg_desc *find_reg(const struct sys_reg_params *params,
+ const struct sys_reg_desc table[],
+ unsigned int num)
+{
+ unsigned int i;
+
+ for (i = 0; i < num; i++) {
+ const struct sys_reg_desc *r = &table[i];
+
+ if (params->Op0 != r->Op0)
+ continue;
+ if (params->Op1 != r->Op1)
+ continue;
+ if (params->CRn != r->CRn)
+ continue;
+ if (params->CRm != r->CRm)
+ continue;
+ if (params->Op2 != r->Op2)
+ continue;
+
+ return r;
+ }
+ return NULL;
+}
+
+static int emulate_sys_reg(struct kvm_vcpu *vcpu,
+ const struct sys_reg_params *params)
+{
+ size_t num;
+ const struct sys_reg_desc *table, *r;
+
+ table = get_target_table(vcpu->arch.target, &num);
+
+ /* Search target-specific then generic table. */
+ r = find_reg(params, table, num);
+ if (!r)
+ r = find_reg(params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+
+ if (likely(r)) {
+ /* If we don't have an accessor, we should never get here! */
+ BUG_ON(!r->access);
+
+ if (likely(r->access(vcpu, params, r))) {
+ /* Skip instruction, since it was emulated */
+ kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
+ return 1;
+ }
+ /* If access function fails, it should complain. */
+ } else {
+ kvm_err("Unsupported guest sys_reg access at: %lx\n",
+ *vcpu_pc(vcpu));
+ print_sys_reg_instr(params);
+ }
+ kvm_inject_undefined(vcpu);
+ return 1;
+}
+
+static void reset_sys_reg_descs(struct kvm_vcpu *vcpu,
+ const struct sys_reg_desc *table, size_t num)
+{
+ unsigned long i;
+
+ for (i = 0; i < num; i++)
+ if (table[i].reset)
+ table[i].reset(vcpu, &table[i]);
+}
+
+/**
+ * kvm_handle_sys_reg -- handles a mrs/msr trap on a guest sys_reg access
+ * @vcpu: The VCPU pointer
+ * @run: The kvm_run struct
+ */
+int kvm_handle_sys_reg(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+ struct sys_reg_params params;
+ unsigned long esr = kvm_vcpu_get_hsr(vcpu);
+
+ params.Op0 = (esr >> 20) & 3;
+ params.Op1 = (esr >> 14) & 0x7;
+ params.CRn = (esr >> 10) & 0xf;
+ params.CRm = (esr >> 1) & 0xf;
+ params.Op2 = (esr >> 17) & 0x7;
+ params.Rt = (esr >> 5) & 0x1f;
+ params.is_write = !(esr & 1);
+
+ return emulate_sys_reg(vcpu, &params);
+}
+
+/******************************************************************************
+ * Userspace API
+ *****************************************************************************/
+
+static bool index_to_params(u64 id, struct sys_reg_params *params)
+{
+ switch (id & KVM_REG_SIZE_MASK) {
+ case KVM_REG_SIZE_U64:
+ /* Any unused index bits means it's not valid. */
+ if (id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK
+ | KVM_REG_ARM_COPROC_MASK
+ | KVM_REG_ARM64_SYSREG_OP0_MASK
+ | KVM_REG_ARM64_SYSREG_OP1_MASK
+ | KVM_REG_ARM64_SYSREG_CRN_MASK
+ | KVM_REG_ARM64_SYSREG_CRM_MASK
+ | KVM_REG_ARM64_SYSREG_OP2_MASK))
+ return false;
+ params->Op0 = ((id & KVM_REG_ARM64_SYSREG_OP0_MASK)
+ >> KVM_REG_ARM64_SYSREG_OP0_SHIFT);
+ params->Op1 = ((id & KVM_REG_ARM64_SYSREG_OP1_MASK)
+ >> KVM_REG_ARM64_SYSREG_OP1_SHIFT);
+ params->CRn = ((id & KVM_REG_ARM64_SYSREG_CRN_MASK)
+ >> KVM_REG_ARM64_SYSREG_CRN_SHIFT);
+ params->CRm = ((id & KVM_REG_ARM64_SYSREG_CRM_MASK)
+ >> KVM_REG_ARM64_SYSREG_CRM_SHIFT);
+ params->Op2 = ((id & KVM_REG_ARM64_SYSREG_OP2_MASK)
+ >> KVM_REG_ARM64_SYSREG_OP2_SHIFT);
+ return true;
+ default:
+ return false;
+ }
+}
+
+/* Decode an index value, and find the sys_reg_desc entry. */
+static const struct sys_reg_desc *index_to_sys_reg_desc(struct kvm_vcpu *vcpu,
+ u64 id)
+{
+ size_t num;
+ const struct sys_reg_desc *table, *r;
+ struct sys_reg_params params;
+
+ /* We only do sys_reg for now. */
+ if ((id & KVM_REG_ARM_COPROC_MASK) != KVM_REG_ARM64_SYSREG)
+ return NULL;
+
+ if (!index_to_params(id, &params))
+ return NULL;
+
+ table = get_target_table(vcpu->arch.target, &num);
+ r = find_reg(&params, table, num);
+ if (!r)
+ r = find_reg(&params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+
+ /* Not saved in the sys_reg array? */
+ if (r && !r->reg)
+ r = NULL;
+
+ return r;
+}
+
+/*
+ * These are the invariant sys_reg registers: we let the guest see the
+ * host versions of these, so they're part of the guest state.
+ *
+ * A future CPU may provide a mechanism to present different values to
+ * the guest, or a future kvm may trap them.
+ */
+
+#define FUNCTION_INVARIANT(reg) \
+ static void get_##reg(struct kvm_vcpu *v, \
+ const struct sys_reg_desc *r) \
+ { \
+ u64 val; \
+ \
+ asm volatile("mrs %0, " __stringify(reg) "\n" \
+ : "=r" (val)); \
+ ((struct sys_reg_desc *)r)->val = val; \
+ }
+
+FUNCTION_INVARIANT(midr_el1)
+FUNCTION_INVARIANT(ctr_el0)
+FUNCTION_INVARIANT(revidr_el1)
+FUNCTION_INVARIANT(id_pfr0_el1)
+FUNCTION_INVARIANT(id_pfr1_el1)
+FUNCTION_INVARIANT(id_dfr0_el1)
+FUNCTION_INVARIANT(id_afr0_el1)
+FUNCTION_INVARIANT(id_mmfr0_el1)
+FUNCTION_INVARIANT(id_mmfr1_el1)
+FUNCTION_INVARIANT(id_mmfr2_el1)
+FUNCTION_INVARIANT(id_mmfr3_el1)
+FUNCTION_INVARIANT(id_isar0_el1)
+FUNCTION_INVARIANT(id_isar1_el1)
+FUNCTION_INVARIANT(id_isar2_el1)
+FUNCTION_INVARIANT(id_isar3_el1)
+FUNCTION_INVARIANT(id_isar4_el1)
+FUNCTION_INVARIANT(id_isar5_el1)
+FUNCTION_INVARIANT(clidr_el1)
+FUNCTION_INVARIANT(aidr_el1)
+
+/* ->val is filled in by kvm_invariant_sys_reg_table_init() */
+static struct sys_reg_desc invariant_sys_regs[] = {
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b000),
+ NULL, get_midr_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b110),
+ NULL, get_revidr_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
+ NULL, get_id_pfr0_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b001),
+ NULL, get_id_pfr1_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b010),
+ NULL, get_id_dfr0_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b011),
+ NULL, get_id_afr0_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b100),
+ NULL, get_id_mmfr0_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b101),
+ NULL, get_id_mmfr1_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b110),
+ NULL, get_id_mmfr2_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b111),
+ NULL, get_id_mmfr3_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
+ NULL, get_id_isar0_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b001),
+ NULL, get_id_isar1_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
+ NULL, get_id_isar2_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b011),
+ NULL, get_id_isar3_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b100),
+ NULL, get_id_isar4_el1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b101),
+ NULL, get_id_isar5_el1 },
+ { Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b001),
+ NULL, get_clidr_el1 },
+ { Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b111),
+ NULL, get_aidr_el1 },
+ { Op0(0b11), Op1(0b011), CRn(0b0000), CRm(0b0000), Op2(0b001),
+ NULL, get_ctr_el0 },
+};
+
+static int reg_from_user(void *val, const void __user *uaddr, u64 id)
+{
+ /* This Just Works because we are little endian. */
+ if (copy_from_user(val, uaddr, KVM_REG_SIZE(id)) != 0)
+ return -EFAULT;
+ return 0;
+}
+
+static int reg_to_user(void __user *uaddr, const void *val, u64 id)
+{
+ /* This Just Works because we are little endian. */
+ if (copy_to_user(uaddr, val, KVM_REG_SIZE(id)) != 0)
+ return -EFAULT;
+ return 0;
+}
+
+static int get_invariant_sys_reg(u64 id, void __user *uaddr)
+{
+ struct sys_reg_params params;
+ const struct sys_reg_desc *r;
+
+ if (!index_to_params(id, &params))
+ return -ENOENT;
+
+ r = find_reg(&params, invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs));
+ if (!r)
+ return -ENOENT;
+
+ return reg_to_user(uaddr, &r->val, id);
+}
+
+static int set_invariant_sys_reg(u64 id, void __user *uaddr)
+{
+ struct sys_reg_params params;
+ const struct sys_reg_desc *r;
+ int err;
+ u64 val = 0; /* Make sure high bits are 0 for 32-bit regs */
+
+ if (!index_to_params(id, &params))
+ return -ENOENT;
+ r = find_reg(&params, invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs));
+ if (!r)
+ return -ENOENT;
+
+ err = reg_from_user(&val, uaddr, id);
+ if (err)
+ return err;
+
+ /* This is what we mean by invariant: you can't change it. */
+ if (r->val != val)
+ return -EINVAL;
+
+ return 0;
+}
+
+static bool is_valid_cache(u32 val)
+{
+ u32 level, ctype;
+
+ if (val >= CSSELR_MAX)
+ return false;
+
+ /* Bottom bit is Instruction or Data bit. Next 3 bits are level. */
+ level = (val >> 1);
+ ctype = (cache_levels >> (level * 3)) & 7;
+
+ switch (ctype) {
+ case 0: /* No cache */
+ return false;
+ case 1: /* Instruction cache only */
+ return (val & 1);
+ case 2: /* Data cache only */
+ case 4: /* Unified cache */
+ return !(val & 1);
+ case 3: /* Separate instruction and data caches */
+ return true;
+ default: /* Reserved: we can't know instruction or data. */
+ return false;
+ }
+}
+
+static int demux_c15_get(u64 id, void __user *uaddr)
+{
+ u32 val;
+ u32 __user *uval = uaddr;
+
+ /* Fail if we have unknown bits set. */
+ if (id & ~(KVM_REG_ARCH_MASK|KVM_REG_SIZE_MASK|KVM_REG_ARM_COPROC_MASK
+ | ((1 << KVM_REG_ARM_COPROC_SHIFT)-1)))
+ return -ENOENT;
+
+ switch (id & KVM_REG_ARM_DEMUX_ID_MASK) {
+ case KVM_REG_ARM_DEMUX_ID_CCSIDR:
+ if (KVM_REG_SIZE(id) != 4)
+ return -ENOENT;
+ val = (id & KVM_REG_ARM_DEMUX_VAL_MASK)
+ >> KVM_REG_ARM_DEMUX_VAL_SHIFT;
+ if (!is_valid_cache(val))
+ return -ENOENT;
+
+ return put_user(get_ccsidr(val), uval);
+ default:
+ return -ENOENT;
+ }
+}
+
+static int demux_c15_set(u64 id, void __user *uaddr)
+{
+ u32 val, newval;
+ u32 __user *uval = uaddr;
+
+ /* Fail if we have unknown bits set. */
+ if (id & ~(KVM_REG_ARCH_MASK|KVM_REG_SIZE_MASK|KVM_REG_ARM_COPROC_MASK
+ | ((1 << KVM_REG_ARM_COPROC_SHIFT)-1)))
+ return -ENOENT;
+
+ switch (id & KVM_REG_ARM_DEMUX_ID_MASK) {
+ case KVM_REG_ARM_DEMUX_ID_CCSIDR:
+ if (KVM_REG_SIZE(id) != 4)
+ return -ENOENT;
+ val = (id & KVM_REG_ARM_DEMUX_VAL_MASK)
+ >> KVM_REG_ARM_DEMUX_VAL_SHIFT;
+ if (!is_valid_cache(val))
+ return -ENOENT;
+
+ if (get_user(newval, uval))
+ return -EFAULT;
+
+ /* This is also invariant: you can't change it. */
+ if (newval != get_ccsidr(val))
+ return -EINVAL;
+ return 0;
+ default:
+ return -ENOENT;
+ }
+}
+
+static const int fpsimd_sysregs[] = {
+ KVM_REG_ARM64_FP_SIMD_FPSR,
+ KVM_REG_ARM64_FP_SIMD_FPCR,
+};
+
+static int num_fpsimd_data_regs(void)
+{
+ return 32;
+}
+
+static int num_fpsimd_regs(void)
+{
+ return num_fpsimd_data_regs() + ARRAY_SIZE(fpsimd_sysregs);
+}
+
+static int copy_fpsimd_regids(u64 __user *uindices)
+{
+ unsigned int i;
+ const u64 u32reg = KVM_REG_ARM64 | KVM_REG_SIZE_U32 | KVM_REG_ARM64_FP_SIMD;
+ const u64 u128reg = KVM_REG_ARM64 | KVM_REG_SIZE_U128 | KVM_REG_ARM64_FP_SIMD;
+
+ for (i = 0; i < num_fpsimd_data_regs(); i++) {
+ if (put_user((u128reg | KVM_REG_ARM64_FP_SIMD_BASE_REG) + i,
+ uindices))
+ return -EFAULT;
+ uindices++;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(fpsimd_sysregs); i++) {
+ if (put_user(u32reg | fpsimd_sysregs[i], uindices))
+ return -EFAULT;
+ uindices++;
+ }
+
+ return num_fpsimd_regs();
+}
+
+static int fpsimd_get_reg(const struct kvm_vcpu *vcpu, u64 id, void __user *uaddr)
+{
+ u32 regid = (id & KVM_REG_ARM64_FP_SIMD_MASK);
+
+ /* Fail if we have unknown bits set. */
+ if (id & ~(KVM_REG_ARCH_MASK|KVM_REG_SIZE_MASK|KVM_REG_ARM_COPROC_MASK
+ | ((1 << KVM_REG_ARM_COPROC_SHIFT)-1)))
+ return -ENOENT;
+
+ if (regid < num_fpsimd_data_regs()) {
+ if (KVM_REG_SIZE(id) != 16)
+ return -ENOENT;
+ return reg_to_user(uaddr, &vcpu->arch.vfp_guest.vregs[regid],
+ id);
+ }
+
+ /* FP control registers are all 32 bit. */
+ if (KVM_REG_SIZE(id) != 4)
+ return -ENOENT;
+
+ switch (regid) {
+ case KVM_REG_ARM64_FP_SIMD_FPSR:
+ return reg_to_user(uaddr, &vcpu->arch.vfp_guest.fpsr, id);
+ case KVM_REG_ARM64_FP_SIMD_FPCR:
+ return reg_to_user(uaddr, &vcpu->arch.vfp_guest.fpcr, id);
+ default:
+ return -ENOENT;
+ }
+}
+
+static int fpsimd_set_reg(struct kvm_vcpu *vcpu, u64 id, const void __user *uaddr)
+{
+ u32 regid = (id & KVM_REG_ARM64_FP_SIMD_MASK);
+
+ /* Fail if we have unknown bits set. */
+ if (id & ~(KVM_REG_ARCH_MASK|KVM_REG_SIZE_MASK|KVM_REG_ARM_COPROC_MASK
+ | ((1 << KVM_REG_ARM_COPROC_SHIFT)-1)))
+ return -ENOENT;
+
+ if (regid < num_fpsimd_data_regs()) {
+ if (KVM_REG_SIZE(id) != 16)
+ return -ENOENT;
+ return reg_from_user(&vcpu->arch.vfp_guest.vregs[regid],
+ uaddr, id);
+ }
+
+ /* FP control registers are all 32 bit. */
+ if (KVM_REG_SIZE(id) != 4)
+ return -ENOENT;
+
+ switch (regid) {
+ case KVM_REG_ARM64_FP_SIMD_FPSR:
+ return reg_from_user(&vcpu->arch.vfp_guest.fpsr, uaddr, id);
+ case KVM_REG_ARM64_FP_SIMD_FPCR:
+ return reg_from_user(&vcpu->arch.vfp_guest.fpcr, uaddr, id);
+ default:
+ return -ENOENT;
+ }
+}
+
+int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+ const struct sys_reg_desc *r;
+ void __user *uaddr = (void __user *)(unsigned long)reg->addr;
+
+ if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_DEMUX)
+ return demux_c15_get(reg->id, uaddr);
+
+ if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM64_FP_SIMD)
+ return fpsimd_get_reg(vcpu, reg->id, uaddr);
+
+ r = index_to_sys_reg_desc(vcpu, reg->id);
+ if (!r)
+ return get_invariant_sys_reg(reg->id, uaddr);
+
+ /* Note: copies two regs if size is 64 bit. */
+ return reg_to_user(uaddr, &vcpu->arch.sys_regs[r->reg], reg->id);
+}
+
+int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+ const struct sys_reg_desc *r;
+ void __user *uaddr = (void __user *)(unsigned long)reg->addr;
+
+ if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_DEMUX)
+ return demux_c15_set(reg->id, uaddr);
+
+ if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM64_FP_SIMD)
+ return fpsimd_set_reg(vcpu, reg->id, uaddr);
+
+ r = index_to_sys_reg_desc(vcpu, reg->id);
+ if (!r)
+ return set_invariant_sys_reg(reg->id, uaddr);
+
+ /* Note: copies two regs if size is 64 bit */
+ return reg_from_user(&vcpu->arch.sys_regs[r->reg], uaddr, reg->id);
+}
+
+static unsigned int num_demux_regs(void)
+{
+ unsigned int i, count = 0;
+
+ for (i = 0; i < CSSELR_MAX; i++)
+ if (is_valid_cache(i))
+ count++;
+
+ return count;
+}
+
+static int write_demux_regids(u64 __user *uindices)
+{
+ u64 val = KVM_REG_ARM | KVM_REG_SIZE_U32 | KVM_REG_ARM_DEMUX;
+ unsigned int i;
+
+ val |= KVM_REG_ARM_DEMUX_ID_CCSIDR;
+ for (i = 0; i < CSSELR_MAX; i++) {
+ if (!is_valid_cache(i))
+ continue;
+ if (put_user(val | i, uindices))
+ return -EFAULT;
+ uindices++;
+ }
+ return 0;
+}
+
+static u64 sys_reg_to_index(const struct sys_reg_desc *reg)
+{
+ return (KVM_REG_ARM64 | KVM_REG_SIZE_U64 |
+ KVM_REG_ARM64_SYSREG |
+ (reg->Op0 << KVM_REG_ARM64_SYSREG_OP0_SHIFT) |
+ (reg->Op1 << KVM_REG_ARM64_SYSREG_OP1_SHIFT) |
+ (reg->CRn << KVM_REG_ARM64_SYSREG_CRN_SHIFT) |
+ (reg->CRm << KVM_REG_ARM64_SYSREG_CRM_SHIFT) |
+ (reg->Op2 << KVM_REG_ARM64_SYSREG_OP2_SHIFT));
+}
+
+static bool copy_reg_to_user(const struct sys_reg_desc *reg, u64 __user **uind)
+{
+ if (!*uind)
+ return true;
+
+ if (put_user(sys_reg_to_index(reg), *uind))
+ return false;
+
+ (*uind)++;
+ return true;
+}
+
+/* Assumed ordered tables, see kvm_sys_reg_table_init. */
+static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind)
+{
+ const struct sys_reg_desc *i1, *i2, *end1, *end2;
+ unsigned int total = 0;
+ size_t num;
+
+ /* We check for duplicates here, to allow arch-specific overrides. */
+ i1 = get_target_table(vcpu->arch.target, &num);
+ end1 = i1 + num;
+ i2 = sys_reg_descs;
+ end2 = sys_reg_descs + ARRAY_SIZE(sys_reg_descs);
+
+ BUG_ON(i1 == end1 || i2 == end2);
+
+ /* Walk carefully, as both tables may refer to the same register. */
+ while (i1 || i2) {
+ int cmp = cmp_sys_reg(i1, i2);
+ /* target-specific overrides generic entry. */
+ if (cmp <= 0) {
+ /* Ignore registers we trap but don't save. */
+ if (i1->reg) {
+ if (!copy_reg_to_user(i1, &uind))
+ return -EFAULT;
+ total++;
+ }
+ } else {
+ /* Ignore registers we trap but don't save. */
+ if (i2->reg) {
+ if (!copy_reg_to_user(i2, &uind))
+ return -EFAULT;
+ total++;
+ }
+ }
+
+ if (cmp <= 0 && ++i1 == end1)
+ i1 = NULL;
+ if (cmp >= 0 && ++i2 == end2)
+ i2 = NULL;
+ }
+ return total;
+}
+
+unsigned long kvm_arm_num_sys_reg_descs(struct kvm_vcpu *vcpu)
+{
+ return ARRAY_SIZE(invariant_sys_regs)
+ + num_demux_regs()
+ + num_fpsimd_regs()
+ + walk_sys_regs(vcpu, (u64 __user *)NULL);
+}
+
+int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
+{
+ unsigned int i;
+ int err;
+
+ /* Then give them all the invariant registers' indices. */
+ for (i = 0; i < ARRAY_SIZE(invariant_sys_regs); i++) {
+ if (put_user(sys_reg_to_index(&invariant_sys_regs[i]), uindices))
+ return -EFAULT;
+ uindices++;
+ }
+
+ err = walk_sys_regs(vcpu, uindices);
+ if (err < 0)
+ return err;
+ uindices += err;
+
+ err = copy_fpsimd_regids(uindices);
+ if (err < 0)
+ return err;
+ uindices += err;
+
+ return write_demux_regids(uindices);
+}
+
+void kvm_sys_reg_table_init(void)
+{
+ unsigned int i;
+ struct sys_reg_desc clidr;
+
+ /* Make sure tables are unique and in order. */
+ for (i = 1; i < ARRAY_SIZE(sys_reg_descs); i++)
+ BUG_ON(cmp_sys_reg(&sys_reg_descs[i-1], &sys_reg_descs[i]) >= 0);
+
+ /* We abuse the reset function to overwrite the table itself. */
+ for (i = 0; i < ARRAY_SIZE(invariant_sys_regs); i++)
+ invariant_sys_regs[i].reset(NULL, &invariant_sys_regs[i]);
+
+ /*
+ * CLIDR format is awkward, so clean it up. See ARM B4.1.20:
+ *
+ * If software reads the Cache Type fields from Ctype1
+ * upwards, once it has seen a value of 0b000, no caches
+ * exist at further-out levels of the hierarchy. So, for
+ * example, if Ctype3 is the first Cache Type field with a
+ * value of 0b000, the values of Ctype4 to Ctype7 must be
+ * ignored.
+ */
+ get_clidr_el1(NULL, &clidr); /* Ugly... */
+ cache_levels = clidr.val;
+ for (i = 0; i < 7; i++)
+ if (((cache_levels >> (i*3)) & 7) == 0)
+ break;
+ /* Clear all higher bits. */
+ cache_levels &= (1 << (i*3))-1;
+}
+
+/**
+ * kvm_reset_sys_regs - sets system registers to reset value
+ * @vcpu: The VCPU pointer
+ *
+ * This function finds the right table above and sets the registers on the
+ * virtual CPU struct to their architecturally defined reset values.
+ */
+void kvm_reset_sys_regs(struct kvm_vcpu *vcpu)
+{
+ size_t num;
+ const struct sys_reg_desc *table;
+
+ /* Catch someone adding a register without putting in a reset entry. */
+ memset(vcpu->arch.sys_regs, 0x42, sizeof(vcpu->arch.sys_regs));
+
+ /* Generic chip reset first (so target could override). */
+ reset_sys_reg_descs(vcpu, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+
+ table = get_target_table(vcpu->arch.target, &num);
+ reset_sys_reg_descs(vcpu, table, num);
+
+ for (num = 1; num < NR_SYS_REGS; num++)
+ if (vcpu->arch.sys_regs[num] == 0x42424242)
+ panic("Didn't reset vcpu->arch.sys_regs[%zi]", num);
+}
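To make the trap decoding in kvm_handle_sys_reg() easier to follow, here is the same ESR_EL2 bit-slicing as a standalone sketch; the field positions are copied from the handler above, and the sample ESR value in the usage note is synthetic:

```c
#include <stdbool.h>
#include <stdint.h>

struct sysreg_trap {
	uint8_t Op0, Op1, CRn, CRm, Op2, Rt;
	bool is_write;
};

/* Same bit positions as kvm_handle_sys_reg() uses on the trapped ESR. */
static struct sysreg_trap decode_sysreg_trap(unsigned long esr)
{
	struct sysreg_trap p;

	p.Op0 = (esr >> 20) & 3;
	p.Op2 = (esr >> 17) & 0x7;
	p.Op1 = (esr >> 14) & 0x7;
	p.CRn = (esr >> 10) & 0xf;
	p.Rt  = (esr >> 5) & 0x1f;
	p.CRm = (esr >> 1) & 0xf;
	p.is_write = !(esr & 1);	/* bit 0 clear means a write (MSR) */
	return p;
}
```

Feeding it an ESR with Op0=3, CRn=10, CRm=2, Rt=1 and bit 0 clear yields the MAIR_EL1 write case handled by the table above.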
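The CSSELR validity check can also be exercised in isolation. This sketch mirrors is_valid_cache(), but takes cache_levels as a parameter instead of the file-static; the sample CLIDR value in the test (split L1 I/D caches, unified L2) is hypothetical:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Mirror of is_valid_cache(): bit 0 of csselr selects instruction vs
 * data, the next bits the level; cache_levels holds 3 bits per level,
 * as in CLIDR.
 */
static bool csselr_is_valid(uint32_t cache_levels, uint32_t csselr)
{
	uint32_t level = csselr >> 1;
	uint32_t ctype = (cache_levels >> (level * 3)) & 7;

	switch (ctype) {
	case 0:			/* No cache */
		return false;
	case 1:			/* Instruction cache only */
		return csselr & 1;
	case 2:			/* Data cache only */
	case 4:			/* Unified cache */
		return !(csselr & 1);
	case 3:			/* Separate instruction and data caches */
		return true;
	default:		/* Reserved */
		return false;
	}
}
```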
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
new file mode 100644
index 0000000..c0ac420
--- /dev/null
+++ b/arch/arm64/kvm/sys_regs.h
@@ -0,0 +1,141 @@
+/*
+ * Copyright (C) 2012 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * Derived from arch/arm/kvm/coproc.h
+ * Copyright (C) 2012 - Virtual Open Systems and Columbia University
+ * Authors: Christoffer Dall <c.dall@virtualopensystems.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ARM64_KVM_SYS_REGS_LOCAL_H__
+#define __ARM64_KVM_SYS_REGS_LOCAL_H__
+
+struct sys_reg_params {
+ u8 Op0;
+ u8 Op1;
+ u8 CRn;
+ u8 CRm;
+ u8 Op2;
+ u8 Rt;
+ bool is_write;
+};
+
+struct sys_reg_desc {
+ /* MRS/MSR instruction which accesses it. */
+ u8 Op0;
+ u8 Op1;
+ u8 CRn;
+ u8 CRm;
+ u8 Op2;
+
+ /* Trapped access from guest, if non-NULL. */
+ bool (*access)(struct kvm_vcpu *,
+ const struct sys_reg_params *,
+ const struct sys_reg_desc *);
+
+ /* Initialization for vcpu. */
+ void (*reset)(struct kvm_vcpu *, const struct sys_reg_desc *);
+
+ /*
+ * Index into vcpu->arch.sys_regs[], or 0 if we don't need to
+ * save it.
+ */
+ int reg;
+
+ /* Value (usually reset value) */
+ u64 val;
+};
+
+static inline void print_sys_reg_instr(const struct sys_reg_params *p)
+{
+ /* Look, we even formatted it for you to paste into the table! */
+ kvm_pr_unimpl(" { Op0(%2u), Op1(%2u), CRn(%2u), CRm(%2u), Op2(%2u), func_%s },\n",
+ p->Op0, p->Op1, p->CRn, p->CRm, p->Op2, p->is_write ? "write" : "read");
+}
+
+static inline bool ignore_write(struct kvm_vcpu *vcpu,
+ const struct sys_reg_params *p)
+{
+ return true;
+}
+
+static inline bool read_zero(struct kvm_vcpu *vcpu,
+ const struct sys_reg_params *p)
+{
+ *vcpu_reg(vcpu, p->Rt) = 0;
+ return true;
+}
+
+static inline bool write_to_read_only(struct kvm_vcpu *vcpu,
+ const struct sys_reg_params *params)
+{
+ kvm_debug("sys_reg write to read-only register at: %lx\n",
+ *vcpu_pc(vcpu));
+ print_sys_reg_instr(params);
+ return false;
+}
+
+static inline bool read_from_write_only(struct kvm_vcpu *vcpu,
+ const struct sys_reg_params *params)
+{
+ kvm_debug("sys_reg read from write-only register at: %lx\n",
+ *vcpu_pc(vcpu));
+ print_sys_reg_instr(params);
+ return false;
+}
+
+/* Reset functions */
+static inline void reset_unknown(struct kvm_vcpu *vcpu,
+ const struct sys_reg_desc *r)
+{
+ BUG_ON(!r->reg);
+ BUG_ON(r->reg >= ARRAY_SIZE(vcpu->arch.sys_regs));
+ vcpu->arch.sys_regs[r->reg] = 0x1de7ec7edbadc0de;
+}
+
+static inline void reset_val(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+ BUG_ON(!r->reg);
+ BUG_ON(r->reg >= ARRAY_SIZE(vcpu->arch.sys_regs));
+ vcpu->arch.sys_regs[r->reg] = r->val;
+}
+
+static inline int cmp_sys_reg(const struct sys_reg_desc *i1,
+ const struct sys_reg_desc *i2)
+{
+ BUG_ON(i1 == i2);
+ if (!i1)
+ return 1;
+ else if (!i2)
+ return -1;
+ if (i1->Op0 != i2->Op0)
+ return i1->Op0 - i2->Op0;
+ if (i1->Op1 != i2->Op1)
+ return i1->Op1 - i2->Op1;
+ if (i1->CRn != i2->CRn)
+ return i1->CRn - i2->CRn;
+ if (i1->CRm != i2->CRm)
+ return i1->CRm - i2->CRm;
+ return i1->Op2 - i2->Op2;
+}
+
+
+#define Op0(_x) .Op0 = _x
+#define Op1(_x) .Op1 = _x
+#define CRn(_x) .CRn = _x
+#define CRm(_x) .CRm = _x
+#define Op2(_x) .Op2 = _x
+
+#endif /* __ARM64_KVM_SYS_REGS_LOCAL_H__ */
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 3c56ba3..2bf42b0 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -782,6 +782,7 @@ struct kvm_dirty_tlb {
#define KVM_REG_IA64 0x3000000000000000ULL
#define KVM_REG_ARM 0x4000000000000000ULL
#define KVM_REG_S390 0x5000000000000000ULL
+#define KVM_REG_ARM64 0x6000000000000000ULL
#define KVM_REG_SIZE_SHIFT 52
#define KVM_REG_SIZE_MASK 0x00f0000000000000ULL
--
1.7.12.4
* [PATCH 10/29] arm64: KVM: Cortex-A57 specific system registers handling
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
0 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Add the support code for Cortex-A57 specific system registers.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/kvm/sys_regs_a57.c | 96 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 96 insertions(+)
create mode 100644 arch/arm64/kvm/sys_regs_a57.c
diff --git a/arch/arm64/kvm/sys_regs_a57.c b/arch/arm64/kvm/sys_regs_a57.c
new file mode 100644
index 0000000..dcc88fe
--- /dev/null
+++ b/arch/arm64/kvm/sys_regs_a57.c
@@ -0,0 +1,96 @@
+/*
+ * Copyright (C) 2012 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * Based on arch/arm/kvm/coproc_a15.c:
+ * Copyright (C) 2012 - Virtual Open Systems and Columbia University
+ * Authors: Rusty Russell <rusty@rustcorp.com.au>
+ * Christoffer Dall <c.dall@virtualopensystems.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+#include <linux/kvm_host.h>
+#include <asm/cputype.h>
+#include <asm/kvm_arm.h>
+#include <asm/kvm_asm.h>
+#include <asm/kvm_host.h>
+#include <asm/kvm_emulate.h>
+#include <asm/kvm_coproc.h>
+#include <linux/init.h>
+
+#include "sys_regs.h"
+
+#define MPIDR_EL1_AFF0_MASK 0xff
+
+static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+ /*
+ * Simply map the vcpu_id into the Aff0 field of the MPIDR.
+ */
+ vcpu->arch.sys_regs[MPIDR_EL1] = (1U << 31) | (vcpu->vcpu_id & MPIDR_EL1_AFF0_MASK);
+}
+
+static bool access_actlr(struct kvm_vcpu *vcpu,
+ const struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ if (p->is_write)
+ return ignore_write(vcpu, p);
+
+ *vcpu_reg(vcpu, p->Rt) = vcpu->arch.sys_regs[ACTLR_EL1];
+ return true;
+}
+
+static void reset_actlr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+ u64 actlr;
+
+ asm volatile("mrs %0, actlr_el1\n" : "=r" (actlr));
+ vcpu->arch.sys_regs[ACTLR_EL1] = actlr;
+}
+
+/*
+ * A57-specific system registers.
+ * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
+ */
+static const struct sys_reg_desc a57_sys_regs[] = {
+ { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b101), /* MPIDR_EL1 */
+ NULL, reset_mpidr, MPIDR_EL1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b000), /* SCTLR_EL1 */
+ NULL, reset_val, SCTLR_EL1, 0x00C50078 },
+ { Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b001), /* ACTLR_EL1 */
+ access_actlr, reset_actlr, ACTLR_EL1 },
+ { Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b010), /* CPACR_EL1 */
+ NULL, reset_val, CPACR_EL1, 0 },
+};
+
+static struct kvm_sys_reg_target_table a57_target_table = {
+ .target = KVM_ARM_TARGET_CORTEX_A57,
+ .table64 = {
+ .table = a57_sys_regs,
+ .num = ARRAY_SIZE(a57_sys_regs),
+ },
+};
+
+static int __init sys_reg_a57_init(void)
+{
+ unsigned int i;
+
+ for (i = 1; i < ARRAY_SIZE(a57_sys_regs); i++)
+ BUG_ON(cmp_sys_reg(&a57_sys_regs[i-1],
+ &a57_sys_regs[i]) >= 0);
+
+ kvm_register_target_sys_reg_table(&a57_target_table);
+ return 0;
+}
+late_initcall(sys_reg_a57_init);
--
1.7.12.4
* [PATCH 11/29] arm64: KVM: virtual CPU reset
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
0 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Provide the reset code for a virtual CPU booted in 64bit mode.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/kvm/reset.c | 63 ++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 63 insertions(+)
create mode 100644 arch/arm64/kvm/reset.c
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
new file mode 100644
index 0000000..3ac2f20
--- /dev/null
+++ b/arch/arm64/kvm/reset.c
@@ -0,0 +1,63 @@
+/*
+ * Copyright (C) 2012 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * Derived from arch/arm/kvm/reset.c
+ * Copyright (C) 2012 - Virtual Open Systems and Columbia University
+ * Author: Christoffer Dall <c.dall@virtualopensystems.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/errno.h>
+#include <linux/kvm_host.h>
+#include <linux/kvm.h>
+
+#include <asm/cputype.h>
+#include <asm/ptrace.h>
+#include <asm/kvm_arm.h>
+#include <asm/kvm_coproc.h>
+
+/*
+ * ARMv8 Reset Values
+ */
+static struct kvm_regs default_regs_reset = {
+ .regs.pstate = PSR_MODE_EL1h | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT,
+};
+
+/**
+ * kvm_reset_vcpu - sets core registers and sys_regs to reset value
+ * @vcpu: The VCPU pointer
+ *
+ * This function finds the right table above and sets the registers on
+ * the virtual CPU struct to their architecturally defined reset
+ * values.
+ */
+int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
+{
+ struct kvm_regs *cpu_reset;
+
+ switch (vcpu->arch.target) {
+ default:
+ cpu_reset = &default_regs_reset;
+ break;
+ }
+
+ /* Reset core registers */
+ memcpy(&vcpu->arch.regs, cpu_reset, sizeof(vcpu->arch.regs));
+
+ /* Reset system registers */
+ kvm_reset_sys_regs(vcpu);
+
+ return 0;
+}
--
1.7.12.4
* [PATCH 12/29] arm64: KVM: kvm_arch and kvm_vcpu_arch definitions
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
0 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Provide the architecture dependent structures for VM and
vcpu abstractions.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/include/asm/kvm_host.h | 178 ++++++++++++++++++++++++++++++++++++++
1 file changed, 178 insertions(+)
create mode 100644 arch/arm64/include/asm/kvm_host.h
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
new file mode 100644
index 0000000..d1095d1
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -0,0 +1,178 @@
+/*
+ * Copyright (C) 2012 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * Derived from arch/arm/include/asm/kvm_host.h:
+ * Copyright (C) 2012 - Virtual Open Systems and Columbia University
+ * Author: Christoffer Dall <c.dall@virtualopensystems.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ARM64_KVM_HOST_H__
+#define __ARM64_KVM_HOST_H__
+
+#include <asm/kvm.h>
+#include <asm/kvm_asm.h>
+#include <asm/kvm_mmio.h>
+
+#define KVM_MAX_VCPUS 4
+#define KVM_USER_MEM_SLOTS 32
+#define KVM_PRIVATE_MEM_SLOTS 4
+#define KVM_COALESCED_MMIO_PAGE_OFFSET 1
+
+#include <asm/kvm_vgic.h>
+#include <asm/kvm_arch_timer.h>
+
+#define KVM_VCPU_MAX_FEATURES 0
+
+/* We don't currently support large pages. */
+#define KVM_HPAGE_GFN_SHIFT(x) 0
+#define KVM_NR_PAGE_SIZES 1
+#define KVM_PAGES_PER_HPAGE(x) (1UL<<31)
+
+struct kvm_vcpu;
+int kvm_target_cpu(void);
+int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
+
+struct kvm_arch {
+ /* The VMID generation used for the virt. memory system */
+ u64 vmid_gen;
+ u32 vmid;
+
+ /* 1-level 2nd stage table and lock */
+ spinlock_t pgd_lock;
+ pgd_t *pgd;
+
+ /* VTTBR value associated with above pgd and vmid */
+ u64 vttbr;
+
+ /* Interrupt controller */
+ struct vgic_dist vgic;
+
+ /* Timer */
+ struct arch_timer_kvm timer;
+};
+
+#define KVM_NR_MEM_OBJS 40
+
+/*
+ * We don't want allocation failures within the mmu code, so we preallocate
+ * enough memory for a single page fault in a cache.
+ */
+struct kvm_mmu_memory_cache {
+ int nobjs;
+ void *objects[KVM_NR_MEM_OBJS];
+};
+
+struct kvm_vcpu_fault_info {
+ u32 esr_el2; /* Hyp Syndrome Register */
+ u64 far_el2; /* Hyp Fault Address Register */
+ u64 hpfar_el2; /* Hyp IPA Fault Address Register */
+};
+
+typedef struct user_fpsimd_state kvm_kernel_vfp_t;
+
+struct kvm_vcpu_arch {
+ struct kvm_regs regs;
+ u64 sys_regs[NR_SYS_REGS];
+
+ /* HYP configuration */
+ u64 hcr_el2;
+
+ /* Exception Information */
+ struct kvm_vcpu_fault_info fault;
+
+ /* Floating point registers (VFP and Advanced SIMD/NEON) */
+ kvm_kernel_vfp_t vfp_guest;
+ kvm_kernel_vfp_t *vfp_host;
+
+ /* VGIC state */
+ struct vgic_cpu vgic_cpu;
+ struct arch_timer_cpu timer_cpu;
+
+ /*
+ * Anything that is not used directly from assembly code goes
+ * here.
+ */
+ /* dcache set/way operation pending */
+ int last_pcpu;
+ cpumask_t require_dcache_flush;
+
+ /* Don't run the guest */
+ bool pause;
+
+ /* IO related fields */
+ struct kvm_decode mmio_decode;
+
+ /* Interrupt related fields */
+ u64 irq_lines; /* IRQ and FIQ levels */
+
+ /* Cache some mmu pages needed inside spinlock regions */
+ struct kvm_mmu_memory_cache mmu_page_cache;
+
+ /* Target CPU and feature flags */
+ u32 target;
+ DECLARE_BITMAP(features, KVM_VCPU_MAX_FEATURES);
+
+ /* Detect first run of a vcpu */
+ bool has_run_once;
+};
+
+struct kvm_vm_stat {
+ u32 remote_tlb_flush;
+};
+
+struct kvm_vcpu_stat {
+ u32 halt_wakeup;
+};
+
+struct kvm_vcpu_init;
+int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
+ const struct kvm_vcpu_init *init);
+unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
+int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
+struct kvm_one_reg;
+int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
+int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
+
+#define KVM_ARCH_WANT_MMU_NOTIFIER
+struct kvm;
+int kvm_unmap_hva(struct kvm *kvm, unsigned long hva);
+int kvm_unmap_hva_range(struct kvm *kvm,
+ unsigned long start, unsigned long end);
+void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
+
+/* We do not have shadow page tables, hence the empty hooks */
+static inline int kvm_age_hva(struct kvm *kvm, unsigned long hva)
+{
+ return 0;
+}
+
+static inline int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
+{
+ return 0;
+}
+
+struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
+struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
+
+u64 kvm_call_hyp(void *hypfn, ...);
+
+int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ int exception_index);
+
+int kvm_perf_init(void);
+int kvm_perf_teardown(void);
+
+#endif /* __ARM64_KVM_HOST_H__ */
--
1.7.12.4
* [PATCH 13/29] arm64: KVM: MMIO access backend
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
0 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Define the necessary structures to perform an MMIO access.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/include/asm/kvm_mmio.h | 59 +++++++++++++++++++++++++++++++++++++++
1 file changed, 59 insertions(+)
create mode 100644 arch/arm64/include/asm/kvm_mmio.h
diff --git a/arch/arm64/include/asm/kvm_mmio.h b/arch/arm64/include/asm/kvm_mmio.h
new file mode 100644
index 0000000..fc2f689
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_mmio.h
@@ -0,0 +1,59 @@
+/*
+ * Copyright (C) 2012 - Virtual Open Systems and Columbia University
+ * Author: Christoffer Dall <c.dall@virtualopensystems.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ARM64_KVM_MMIO_H__
+#define __ARM64_KVM_MMIO_H__
+
+#include <linux/kvm_host.h>
+#include <asm/kvm_asm.h>
+#include <asm/kvm_arm.h>
+
+/*
+ * This is annoying. The mmio code requires this, even if we don't
+ * need any decoding. To be fixed.
+ */
+struct kvm_decode {
+ unsigned long rt;
+ bool sign_extend;
+};
+
+/*
+ * The in-kernel MMIO emulation code wants to use a copy of run->mmio,
+ * which is an anonymous type. Use our own type instead.
+ */
+struct kvm_exit_mmio {
+ phys_addr_t phys_addr;
+ u8 data[8];
+ u32 len;
+ bool is_write;
+};
+
+static inline void kvm_prepare_mmio(struct kvm_run *run,
+ struct kvm_exit_mmio *mmio)
+{
+ run->mmio.phys_addr = mmio->phys_addr;
+ run->mmio.len = mmio->len;
+ run->mmio.is_write = mmio->is_write;
+ memcpy(run->mmio.data, mmio->data, mmio->len);
+ run->exit_reason = KVM_EXIT_MMIO;
+}
+
+int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ phys_addr_t fault_ipa);
+
+#endif /* __ARM64_KVM_MMIO_H__ */
--
1.7.12.4
* [PATCH 14/29] arm64: KVM: guest one-reg interface
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
0 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Let userspace play with the guest registers.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/kvm/guest.c | 240 +++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 240 insertions(+)
create mode 100644 arch/arm64/kvm/guest.c
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
new file mode 100644
index 0000000..2a8aaf8
--- /dev/null
+++ b/arch/arm64/kvm/guest.c
@@ -0,0 +1,240 @@
+/*
+ * Copyright (C) 2012 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * Derived from arch/arm/kvm/guest.c:
+ * Copyright (C) 2012 - Virtual Open Systems and Columbia University
+ * Author: Christoffer Dall <c.dall@virtualopensystems.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/errno.h>
+#include <linux/err.h>
+#include <linux/kvm_host.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/fs.h>
+#include <asm/cputype.h>
+#include <asm/uaccess.h>
+#include <asm/kvm.h>
+#include <asm/kvm_asm.h>
+#include <asm/kvm_emulate.h>
+#include <asm/kvm_coproc.h>
+
+struct kvm_stats_debugfs_item debugfs_entries[] = {
+ { NULL }
+};
+
+int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
+{
+ vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
+ return 0;
+}
+
+static u64 core_reg_offset_from_id(u64 id)
+{
+ return id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_CORE);
+}
+
+static int get_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+ unsigned long __user *uaddr = (unsigned long __user *)(unsigned long)reg->addr;
+ struct kvm_regs *regs = &vcpu->arch.regs;
+ u64 off;
+
+ if (KVM_REG_SIZE(reg->id) != sizeof(unsigned long))
+ return -ENOENT;
+
+ /* Our ID is an index into the kvm_regs struct. */
+ off = core_reg_offset_from_id(reg->id);
+ if (off >= sizeof(*regs) / KVM_REG_SIZE(reg->id))
+ return -ENOENT;
+
+ return put_user(((unsigned long *)regs)[off], uaddr);
+}
+
+static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+ unsigned long __user *uaddr = (unsigned long __user *)(unsigned long)reg->addr;
+ struct kvm_regs *regs = &vcpu->arch.regs;
+ u64 off, val;
+
+ if (KVM_REG_SIZE(reg->id) != sizeof(unsigned long))
+ return -ENOENT;
+
+ /* Our ID is an index into the kvm_regs struct. */
+ off = core_reg_offset_from_id(reg->id);
+ if (off >= sizeof(*regs) / KVM_REG_SIZE(reg->id))
+ return -ENOENT;
+
+ if (get_user(val, uaddr) != 0)
+ return -EFAULT;
+
+ if (off == KVM_REG_ARM_CORE_REG(regs.pstate)) {
+ unsigned long mode = val & COMPAT_PSR_MODE_MASK;
+ switch (mode) {
+ case PSR_MODE_EL0t:
+ case PSR_MODE_EL1t:
+ case PSR_MODE_EL1h:
+ break;
+ default:
+ return -EINVAL;
+ }
+ }
+
+ ((unsigned long *)regs)[off] = val;
+ return 0;
+}
+
+int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
+{
+ return -EINVAL;
+}
+
+int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
+{
+ return -EINVAL;
+}
+
+static unsigned long num_core_regs(void)
+{
+ return sizeof(struct kvm_regs) / sizeof(unsigned long);
+}
+
+/**
+ * kvm_arm_num_regs - how many registers do we present via KVM_GET_ONE_REG
+ *
+ * This is for all registers.
+ */
+unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu)
+{
+ return num_core_regs() + kvm_arm_num_sys_reg_descs(vcpu);
+}
+
+/**
+ * kvm_arm_copy_reg_indices - get indices of all registers.
+ *
+ * We do core registers right here, then we append system regs.
+ */
+int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
+{
+ unsigned int i;
+ const u64 core_reg = KVM_REG_ARM64 | KVM_REG_SIZE_U64 | KVM_REG_ARM_CORE;
+
+ for (i = 0; i < sizeof(struct kvm_regs)/sizeof(unsigned long); i++) {
+ if (put_user(core_reg | i, uindices))
+ return -EFAULT;
+ uindices++;
+ }
+
+ return kvm_arm_copy_sys_reg_indices(vcpu, uindices);
+}
+
+int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+ /* We currently use nothing arch-specific in upper 32 bits */
+ if ((reg->id & ~KVM_REG_SIZE_MASK) >> 32 != KVM_REG_ARM64 >> 32)
+ return -EINVAL;
+
+ /* Register group 16 means we want a core register. */
+ if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE)
+ return get_core_reg(vcpu, reg);
+
+ return kvm_arm_sys_reg_get_reg(vcpu, reg);
+}
+
+int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+ /* We currently use nothing arch-specific in upper 32 bits */
+ if ((reg->id & ~KVM_REG_SIZE_MASK) >> 32 != KVM_REG_ARM64 >> 32)
+ return -EINVAL;
+
+ /* Register group 16 means we set a core register. */
+ if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE)
+ return set_core_reg(vcpu, reg);
+
+ return kvm_arm_sys_reg_set_reg(vcpu, reg);
+}
+
+int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
+ struct kvm_sregs *sregs)
+{
+ return -EINVAL;
+}
+
+int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
+ struct kvm_sregs *sregs)
+{
+ return -EINVAL;
+}
+
+int __attribute_const__ kvm_target_cpu(void)
+{
+ unsigned long implementor = read_cpuid_implementor();
+ unsigned long part_number = read_cpuid_part_number();
+
+ if (implementor != ARM_CPU_IMP_ARM)
+ return -EINVAL;
+
+ switch (part_number) {
+ case ARM_CPU_PART_AEM_V8:
+ case ARM_CPU_PART_FOUNDATION:
+ /* Treat the models just as an A57 for the time being */
+ case ARM_CPU_PART_CORTEX_A57:
+ return KVM_ARM_TARGET_CORTEX_A57;
+ default:
+ return -EINVAL;
+ }
+}
+
+int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
+ const struct kvm_vcpu_init *init)
+{
+ unsigned int i;
+
+ /* We can only do a Cortex-A57 for now. */
+ if (init->target != kvm_target_cpu())
+ return -EINVAL;
+
+ vcpu->arch.target = init->target;
+ bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
+
+ /* -ENOENT for unknown features, -EINVAL for invalid combinations. */
+ for (i = 0; i < sizeof(init->features)*8; i++) {
+ if (init->features[i / 32] & (1 << (i % 32))) {
+ if (i >= KVM_VCPU_MAX_FEATURES)
+ return -ENOENT;
+ set_bit(i, vcpu->arch.features);
+ }
+ }
+
+ /* Now we know what it is, we can reset it. */
+ return kvm_reset_vcpu(vcpu);
+}
+
+int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
+{
+ return -EINVAL;
+}
+
+int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
+{
+ return -EINVAL;
+}
+
+int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu,
+ struct kvm_translation *tr)
+{
+ return -EINVAL;
+}
--
1.7.12.4
^ permalink raw reply related [flat|nested] 128+ messages in thread
* [PATCH 15/29] arm64: KVM: hypervisor initialization code
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
-1 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Provide EL2 with page tables and stack, and set the vectors
to point to the full blown world-switch code.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/include/asm/kvm_host.h | 11 +++++
arch/arm64/kvm/hyp-init.S | 89 +++++++++++++++++++++++++++++++++++++++
2 files changed, 100 insertions(+)
create mode 100644 arch/arm64/kvm/hyp-init.S
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index d1095d1..85e706b 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -175,4 +175,15 @@ int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
int kvm_perf_init(void);
int kvm_perf_teardown(void);
+static inline void __cpu_init_hyp_mode(unsigned long long pgd_ptr,
+ unsigned long hyp_stack_ptr,
+ unsigned long vector_ptr)
+{
+ /*
+ * Call initialization code, and switch to the full blown
+ * HYP code.
+ */
+ kvm_call_hyp((void *)pgd_ptr, hyp_stack_ptr, vector_ptr);
+}
+
#endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
new file mode 100644
index 0000000..c881cac
--- /dev/null
+++ b/arch/arm64/kvm/hyp-init.S
@@ -0,0 +1,89 @@
+/*
+ * Copyright (C) 2012 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/linkage.h>
+
+#include <asm/assembler.h>
+#include <asm/kvm_arm.h>
+#include <asm/kvm_mmu.h>
+
+ .text
+ .pushsection .hyp.idmap.text, "ax"
+
+ .align 11
+
+__kvm_hyp_init:
+ .global __kvm_hyp_init
+
+ENTRY(__kvm_hyp_init_vector)
+ ventry __invalid // Synchronous EL2t
+ ventry __invalid // IRQ EL2t
+ ventry __invalid // FIQ EL2t
+ ventry __invalid // Error EL2t
+
+ ventry __invalid // Synchronous EL2h
+ ventry __invalid // IRQ EL2h
+ ventry __invalid // FIQ EL2h
+ ventry __invalid // Error EL2h
+
+ ventry __do_hyp_init // Synchronous 64-bit EL1
+ ventry __invalid // IRQ 64-bit EL1
+ ventry __invalid // FIQ 64-bit EL1
+ ventry __invalid // Error 64-bit EL1
+
+ ventry __invalid // Synchronous 32-bit EL1
+ ventry __invalid // IRQ 32-bit EL1
+ ventry __invalid // FIQ 32-bit EL1
+ ventry __invalid // Error 32-bit EL1
+ENDPROC(__kvm_hyp_init_vector)
+
+__invalid:
+ b .
+
+ /*
+ * x0: HYP pgd
+ * x1: HYP stack
+ * x2: HYP vectors
+ */
+__do_hyp_init:
+
+ msr ttbr0_el2, x0
+ kern_hyp_va x1
+ mov sp, x1
+ kern_hyp_va x2
+ msr vbar_el2, x2
+
+ mrs x0, tcr_el1
+ ldr x1, =TCR_EL2_MASK
+ and x0, x0, x1
+ ldr x1, =TCR_EL2_FLAGS
+ orr x0, x0, x1
+ msr tcr_el2, x0
+
+ ldr x0, =VTCR_EL2_FLAGS
+ msr vtcr_el2, x0
+
+ mrs x0, mair_el1
+ msr mair_el2, x0
+ isb
+
+ mov x0, #SCTLR_EL2_FLAGS
+ msr sctlr_el2, x0
+
+ eret
+
+ .popsection
--
1.7.12.4
^ permalink raw reply related [flat|nested] 128+ messages in thread
* [PATCH 16/29] arm64: KVM: HYP mode world switch implementation
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
0 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
The HYP mode world switch in all its glory.
Implements save/restore of host/guest registers, EL2 trapping,
IPA resolution, and additional services (tlb invalidation).
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/kernel/asm-offsets.c | 33 ++
arch/arm64/kvm/hyp.S | 756 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 789 insertions(+)
create mode 100644 arch/arm64/kvm/hyp.S
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index a2a4d81..a7f706a 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -21,6 +21,7 @@
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/dma-mapping.h>
+#include <linux/kvm_host.h>
#include <asm/thread_info.h>
#include <asm/memory.h>
#include <asm/cputable.h>
@@ -104,5 +105,37 @@ int main(void)
BLANK();
DEFINE(TZ_MINWEST, offsetof(struct timezone, tz_minuteswest));
DEFINE(TZ_DSTTIME, offsetof(struct timezone, tz_dsttime));
+ BLANK();
+#ifdef CONFIG_KVM_ARM_HOST
+ DEFINE(VCPU_REGS, offsetof(struct kvm_vcpu, arch.regs));
+ DEFINE(VCPU_USER_PT_REGS, offsetof(struct kvm_regs, regs));
+ DEFINE(VCPU_VFP_GUEST, offsetof(struct kvm_vcpu, arch.vfp_guest));
+ DEFINE(VCPU_VFP_HOST, offsetof(struct kvm_vcpu, arch.vfp_host));
+ DEFINE(VCPU_HCR_EL2, offsetof(struct kvm_vcpu, arch.hcr_el2));
+ DEFINE(VCPU_IRQ_LINES, offsetof(struct kvm_vcpu, arch.irq_lines));
+ DEFINE(VCPU_SP_EL1, offsetof(struct kvm_vcpu, arch.regs.sp_el1));
+ DEFINE(VCPU_ELR_EL1, offsetof(struct kvm_vcpu, arch.regs.elr_el1));
+ DEFINE(VCPU_SPSR, offsetof(struct kvm_vcpu, arch.regs.spsr));
+ DEFINE(VCPU_SYSREGS, offsetof(struct kvm_vcpu, arch.sys_regs));
+ DEFINE(VCPU_ESR_EL2, offsetof(struct kvm_vcpu, arch.fault.esr_el2));
+ DEFINE(VCPU_FAR_EL2, offsetof(struct kvm_vcpu, arch.fault.far_el2));
+ DEFINE(VCPU_HPFAR_EL2, offsetof(struct kvm_vcpu, arch.fault.hpfar_el2));
+ DEFINE(VCPU_TIMER_CNTV_CTL, offsetof(struct kvm_vcpu, arch.timer_cpu.cntv_ctl));
+ DEFINE(VCPU_TIMER_CNTV_CVAL, offsetof(struct kvm_vcpu, arch.timer_cpu.cntv_cval));
+ DEFINE(KVM_TIMER_CNTVOFF, offsetof(struct kvm, arch.timer.cntvoff));
+ DEFINE(KVM_TIMER_ENABLED, offsetof(struct kvm, arch.timer.enabled));
+ DEFINE(VCPU_KVM, offsetof(struct kvm_vcpu, kvm));
+ DEFINE(VCPU_VGIC_CPU, offsetof(struct kvm_vcpu, arch.vgic_cpu));
+ DEFINE(VGIC_CPU_HCR, offsetof(struct vgic_cpu, vgic_hcr));
+ DEFINE(VGIC_CPU_VMCR, offsetof(struct vgic_cpu, vgic_vmcr));
+ DEFINE(VGIC_CPU_MISR, offsetof(struct vgic_cpu, vgic_misr));
+ DEFINE(VGIC_CPU_EISR, offsetof(struct vgic_cpu, vgic_eisr));
+ DEFINE(VGIC_CPU_ELRSR, offsetof(struct vgic_cpu, vgic_elrsr));
+ DEFINE(VGIC_CPU_APR, offsetof(struct vgic_cpu, vgic_apr));
+ DEFINE(VGIC_CPU_LR, offsetof(struct vgic_cpu, vgic_lr));
+ DEFINE(VGIC_CPU_NR_LR, offsetof(struct vgic_cpu, nr_lr));
+ DEFINE(KVM_VTTBR, offsetof(struct kvm, arch.vttbr));
+ DEFINE(KVM_VGIC_VCTRL, offsetof(struct kvm, arch.vgic.vctrl_base));
+#endif
return 0;
}
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
new file mode 100644
index 0000000..cd7506d
--- /dev/null
+++ b/arch/arm64/kvm/hyp.S
@@ -0,0 +1,756 @@
+/*
+ * Copyright (C) 2012 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/linkage.h>
+#include <linux/irqchip/arm-gic.h>
+
+#include <asm/assembler.h>
+#include <asm/memory.h>
+#include <asm/asm-offsets.h>
+#include <asm/fpsimdmacros.h>
+#include <asm/kvm.h>
+#include <asm/kvm_asm.h>
+#include <asm/kvm_arm.h>
+#include <asm/kvm_mmu.h>
+
+#define REG_OFFSET(x) (VCPU_REGS + VCPU_USER_PT_REGS + 8*x)
+#define SPSR_OFFSET(x) (VCPU_SPSR + 8*x)
+#define SYSREG_OFFSET(x) (VCPU_SYSREGS + 8*x)
+
+ .text
+ .pushsection .hyp.text, "ax"
+ .align PAGE_SHIFT
+
+__kvm_hyp_code_start:
+ .globl __kvm_hyp_code_start
+
+.macro save_host_regs
+ push x19, x20
+ push x21, x22
+ push x23, x24
+ push x25, x26
+ push x27, x28
+ push x29, lr
+
+ mrs x19, sp_el0
+ mrs x20, sp_el1
+ mrs x21, elr_el1
+ mrs x22, spsr_el1
+ mrs x23, elr_el2
+ mrs x24, spsr_el2
+
+ push x19, x20
+ push x21, x22
+ push x23, x24
+.endm
+
+.macro restore_host_regs
+ pop x23, x24
+ pop x21, x22
+ pop x19, x20
+
+ msr sp_el0, x19
+ msr sp_el1, x20
+ msr elr_el1, x21
+ msr spsr_el1, x22
+ msr elr_el2, x23
+ msr spsr_el2, x24
+
+ pop x29, lr
+ pop x27, x28
+ pop x25, x26
+ pop x23, x24
+ pop x21, x22
+ pop x19, x20
+.endm
+
+.macro save_host_fpsimd
+ // X0: vcpu address
+ // x2, x3: tmp regs
+ ldr x2, [x0, #VCPU_VFP_HOST]
+ kern_hyp_va x2
+ fpsimd_save x2, 3
+.endm
+
+.macro restore_host_fpsimd
+ // X0: vcpu address
+ // x2, x3: tmp regs
+ ldr x2, [x0, #VCPU_VFP_HOST]
+ kern_hyp_va x2
+ fpsimd_restore x2, 3
+.endm
+
+.macro save_guest_regs
+ // x0 is the vcpu address.
+ // x1 is the return code, do not corrupt!
+ // Guest's x0-x3 are on the stack
+
+ // Compute base to save registers
+ add x2, x0, #REG_OFFSET(4)
+ mrs x3, sp_el0
+ stp x4, x5, [x2], #16
+ stp x6, x7, [x2], #16
+ stp x8, x9, [x2], #16
+ stp x10, x11, [x2], #16
+ stp x12, x13, [x2], #16
+ stp x14, x15, [x2], #16
+ stp x16, x17, [x2], #16
+ stp x18, x19, [x2], #16
+ stp x20, x21, [x2], #16
+ stp x22, x23, [x2], #16
+ stp x24, x25, [x2], #16
+ stp x26, x27, [x2], #16
+ stp x28, x29, [x2], #16
+ stp lr, x3, [x2], #16 // LR, SP_EL0
+
+ mrs x4, elr_el2 // PC
+ mrs x5, spsr_el2 // CPSR
+ stp x4, x5, [x2], #16
+
+ pop x6, x7 // x2, x3
+ pop x4, x5 // x0, x1
+
+ add x2, x0, #REG_OFFSET(0)
+ stp x4, x5, [x2], #16
+ stp x6, x7, [x2], #16
+
+ // EL1 state
+ mrs x4, sp_el1
+ mrs x5, elr_el1
+ mrs x6, spsr_el1
+ str x4, [x0, #VCPU_SP_EL1]
+ str x5, [x0, #VCPU_ELR_EL1]
+ str x6, [x0, #SPSR_OFFSET(KVM_SPSR_EL1)]
+.endm
+
+.macro restore_guest_regs
+ // x0 is the vcpu address.
+
+ // EL1 state
+ ldr x4, [x0, #VCPU_SP_EL1]
+ ldr x5, [x0, #VCPU_ELR_EL1]
+ ldr x6, [x0, #SPSR_OFFSET(KVM_SPSR_EL1)]
+ msr sp_el1, x4
+ msr elr_el1, x5
+ msr spsr_el1, x6
+
+ // Prepare x0-x3 for later restore
+ add x1, x0, #REG_OFFSET(0)
+ ldp x4, x5, [x1], #16
+ ldp x6, x7, [x1], #16
+ push x4, x5 // Push x0-x3 on the stack
+ push x6, x7
+
+ // x4-x29, lr, sp_el0
+ ldp x4, x5, [x1], #16
+ ldp x6, x7, [x1], #16
+ ldp x8, x9, [x1], #16
+ ldp x10, x11, [x1], #16
+ ldp x12, x13, [x1], #16
+ ldp x14, x15, [x1], #16
+ ldp x16, x17, [x1], #16
+ ldp x18, x19, [x1], #16
+ ldp x20, x21, [x1], #16
+ ldp x22, x23, [x1], #16
+ ldp x24, x25, [x1], #16
+ ldp x26, x27, [x1], #16
+ ldp x28, x29, [x1], #16
+ ldp lr, x3, [x1], #16
+ msr sp_el0, x3
+
+ // PC, cpsr
+ ldp x2, x3, [x1]
+ msr elr_el2, x2
+ msr spsr_el2, x3
+
+ // Last bits of the 64bit state
+ pop x2, x3
+ pop x0, x1
+
+ // Do not touch any register after this!
+.endm
+
+.macro save_guest_fpsimd
+ // X0: vcpu address
+ // x2, x3: tmp regs
+ add x2, x0, #VCPU_VFP_GUEST
+ fpsimd_save x2, 3
+.endm
+
+.macro restore_guest_fpsimd
+ // X0: vcpu address
+ // x2, x3: tmp regs
+ add x2, x0, #VCPU_VFP_GUEST
+ fpsimd_restore x2, 3
+.endm
+
+/*
+ * Macros to perform system register save/restore.
+ *
+ * Ordering here is absolutely critical, and must be kept consistent
+ * in dump_sysregs, load_sysregs, {save,restore}_guest_sysregs,
+ * {save,restore}_guest_32bit_state, and in kvm_asm.h.
+ *
+ * In other words, don't touch any of these unless you know what
+ * you are doing.
+ */
+.macro dump_sysregs
+ mrs x4, mpidr_el1
+ mrs x5, csselr_el1
+ mrs x6, sctlr_el1
+ mrs x7, actlr_el1
+ mrs x8, cpacr_el1
+ mrs x9, ttbr0_el1
+ mrs x10, ttbr1_el1
+ mrs x11, tcr_el1
+ mrs x12, esr_el1
+ mrs x13, afsr0_el1
+ mrs x14, afsr1_el1
+ mrs x15, far_el1
+ mrs x16, mair_el1
+ mrs x17, vbar_el1
+ mrs x18, contextidr_el1
+ mrs x19, tpidr_el0
+ mrs x20, tpidrro_el0
+ mrs x21, tpidr_el1
+ mrs x22, amair_el1
+ mrs x23, cntkctl_el1
+.endm
+
+.macro load_sysregs
+ msr vmpidr_el2, x4
+ msr csselr_el1, x5
+ msr sctlr_el1, x6
+ msr actlr_el1, x7
+ msr cpacr_el1, x8
+ msr ttbr0_el1, x9
+ msr ttbr1_el1, x10
+ msr tcr_el1, x11
+ msr esr_el1, x12
+ msr afsr0_el1, x13
+ msr afsr1_el1, x14
+ msr far_el1, x15
+ msr mair_el1, x16
+ msr vbar_el1, x17
+ msr contextidr_el1, x18
+ msr tpidr_el0, x19
+ msr tpidrro_el0, x20
+ msr tpidr_el1, x21
+ msr amair_el1, x22
+ msr cntkctl_el1, x23
+.endm
+
+.macro save_host_sysregs
+ dump_sysregs
+ push x4, x5
+ push x6, x7
+ push x8, x9
+ push x10, x11
+ push x12, x13
+ push x14, x15
+ push x16, x17
+ push x18, x19
+ push x20, x21
+ push x22, x23
+.endm
+
+.macro save_guest_sysregs
+ dump_sysregs
+ add x2, x0, #SYSREG_OFFSET(CSSELR_EL1) // MPIDR_EL1 not written back
+ str x5, [x2], #8
+ stp x6, x7, [x2], #16
+ stp x8, x9, [x2], #16
+ stp x10, x11, [x2], #16
+ stp x12, x13, [x2], #16
+ stp x14, x15, [x2], #16
+ stp x16, x17, [x2], #16
+ stp x18, x19, [x2], #16
+ stp x20, x21, [x2], #16
+ stp x22, x23, [x2], #16
+.endm
+
+.macro restore_host_sysregs
+ pop x22, x23
+ pop x20, x21
+ pop x18, x19
+ pop x16, x17
+ pop x14, x15
+ pop x12, x13
+ pop x10, x11
+ pop x8, x9
+ pop x6, x7
+ pop x4, x5
+ load_sysregs
+.endm
+
+.macro restore_guest_sysregs
+ add x2, x0, #SYSREG_OFFSET(MPIDR_EL1)
+ ldp x4, x5, [x2], #16
+ ldp x6, x7, [x2], #16
+ ldp x8, x9, [x2], #16
+ ldp x10, x11, [x2], #16
+ ldp x12, x13, [x2], #16
+ ldp x14, x15, [x2], #16
+ ldp x16, x17, [x2], #16
+ ldp x18, x19, [x2], #16
+ ldp x20, x21, [x2], #16
+ ldp x22, x23, [x2], #16
+ load_sysregs
+.endm
+
+.macro activate_traps
+ ldr x2, [x0, #VCPU_IRQ_LINES]
+ ldr x1, [x0, #VCPU_HCR_EL2]
+ orr x2, x2, x1
+ msr hcr_el2, x2
+
+ ldr x2, =(CPTR_EL2_TTA)
+ msr cptr_el2, x2
+
+ ldr x2, =(1 << 15) // Trap CP15 Cr=15
+ msr hstr_el2, x2
+
+ mrs x2, mdcr_el2
+ and x2, x2, #MDCR_EL2_HPMN_MASK
+ orr x2, x2, #(MDCR_EL2_TPM | MDCR_EL2_TPMCR)
+ msr mdcr_el2, x2
+.endm
+
+.macro deactivate_traps
+ mov x2, #HCR_RW
+ msr hcr_el2, x2
+ msr cptr_el2, xzr
+ msr hstr_el2, xzr
+
+ mrs x2, mdcr_el2
+ and x2, x2, #MDCR_EL2_HPMN_MASK
+ msr mdcr_el2, x2
+.endm
+
+.macro activate_vm
+ ldr x1, [x0, #VCPU_KVM]
+ kern_hyp_va x1
+ ldr x2, [x1, #KVM_VTTBR]
+ msr vttbr_el2, x2
+.endm
+
+.macro deactivate_vm
+ msr vttbr_el2, xzr
+.endm
+
+/*
+ * Save the VGIC CPU state into memory
+ * x0: Register pointing to VCPU struct
+ * Do not corrupt x1!!!
+ */
+.macro save_vgic_state
+ /* Get VGIC VCTRL base into x2 */
+ ldr x2, [x0, #VCPU_KVM]
+ kern_hyp_va x2
+ ldr x2, [x2, #KVM_VGIC_VCTRL]
+ kern_hyp_va x2
+ cbz x2, 2f // disabled
+
+ /* Compute the address of struct vgic_cpu */
+ add x3, x0, #VCPU_VGIC_CPU
+
+ /* Save all interesting registers */
+ ldr w4, [x2, #GICH_HCR]
+ ldr w5, [x2, #GICH_VMCR]
+ ldr w6, [x2, #GICH_MISR]
+ ldr w7, [x2, #GICH_EISR0]
+ ldr w8, [x2, #GICH_EISR1]
+ ldr w9, [x2, #GICH_ELRSR0]
+ ldr w10, [x2, #GICH_ELRSR1]
+ ldr w11, [x2, #GICH_APR]
+
+ str w4, [x3, #VGIC_CPU_HCR]
+ str w5, [x3, #VGIC_CPU_VMCR]
+ str w6, [x3, #VGIC_CPU_MISR]
+ str w7, [x3, #VGIC_CPU_EISR]
+ str w8, [x3, #(VGIC_CPU_EISR + 4)]
+ str w9, [x3, #VGIC_CPU_ELRSR]
+ str w10, [x3, #(VGIC_CPU_ELRSR + 4)]
+ str w11, [x3, #VGIC_CPU_APR]
+
+ /* Clear GICH_HCR */
+ str wzr, [x2, #GICH_HCR]
+
+ /* Save list registers */
+ add x2, x2, #GICH_LR0
+ ldr w4, [x3, #VGIC_CPU_NR_LR]
+ add x3, x3, #VGIC_CPU_LR
+1: ldr w5, [x2], #4
+ str w5, [x3], #4
+ sub w4, w4, #1
+ cbnz w4, 1b
+2:
+.endm
+
+/*
+ * Restore the VGIC CPU state from memory
+ * x0: Register pointing to VCPU struct
+ */
+.macro restore_vgic_state
+ /* Get VGIC VCTRL base into x2 */
+ ldr x2, [x0, #VCPU_KVM]
+ kern_hyp_va x2
+ ldr x2, [x2, #KVM_VGIC_VCTRL]
+ kern_hyp_va x2
+ cbz x2, 2f // disabled
+
+ /* Compute the address of struct vgic_cpu */
+ add x3, x0, #VCPU_VGIC_CPU
+
+ /* We only restore a minimal set of registers */
+ ldr w4, [x3, #VGIC_CPU_HCR]
+ ldr w5, [x3, #VGIC_CPU_VMCR]
+ ldr w6, [x3, #VGIC_CPU_APR]
+
+ str w4, [x2, #GICH_HCR]
+ str w5, [x2, #GICH_VMCR]
+ str w6, [x2, #GICH_APR]
+
+ /* Restore list registers */
+ add x2, x2, #GICH_LR0
+ ldr w4, [x3, #VGIC_CPU_NR_LR]
+ add x3, x3, #VGIC_CPU_LR
+1: ldr w5, [x3], #4
+ str w5, [x2], #4
+ sub w4, w4, #1
+ cbnz w4, 1b
+2:
+.endm
+
+.macro save_timer_state
+ ldr x2, [x0, #VCPU_KVM]
+ kern_hyp_va x2
+ ldr w3, [x2, #KVM_TIMER_ENABLED]
+ cbz w3, 1f
+
+ mrs x3, cntv_ctl_el0
+ and x3, x3, #3
+ str w3, [x0, #VCPU_TIMER_CNTV_CTL]
+ bic x3, x3, #1 // Clear Enable
+ msr cntv_ctl_el0, x3
+
+ isb
+
+ mrs x3, cntv_cval_el0
+ str x3, [x0, #VCPU_TIMER_CNTV_CVAL]
+
+1:
+ // Allow physical timer/counter access for the host
+ mrs x2, cnthctl_el2
+ orr x2, x2, #3
+ msr cnthctl_el2, x2
+
+ // Clear cntvoff for the host
+ msr cntvoff_el2, xzr
+.endm
+
+.macro restore_timer_state
+ // Disallow physical timer access for the guest
+ // Physical counter access is allowed
+ mrs x2, cnthctl_el2
+ orr x2, x2, #1
+ bic x2, x2, #2
+ msr cnthctl_el2, x2
+
+ ldr x2, [x0, #VCPU_KVM]
+ kern_hyp_va x2
+ ldr w3, [x2, #KVM_TIMER_ENABLED]
+ cbz w3, 1f
+
+ ldr x3, [x2, #KVM_TIMER_CNTVOFF]
+ msr cntvoff_el2, x3
+ ldr x2, [x0, #VCPU_TIMER_CNTV_CVAL]
+ msr cntv_cval_el0, x2
+ isb
+
+ ldr w2, [x0, #VCPU_TIMER_CNTV_CTL]
+ and x2, x2, #3
+ msr cntv_ctl_el0, x2
+1:
+.endm
+
+/*
+ * u64 __kvm_vcpu_run(struct kvm_vcpu *vcpu);
+ *
+ * This is the world switch. The first half of the function
+ * deals with entering the guest, and anything from __kvm_vcpu_return
+ * to the end of the function deals with reentering the host.
+ * On the enter path, only x0 (vcpu pointer) must be preserved until
+ * the last moment. On the exit path, x0 (vcpu pointer) and x1 (exception
+ * code) must both be preserved until the epilogue.
+ */
+ENTRY(__kvm_vcpu_run)
+ kern_hyp_va x0
+ msr tpidr_el2, x0 // Save the vcpu register
+
+ save_host_regs
+ save_host_fpsimd
+ save_host_sysregs
+
+ activate_traps
+ activate_vm
+
+ restore_vgic_state
+ restore_timer_state
+ restore_guest_sysregs
+ restore_guest_fpsimd
+ restore_guest_regs
+
+ // That's it, no more messing around.
+ clrex
+ eret
+
+__kvm_vcpu_return:
+ // Assume x0 is the vcpu pointer, x1 the return code
+ // Guest's x0-x3 are on the stack
+ save_guest_regs
+ save_guest_fpsimd
+ save_guest_sysregs
+ save_timer_state
+ save_vgic_state
+
+ deactivate_traps
+ deactivate_vm
+
+ restore_host_sysregs
+ restore_host_fpsimd
+ restore_host_regs
+ mov x0, x1
+ clrex
+ ret
+END(__kvm_vcpu_run)
+
+// void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
+ENTRY(__kvm_tlb_flush_vmid_ipa)
+ kern_hyp_va x0
+ ldr x2, [x0, #KVM_VTTBR]
+ msr vttbr_el2, x2
+ isb
+
+ /*
+ * We could do so much better if we had the VA as well.
+ * Instead, we invalidate Stage-2 for this IPA, and the
+ * whole of Stage-1. Weep...
+ */
+ tlbi ipas2e1is, x1
+ dsb sy
+ tlbi vmalle1is
+ dsb sy
+ isb
+
+ msr vttbr_el2, xzr
+ isb
+ ret
+ENDPROC(__kvm_tlb_flush_vmid_ipa)
+
+ENTRY(__kvm_flush_vm_context)
+ tlbi alle1is
+ ic ialluis
+ dsb sy
+ isb
+ ret
+ENDPROC(__kvm_flush_vm_context)
+
+__kvm_hyp_panic:
+ adr x0, __hyp_panic_str
+ adr x1, 1f
+ ldp x2, x3, [x1]
+ sub x0, x0, x2
+ add x0, x0, x3
+ mrs x1, spsr_el2
+ mrs x2, elr_el2
+ mrs x3, esr_el2
+ mrs x4, far_el2
+ mrs x5, hpfar_el2
+ mrs x6, tpidr_el2
+
+ mov lr, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT |\
+ PSR_MODE_EL1h)
+ msr spsr_el2, lr
+ ldr lr, =panic
+ msr elr_el2, lr
+ eret
+
+ .align 3
+1: .quad HYP_PAGE_OFFSET
+ .quad PAGE_OFFSET
+ENDPROC(__kvm_hyp_panic)
+
+__hyp_panic_str:
+ .ascii "HYP panic:\nPS:%08x PC:%p ESR:%p\nFAR:%p HPFAR:%p VCPU:%p\n\0"
+
+ .align 2
+
+ENTRY(kvm_call_hyp)
+ hvc #0
+ ret
+ENDPROC(kvm_call_hyp)
+
+.macro invalid_vector label, target
+ .align 2
+\label:
+ b \target
+ENDPROC(\label)
+.endm
+
+ /* None of these should ever happen */
+ invalid_vector el2t_sync_invalid, __kvm_hyp_panic
+ invalid_vector el2t_irq_invalid, __kvm_hyp_panic
+ invalid_vector el2t_fiq_invalid, __kvm_hyp_panic
+ invalid_vector el2t_error_invalid, __kvm_hyp_panic
+ invalid_vector el2h_sync_invalid, __kvm_hyp_panic
+ invalid_vector el2h_irq_invalid, __kvm_hyp_panic
+ invalid_vector el2h_fiq_invalid, __kvm_hyp_panic
+ invalid_vector el2h_error_invalid, __kvm_hyp_panic
+ invalid_vector el1_sync_invalid, __kvm_hyp_panic
+ invalid_vector el1_irq_invalid, __kvm_hyp_panic
+ invalid_vector el1_fiq_invalid, __kvm_hyp_panic
+ invalid_vector el1_error_invalid, __kvm_hyp_panic
+
+el1_sync: // Guest trapped into EL2
+ push x0, x1
+ push x2, x3
+
+ mrs x1, esr_el2
+ lsr x2, x1, #ESR_EL2_EC_SHIFT
+
+ cmp x2, #ESR_EL2_EC_HVC64
+ b.ne el1_trap
+
+ mrs x3, vttbr_el2 // If vttbr is valid, the 64bit guest
+ cbnz x3, el1_trap // called HVC
+
+ /* Here, we're pretty sure the host called HVC. */
+ pop x2, x3
+ pop x0, x1
+
+ push lr, xzr
+
+ /*
+ * Compute the function address in EL2, and shuffle the parameters.
+ */
+ kern_hyp_va x0
+ mov lr, x0
+ mov x0, x1
+ mov x1, x2
+ mov x2, x3
+ blr lr
+
+ pop lr, xzr
+ eret
+
+el1_trap:
+ /*
+ * x1: ESR
+ * x2: ESR_EC
+ */
+ cmp x2, #ESR_EL2_EC_DABT
+ mov x0, #ESR_EL2_EC_IABT
+ ccmp x2, x0, #4, ne
+ b.ne 1f // Not an abort we care about
+
+ /* This is an abort. Check for permission fault */
+ and x2, x1, #ESR_EL2_FSC_TYPE
+ cmp x2, #FSC_PERM
+ b.ne 1f // Not a permission fault
+
+ /*
+ * Check for Stage-1 page table walk, which is guaranteed
+ * to give a valid HPFAR_EL2.
+ */
+ tbnz x1, #7, 1f // S1PTW is set
+
+ /*
+ * Permission fault, HPFAR_EL2 is invalid.
+ * Resolve the IPA the hard way using the guest VA.
+ * We always perform an EL1 lookup, as we already
+ * went through Stage-1.
+ */
+ mrs x3, far_el2
+ at s1e1r, x3
+ isb
+
+ /* Read result */
+ mrs x3, par_el1
+ tbnz x3, #1, 3f // Bail out if we failed the translation
+ ubfx x3, x3, #12, #36 // Extract IPA
+ lsl x3, x3, #4 // and present it like HPFAR
+ b 2f
+
+1: mrs x3, hpfar_el2
+
+2: mrs x0, tpidr_el2
+ mrs x2, far_el2
+ str x1, [x0, #VCPU_ESR_EL2]
+ str x2, [x0, #VCPU_FAR_EL2]
+ str x3, [x0, #VCPU_HPFAR_EL2]
+
+ mov x1, #ARM_EXCEPTION_TRAP
+ b __kvm_vcpu_return
+
+ /*
+ * Translation failed. Just return to the guest and
+ * let it fault again. Another CPU is probably playing
+ * behind our back.
+ */
+3: pop x2, x3
+ pop x0, x1
+
+ eret
+
+el1_irq:
+ push x0, x1
+ push x2, x3
+ mrs x0, tpidr_el2
+ mov x1, #ARM_EXCEPTION_IRQ
+ b __kvm_vcpu_return
+
+ .ltorg
+
+ .align 11
+
+ENTRY(__kvm_hyp_vector)
+ ventry el2t_sync_invalid // Synchronous EL2t
+ ventry el2t_irq_invalid // IRQ EL2t
+ ventry el2t_fiq_invalid // FIQ EL2t
+ ventry el2t_error_invalid // Error EL2t
+
+ ventry el2h_sync_invalid // Synchronous EL2h
+ ventry el2h_irq_invalid // IRQ EL2h
+ ventry el2h_fiq_invalid // FIQ EL2h
+ ventry el2h_error_invalid // Error EL2h
+
+ ventry el1_sync // Synchronous 64-bit EL1
+ ventry el1_irq // IRQ 64-bit EL1
+ ventry el1_fiq_invalid // FIQ 64-bit EL1
+ ventry el1_error_invalid // Error 64-bit EL1
+
+ ventry el1_sync // Synchronous 32-bit EL1
+ ventry el1_irq // IRQ 32-bit EL1
+ ventry el1_fiq_invalid // FIQ 32-bit EL1
+ ventry el1_error_invalid // Error 32-bit EL1
+ENDPROC(__kvm_hyp_vector)
+
+__kvm_hyp_code_end:
+ .globl __kvm_hyp_code_end
+
+ .popsection
--
1.7.12.4
^ permalink raw reply related [flat|nested] 128+ messages in thread
* [PATCH 16/29] arm64: KVM: HYP mode world switch implementation
@ 2013-03-05 3:47 ` Marc Zyngier
0 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel
The HYP mode world switch in all its glory.
Implements save/restore of host/guest registers, EL2 trapping,
IPA resolution, and additional services (TLB invalidation).
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/kernel/asm-offsets.c | 33 ++
arch/arm64/kvm/hyp.S | 756 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 789 insertions(+)
create mode 100644 arch/arm64/kvm/hyp.S
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index a2a4d81..a7f706a 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -21,6 +21,7 @@
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/dma-mapping.h>
+#include <linux/kvm_host.h>
#include <asm/thread_info.h>
#include <asm/memory.h>
#include <asm/cputable.h>
@@ -104,5 +105,37 @@ int main(void)
BLANK();
DEFINE(TZ_MINWEST, offsetof(struct timezone, tz_minuteswest));
DEFINE(TZ_DSTTIME, offsetof(struct timezone, tz_dsttime));
+ BLANK();
+#ifdef CONFIG_KVM_ARM_HOST
+ DEFINE(VCPU_REGS, offsetof(struct kvm_vcpu, arch.regs));
+ DEFINE(VCPU_USER_PT_REGS, offsetof(struct kvm_regs, regs));
+ DEFINE(VCPU_VFP_GUEST, offsetof(struct kvm_vcpu, arch.vfp_guest));
+ DEFINE(VCPU_VFP_HOST, offsetof(struct kvm_vcpu, arch.vfp_host));
+ DEFINE(VCPU_HCR_EL2, offsetof(struct kvm_vcpu, arch.hcr_el2));
+ DEFINE(VCPU_IRQ_LINES, offsetof(struct kvm_vcpu, arch.irq_lines));
+ DEFINE(VCPU_SP_EL1, offsetof(struct kvm_vcpu, arch.regs.sp_el1));
+ DEFINE(VCPU_ELR_EL1, offsetof(struct kvm_vcpu, arch.regs.elr_el1));
+ DEFINE(VCPU_SPSR, offsetof(struct kvm_vcpu, arch.regs.spsr));
+ DEFINE(VCPU_SYSREGS, offsetof(struct kvm_vcpu, arch.sys_regs));
+ DEFINE(VCPU_ESR_EL2, offsetof(struct kvm_vcpu, arch.fault.esr_el2));
+ DEFINE(VCPU_FAR_EL2, offsetof(struct kvm_vcpu, arch.fault.far_el2));
+ DEFINE(VCPU_HPFAR_EL2, offsetof(struct kvm_vcpu, arch.fault.hpfar_el2));
+ DEFINE(VCPU_TIMER_CNTV_CTL, offsetof(struct kvm_vcpu, arch.timer_cpu.cntv_ctl));
+ DEFINE(VCPU_TIMER_CNTV_CVAL, offsetof(struct kvm_vcpu, arch.timer_cpu.cntv_cval));
+ DEFINE(KVM_TIMER_CNTVOFF, offsetof(struct kvm, arch.timer.cntvoff));
+ DEFINE(KVM_TIMER_ENABLED, offsetof(struct kvm, arch.timer.enabled));
+ DEFINE(VCPU_KVM, offsetof(struct kvm_vcpu, kvm));
+ DEFINE(VCPU_VGIC_CPU, offsetof(struct kvm_vcpu, arch.vgic_cpu));
+ DEFINE(VGIC_CPU_HCR, offsetof(struct vgic_cpu, vgic_hcr));
+ DEFINE(VGIC_CPU_VMCR, offsetof(struct vgic_cpu, vgic_vmcr));
+ DEFINE(VGIC_CPU_MISR, offsetof(struct vgic_cpu, vgic_misr));
+ DEFINE(VGIC_CPU_EISR, offsetof(struct vgic_cpu, vgic_eisr));
+ DEFINE(VGIC_CPU_ELRSR, offsetof(struct vgic_cpu, vgic_elrsr));
+ DEFINE(VGIC_CPU_APR, offsetof(struct vgic_cpu, vgic_apr));
+ DEFINE(VGIC_CPU_LR, offsetof(struct vgic_cpu, vgic_lr));
+ DEFINE(VGIC_CPU_NR_LR, offsetof(struct vgic_cpu, nr_lr));
+ DEFINE(KVM_VTTBR, offsetof(struct kvm, arch.vttbr));
+ DEFINE(KVM_VGIC_VCTRL, offsetof(struct kvm, arch.vgic.vctrl_base));
+#endif
return 0;
}
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
new file mode 100644
index 0000000..cd7506d
--- /dev/null
+++ b/arch/arm64/kvm/hyp.S
@@ -0,0 +1,756 @@
+/*
+ * Copyright (C) 2012 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/linkage.h>
+#include <linux/irqchip/arm-gic.h>
+
+#include <asm/assembler.h>
+#include <asm/memory.h>
+#include <asm/asm-offsets.h>
+#include <asm/fpsimdmacros.h>
+#include <asm/kvm.h>
+#include <asm/kvm_asm.h>
+#include <asm/kvm_arm.h>
+#include <asm/kvm_mmu.h>
+
+#define REG_OFFSET(x) (VCPU_REGS + VCPU_USER_PT_REGS + 8*x)
+#define SPSR_OFFSET(x) (VCPU_SPSR + 8*x)
+#define SYSREG_OFFSET(x) (VCPU_SYSREGS + 8*x)
+
+ .text
+ .pushsection .hyp.text, "ax"
+ .align PAGE_SHIFT
+
+__kvm_hyp_code_start:
+ .globl __kvm_hyp_code_start
+
+.macro save_host_regs
+ push x19, x20
+ push x21, x22
+ push x23, x24
+ push x25, x26
+ push x27, x28
+ push x29, lr
+
+ mrs x19, sp_el0
+ mrs x20, sp_el1
+ mrs x21, elr_el1
+ mrs x22, spsr_el1
+ mrs x23, elr_el2
+ mrs x24, spsr_el2
+
+ push x19, x20
+ push x21, x22
+ push x23, x24
+.endm
+
+.macro restore_host_regs
+ pop x23, x24
+ pop x21, x22
+ pop x19, x20
+
+ msr sp_el0, x19
+ msr sp_el1, x20
+ msr elr_el1, x21
+ msr spsr_el1, x22
+ msr elr_el2, x23
+ msr spsr_el2, x24
+
+ pop x29, lr
+ pop x27, x28
+ pop x25, x26
+ pop x23, x24
+ pop x21, x22
+ pop x19, x20
+.endm
+
+.macro save_host_fpsimd
+ // x0: vcpu address
+ // x2, x3: tmp regs
+ ldr x2, [x0, #VCPU_VFP_HOST]
+ kern_hyp_va x2
+ fpsimd_save x2, 3
+.endm
+
+.macro restore_host_fpsimd
+ // x0: vcpu address
+ // x2, x3: tmp regs
+ ldr x2, [x0, #VCPU_VFP_HOST]
+ kern_hyp_va x2
+ fpsimd_restore x2, 3
+.endm
+
+.macro save_guest_regs
+ // x0 is the vcpu address.
+ // x1 is the return code, do not corrupt!
+ // Guest's x0-x3 are on the stack
+
+ // Compute base to save registers
+ add x2, x0, #REG_OFFSET(4)
+ mrs x3, sp_el0
+ stp x4, x5, [x2], #16
+ stp x6, x7, [x2], #16
+ stp x8, x9, [x2], #16
+ stp x10, x11, [x2], #16
+ stp x12, x13, [x2], #16
+ stp x14, x15, [x2], #16
+ stp x16, x17, [x2], #16
+ stp x18, x19, [x2], #16
+ stp x20, x21, [x2], #16
+ stp x22, x23, [x2], #16
+ stp x24, x25, [x2], #16
+ stp x26, x27, [x2], #16
+ stp x28, x29, [x2], #16
+ stp lr, x3, [x2], #16 // LR, SP_EL0
+
+ mrs x4, elr_el2 // PC
+ mrs x5, spsr_el2 // CPSR
+ stp x4, x5, [x2], #16
+
+ pop x6, x7 // x2, x3
+ pop x4, x5 // x0, x1
+
+ add x2, x0, #REG_OFFSET(0)
+ stp x4, x5, [x2], #16
+ stp x6, x7, [x2], #16
+
+ // EL1 state
+ mrs x4, sp_el1
+ mrs x5, elr_el1
+ mrs x6, spsr_el1
+ str x4, [x0, #VCPU_SP_EL1]
+ str x5, [x0, #VCPU_ELR_EL1]
+ str x6, [x0, #SPSR_OFFSET(KVM_SPSR_EL1)]
+.endm
+
+.macro restore_guest_regs
+ // x0 is the vcpu address.
+
+ // EL1 state
+ ldr x4, [x0, #VCPU_SP_EL1]
+ ldr x5, [x0, #VCPU_ELR_EL1]
+ ldr x6, [x0, #SPSR_OFFSET(KVM_SPSR_EL1)]
+ msr sp_el1, x4
+ msr elr_el1, x5
+ msr spsr_el1, x6
+
+ // Prepare x0-x3 for later restore
+ add x1, x0, #REG_OFFSET(0)
+ ldp x4, x5, [x1], #16
+ ldp x6, x7, [x1], #16
+ push x4, x5 // Push x0-x3 on the stack
+ push x6, x7
+
+ // x4-x29, lr, sp_el0
+ ldp x4, x5, [x1], #16
+ ldp x6, x7, [x1], #16
+ ldp x8, x9, [x1], #16
+ ldp x10, x11, [x1], #16
+ ldp x12, x13, [x1], #16
+ ldp x14, x15, [x1], #16
+ ldp x16, x17, [x1], #16
+ ldp x18, x19, [x1], #16
+ ldp x20, x21, [x1], #16
+ ldp x22, x23, [x1], #16
+ ldp x24, x25, [x1], #16
+ ldp x26, x27, [x1], #16
+ ldp x28, x29, [x1], #16
+ ldp lr, x3, [x1], #16
+ msr sp_el0, x3
+
+ // PC, cpsr
+ ldp x2, x3, [x1]
+ msr elr_el2, x2
+ msr spsr_el2, x3
+
+ // Last bits of the 64bit state
+ pop x2, x3
+ pop x0, x1
+
+ // Do not touch any register after this!
+.endm
+
+.macro save_guest_fpsimd
+ // x0: vcpu address
+ // x2, x3: tmp regs
+ add x2, x0, #VCPU_VFP_GUEST
+ fpsimd_save x2, 3
+.endm
+
+.macro restore_guest_fpsimd
+ // x0: vcpu address
+ // x2, x3: tmp regs
+ add x2, x0, #VCPU_VFP_GUEST
+ fpsimd_restore x2, 3
+.endm
+
+/*
+ * Macros to perform system register save/restore.
+ *
+ * Ordering here is absolutely critical, and must be kept consistent
+ * in dump_sysregs, load_sysregs, {save,restore}_guest_sysregs,
+ * {save,restore}_guest_32bit_state, and in kvm_asm.h.
+ *
+ * In other words, don't touch any of these unless you know what
+ * you are doing.
+ */
+.macro dump_sysregs
+ mrs x4, mpidr_el1
+ mrs x5, csselr_el1
+ mrs x6, sctlr_el1
+ mrs x7, actlr_el1
+ mrs x8, cpacr_el1
+ mrs x9, ttbr0_el1
+ mrs x10, ttbr1_el1
+ mrs x11, tcr_el1
+ mrs x12, esr_el1
+ mrs x13, afsr0_el1
+ mrs x14, afsr1_el1
+ mrs x15, far_el1
+ mrs x16, mair_el1
+ mrs x17, vbar_el1
+ mrs x18, contextidr_el1
+ mrs x19, tpidr_el0
+ mrs x20, tpidrro_el0
+ mrs x21, tpidr_el1
+ mrs x22, amair_el1
+ mrs x23, cntkctl_el1
+.endm
+
+.macro load_sysregs
+ msr vmpidr_el2, x4
+ msr csselr_el1, x5
+ msr sctlr_el1, x6
+ msr actlr_el1, x7
+ msr cpacr_el1, x8
+ msr ttbr0_el1, x9
+ msr ttbr1_el1, x10
+ msr tcr_el1, x11
+ msr esr_el1, x12
+ msr afsr0_el1, x13
+ msr afsr1_el1, x14
+ msr far_el1, x15
+ msr mair_el1, x16
+ msr vbar_el1, x17
+ msr contextidr_el1, x18
+ msr tpidr_el0, x19
+ msr tpidrro_el0, x20
+ msr tpidr_el1, x21
+ msr amair_el1, x22
+ msr cntkctl_el1, x23
+.endm
+
+.macro save_host_sysregs
+ dump_sysregs
+ push x4, x5
+ push x6, x7
+ push x8, x9
+ push x10, x11
+ push x12, x13
+ push x14, x15
+ push x16, x17
+ push x18, x19
+ push x20, x21
+ push x22, x23
+.endm
+
+.macro save_guest_sysregs
+ dump_sysregs
+ add x2, x0, #SYSREG_OFFSET(CSSELR_EL1) // MPIDR_EL1 not written back
+ str x5, [x2], #8
+ stp x6, x7, [x2], #16
+ stp x8, x9, [x2], #16
+ stp x10, x11, [x2], #16
+ stp x12, x13, [x2], #16
+ stp x14, x15, [x2], #16
+ stp x16, x17, [x2], #16
+ stp x18, x19, [x2], #16
+ stp x20, x21, [x2], #16
+ stp x22, x23, [x2], #16
+.endm
+
+.macro restore_host_sysregs
+ pop x22, x23
+ pop x20, x21
+ pop x18, x19
+ pop x16, x17
+ pop x14, x15
+ pop x12, x13
+ pop x10, x11
+ pop x8, x9
+ pop x6, x7
+ pop x4, x5
+ load_sysregs
+.endm
+
+.macro restore_guest_sysregs
+ add x2, x0, #SYSREG_OFFSET(MPIDR_EL1)
+ ldp x4, x5, [x2], #16
+ ldp x6, x7, [x2], #16
+ ldp x8, x9, [x2], #16
+ ldp x10, x11, [x2], #16
+ ldp x12, x13, [x2], #16
+ ldp x14, x15, [x2], #16
+ ldp x16, x17, [x2], #16
+ ldp x18, x19, [x2], #16
+ ldp x20, x21, [x2], #16
+ ldp x22, x23, [x2], #16
+ load_sysregs
+.endm
+
+.macro activate_traps
+ ldr x2, [x0, #VCPU_IRQ_LINES]
+ ldr x1, [x0, #VCPU_HCR_EL2]
+ orr x2, x2, x1
+ msr hcr_el2, x2
+
+ ldr x2, =(CPTR_EL2_TTA)
+ msr cptr_el2, x2
+
+ ldr x2, =(1 << 15) // Trap CP15 Cr=15
+ msr hstr_el2, x2
+
+ mrs x2, mdcr_el2
+ and x2, x2, #MDCR_EL2_HPMN_MASK
+ orr x2, x2, #(MDCR_EL2_TPM | MDCR_EL2_TPMCR)
+ msr mdcr_el2, x2
+.endm
+
+.macro deactivate_traps
+ mov x2, #HCR_RW
+ msr hcr_el2, x2
+ msr cptr_el2, xzr
+ msr hstr_el2, xzr
+
+ mrs x2, mdcr_el2
+ and x2, x2, #MDCR_EL2_HPMN_MASK
+ msr mdcr_el2, x2
+.endm
+
+.macro activate_vm
+ ldr x1, [x0, #VCPU_KVM]
+ kern_hyp_va x1
+ ldr x2, [x1, #KVM_VTTBR]
+ msr vttbr_el2, x2
+.endm
+
+.macro deactivate_vm
+ msr vttbr_el2, xzr
+.endm
+
+/*
+ * Save the VGIC CPU state into memory
+ * x0: Register pointing to VCPU struct
+ * Do not corrupt x1!!!
+ */
+.macro save_vgic_state
+ /* Get VGIC VCTRL base into x2 */
+ ldr x2, [x0, #VCPU_KVM]
+ kern_hyp_va x2
+ ldr x2, [x2, #KVM_VGIC_VCTRL]
+ kern_hyp_va x2
+ cbz x2, 2f // disabled
+
+ /* Compute the address of struct vgic_cpu */
+ add x3, x0, #VCPU_VGIC_CPU
+
+ /* Save all interesting registers */
+ ldr w4, [x2, #GICH_HCR]
+ ldr w5, [x2, #GICH_VMCR]
+ ldr w6, [x2, #GICH_MISR]
+ ldr w7, [x2, #GICH_EISR0]
+ ldr w8, [x2, #GICH_EISR1]
+ ldr w9, [x2, #GICH_ELRSR0]
+ ldr w10, [x2, #GICH_ELRSR1]
+ ldr w11, [x2, #GICH_APR]
+
+ str w4, [x3, #VGIC_CPU_HCR]
+ str w5, [x3, #VGIC_CPU_VMCR]
+ str w6, [x3, #VGIC_CPU_MISR]
+ str w7, [x3, #VGIC_CPU_EISR]
+ str w8, [x3, #(VGIC_CPU_EISR + 4)]
+ str w9, [x3, #VGIC_CPU_ELRSR]
+ str w10, [x3, #(VGIC_CPU_ELRSR + 4)]
+ str w11, [x3, #VGIC_CPU_APR]
+
+ /* Clear GICH_HCR */
+ str wzr, [x2, #GICH_HCR]
+
+ /* Save list registers */
+ add x2, x2, #GICH_LR0
+ ldr w4, [x3, #VGIC_CPU_NR_LR]
+ add x3, x3, #VGIC_CPU_LR
+1: ldr w5, [x2], #4
+ str w5, [x3], #4
+ sub w4, w4, #1
+ cbnz w4, 1b
+2:
+.endm
+
+/*
+ * Restore the VGIC CPU state from memory
+ * x0: Register pointing to VCPU struct
+ */
+.macro restore_vgic_state
+ /* Get VGIC VCTRL base into x2 */
+ ldr x2, [x0, #VCPU_KVM]
+ kern_hyp_va x2
+ ldr x2, [x2, #KVM_VGIC_VCTRL]
+ kern_hyp_va x2
+ cbz x2, 2f // disabled
+
+ /* Compute the address of struct vgic_cpu */
+ add x3, x0, #VCPU_VGIC_CPU
+
+ /* We only restore a minimal set of registers */
+ ldr w4, [x3, #VGIC_CPU_HCR]
+ ldr w5, [x3, #VGIC_CPU_VMCR]
+ ldr w6, [x3, #VGIC_CPU_APR]
+
+ str w4, [x2, #GICH_HCR]
+ str w5, [x2, #GICH_VMCR]
+ str w6, [x2, #GICH_APR]
+
+ /* Restore list registers */
+ add x2, x2, #GICH_LR0
+ ldr w4, [x3, #VGIC_CPU_NR_LR]
+ add x3, x3, #VGIC_CPU_LR
+1: ldr w5, [x3], #4
+ str w5, [x2], #4
+ sub w4, w4, #1
+ cbnz w4, 1b
+2:
+.endm
+
+.macro save_timer_state
+ ldr x2, [x0, #VCPU_KVM]
+ kern_hyp_va x2
+ ldr w3, [x2, #KVM_TIMER_ENABLED]
+ cbz w3, 1f
+
+ mrs x3, cntv_ctl_el0
+ and x3, x3, #3
+ str w3, [x0, #VCPU_TIMER_CNTV_CTL]
+ bic x3, x3, #1 // Clear Enable
+ msr cntv_ctl_el0, x3
+
+ isb
+
+ mrs x3, cntv_cval_el0
+ str x3, [x0, #VCPU_TIMER_CNTV_CVAL]
+
+1:
+ // Allow physical timer/counter access for the host
+ mrs x2, cnthctl_el2
+ orr x2, x2, #3
+ msr cnthctl_el2, x2
+
+ // Clear cntvoff for the host
+ msr cntvoff_el2, xzr
+.endm
+
+.macro restore_timer_state
+ // Disallow physical timer access for the guest
+ // Physical counter access is allowed
+ mrs x2, cnthctl_el2
+ orr x2, x2, #1
+ bic x2, x2, #2
+ msr cnthctl_el2, x2
+
+ ldr x2, [x0, #VCPU_KVM]
+ kern_hyp_va x2
+ ldr w3, [x2, #KVM_TIMER_ENABLED]
+ cbz w3, 1f
+
+ ldr x3, [x2, #KVM_TIMER_CNTVOFF]
+ msr cntvoff_el2, x3
+ ldr x2, [x0, #VCPU_TIMER_CNTV_CVAL]
+ msr cntv_cval_el0, x2
+ isb
+
+ ldr w2, [x0, #VCPU_TIMER_CNTV_CTL]
+ and x2, x2, #3
+ msr cntv_ctl_el0, x2
+1:
+.endm
+
+/*
+ * u64 __kvm_vcpu_run(struct kvm_vcpu *vcpu);
+ *
+ * This is the world switch. The first half of the function
+ * deals with entering the guest, and anything from __kvm_vcpu_return
+ * to the end of the function deals with reentering the host.
+ * On the enter path, only x0 (vcpu pointer) must be preserved until
+ * the last moment. On the exit path, x0 (vcpu pointer) and x1 (exception
+ * code) must both be preserved until the epilogue.
+ */
+ENTRY(__kvm_vcpu_run)
+ kern_hyp_va x0
+ msr tpidr_el2, x0 // Save the vcpu register
+
+ save_host_regs
+ save_host_fpsimd
+ save_host_sysregs
+
+ activate_traps
+ activate_vm
+
+ restore_vgic_state
+ restore_timer_state
+ restore_guest_sysregs
+ restore_guest_fpsimd
+ restore_guest_regs
+
+ // That's it, no more messing around.
+ clrex
+ eret
+
+__kvm_vcpu_return:
+ // Assume x0 is the vcpu pointer, x1 the return code
+ // Guest's x0-x3 are on the stack
+ save_guest_regs
+ save_guest_fpsimd
+ save_guest_sysregs
+ save_timer_state
+ save_vgic_state
+
+ deactivate_traps
+ deactivate_vm
+
+ restore_host_sysregs
+ restore_host_fpsimd
+ restore_host_regs
+ mov x0, x1
+ clrex
+ ret
+END(__kvm_vcpu_run)
+
+// void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
+ENTRY(__kvm_tlb_flush_vmid_ipa)
+ kern_hyp_va x0
+ ldr x2, [x0, #KVM_VTTBR]
+ msr vttbr_el2, x2
+ isb
+
+ /*
+ * We could do so much better if we had the VA as well.
+ * Instead, we invalidate Stage-2 for this IPA, and the
+ * whole of Stage-1. Weep...
+ */
+ tlbi ipas2e1is, x1
+ dsb sy
+ tlbi vmalle1is
+ dsb sy
+ isb
+
+ msr vttbr_el2, xzr
+ isb
+ ret
+ENDPROC(__kvm_tlb_flush_vmid_ipa)
+
+ENTRY(__kvm_flush_vm_context)
+ tlbi alle1is
+ ic ialluis
+ dsb sy
+ isb
+ ret
+ENDPROC(__kvm_flush_vm_context)
+
+__kvm_hyp_panic:
+ adr x0, __hyp_panic_str
+ adr x1, 1f
+ ldp x2, x3, [x1]
+ sub x0, x0, x2
+ add x0, x0, x3
+ mrs x1, spsr_el2
+ mrs x2, elr_el2
+ mrs x3, esr_el2
+ mrs x4, far_el2
+ mrs x5, hpfar_el2
+ mrs x6, tpidr_el2
+
+ mov lr, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT |\
+ PSR_MODE_EL1h)
+ msr spsr_el2, lr
+ ldr lr, =panic
+ msr elr_el2, lr
+ eret
+
+ .align 3
+1: .quad HYP_PAGE_OFFSET
+ .quad PAGE_OFFSET
+ENDPROC(__kvm_hyp_panic)
+
+__hyp_panic_str:
+ .ascii "HYP panic:\nPS:%08x PC:%p ESR:%p\nFAR:%p HPFAR:%p VCPU:%p\n\0"
+
+ .align 2
+
+ENTRY(kvm_call_hyp)
+ hvc #0
+ ret
+ENDPROC(kvm_call_hyp)
+
+.macro invalid_vector label, target
+ .align 2
+\label:
+ b \target
+ENDPROC(\label)
+.endm
+
+ /* None of these should ever happen */
+ invalid_vector el2t_sync_invalid, __kvm_hyp_panic
+ invalid_vector el2t_irq_invalid, __kvm_hyp_panic
+ invalid_vector el2t_fiq_invalid, __kvm_hyp_panic
+ invalid_vector el2t_error_invalid, __kvm_hyp_panic
+ invalid_vector el2h_sync_invalid, __kvm_hyp_panic
+ invalid_vector el2h_irq_invalid, __kvm_hyp_panic
+ invalid_vector el2h_fiq_invalid, __kvm_hyp_panic
+ invalid_vector el2h_error_invalid, __kvm_hyp_panic
+ invalid_vector el1_sync_invalid, __kvm_hyp_panic
+ invalid_vector el1_irq_invalid, __kvm_hyp_panic
+ invalid_vector el1_fiq_invalid, __kvm_hyp_panic
+ invalid_vector el1_error_invalid, __kvm_hyp_panic
+
+el1_sync: // Guest trapped into EL2
+ push x0, x1
+ push x2, x3
+
+ mrs x1, esr_el2
+ lsr x2, x1, #ESR_EL2_EC_SHIFT
+
+ cmp x2, #ESR_EL2_EC_HVC64
+ b.ne el1_trap
+
+ mrs x3, vttbr_el2 // If vttbr is valid, the 64bit guest
+ cbnz x3, el1_trap // called HVC
+
+ /* Here, we're pretty sure the host called HVC. */
+ pop x2, x3
+ pop x0, x1
+
+ push lr, xzr
+
+ /*
+ * Compute the function address in EL2, and shuffle the parameters.
+ */
+ kern_hyp_va x0
+ mov lr, x0
+ mov x0, x1
+ mov x1, x2
+ mov x2, x3
+ blr lr
+
+ pop lr, xzr
+ eret
+
+el1_trap:
+ /*
+ * x1: ESR
+ * x2: ESR_EC
+ */
+ cmp x2, #ESR_EL2_EC_DABT
+ mov x0, #ESR_EL2_EC_IABT
+ ccmp x2, x0, #4, ne
+ b.ne 1f // Not an abort we care about
+
+ /* This is an abort. Check for permission fault */
+ and x2, x1, #ESR_EL2_FSC_TYPE
+ cmp x2, #FSC_PERM
+ b.ne 1f // Not a permission fault
+
+ /*
+ * Check for Stage-1 page table walk, which is guaranteed
+ * to give a valid HPFAR_EL2.
+ */
+ tbnz x1, #7, 1f // S1PTW is set
+
+ /*
+ * Permission fault, HPFAR_EL2 is invalid.
+ * Resolve the IPA the hard way using the guest VA.
+ * We always perform an EL1 lookup, as we already
+ * went through Stage-1.
+ */
+ mrs x3, far_el2
+ at s1e1r, x3
+ isb
+
+ /* Read result */
+ mrs x3, par_el1
+ tbnz x3, #1, 3f // Bail out if we failed the translation
+ ubfx x3, x3, #12, #36 // Extract IPA
+ lsl x3, x3, #4 // and present it like HPFAR
+ b 2f
+
+1: mrs x3, hpfar_el2
+
+2: mrs x0, tpidr_el2
+ mrs x2, far_el2
+ str x1, [x0, #VCPU_ESR_EL2]
+ str x2, [x0, #VCPU_FAR_EL2]
+ str x3, [x0, #VCPU_HPFAR_EL2]
+
+ mov x1, #ARM_EXCEPTION_TRAP
+ b __kvm_vcpu_return
+
+ /*
+ * Translation failed. Just return to the guest and
+ * let it fault again. Another CPU is probably playing
+ * behind our back.
+ */
+3: pop x2, x3
+ pop x0, x1
+
+ eret
+
+el1_irq:
+ push x0, x1
+ push x2, x3
+ mrs x0, tpidr_el2
+ mov x1, #ARM_EXCEPTION_IRQ
+ b __kvm_vcpu_return
+
+ .ltorg
+
+ .align 11
+
+ENTRY(__kvm_hyp_vector)
+ ventry el2t_sync_invalid // Synchronous EL2t
+ ventry el2t_irq_invalid // IRQ EL2t
+ ventry el2t_fiq_invalid // FIQ EL2t
+ ventry el2t_error_invalid // Error EL2t
+
+ ventry el2h_sync_invalid // Synchronous EL2h
+ ventry el2h_irq_invalid // IRQ EL2h
+ ventry el2h_fiq_invalid // FIQ EL2h
+ ventry el2h_error_invalid // Error EL2h
+
+ ventry el1_sync // Synchronous 64-bit EL1
+ ventry el1_irq // IRQ 64-bit EL1
+ ventry el1_fiq_invalid // FIQ 64-bit EL1
+ ventry el1_error_invalid // Error 64-bit EL1
+
+ ventry el1_sync // Synchronous 32-bit EL1
+ ventry el1_irq // IRQ 32-bit EL1
+ ventry el1_fiq_invalid // FIQ 32-bit EL1
+ ventry el1_error_invalid // Error 32-bit EL1
+ENDPROC(__kvm_hyp_vector)
+
+__kvm_hyp_code_end:
+ .globl __kvm_hyp_code_end
+
+ .popsection
--
1.7.12.4
^ permalink raw reply related [flat|nested] 128+ messages in thread
* [PATCH 17/29] arm64: KVM: Exit handling
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
0 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Handle the exit of a VM, decoding the exit reason from HYP mode
and calling the corresponding handler.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/kvm/handle_exit.c | 119 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 119 insertions(+)
create mode 100644 arch/arm64/kvm/handle_exit.c
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
new file mode 100644
index 0000000..0e1fa4e
--- /dev/null
+++ b/arch/arm64/kvm/handle_exit.c
@@ -0,0 +1,119 @@
+/*
+ * Copyright (C) 2012 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * Derived from arch/arm/kvm/handle_exit.c:
+ * Copyright (C) 2012 - Virtual Open Systems and Columbia University
+ * Author: Christoffer Dall <c.dall@virtualopensystems.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/kvm.h>
+#include <linux/kvm_host.h>
+#include <asm/kvm_emulate.h>
+#include <asm/kvm_coproc.h>
+#include <asm/kvm_mmu.h>
+
+typedef int (*exit_handle_fn)(struct kvm_vcpu *, struct kvm_run *);
+
+static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+ /*
+ * Guest called HVC instruction:
+ * Let it know we don't want that by injecting an undefined exception.
+ */
+ kvm_debug("hvc: %x (at %08lx)", kvm_vcpu_get_hsr(vcpu) & ((1 << 16) - 1),
+ *vcpu_pc(vcpu));
+ kvm_debug(" HSR: %8x", kvm_vcpu_get_hsr(vcpu));
+ kvm_inject_undefined(vcpu);
+ return 1;
+}
+
+static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+ /* We don't support SMC; don't do that. */
+ kvm_debug("smc: at %08lx", *vcpu_pc(vcpu));
+ kvm_inject_undefined(vcpu);
+ return 1;
+}
+
+/**
+ * kvm_handle_wfi - handle a wait-for-interrupts instruction executed by a guest
+ * @vcpu: the vcpu pointer
+ *
+ * Simply call kvm_vcpu_block(), which will halt execution of
+ * world-switches and schedule other host processes until there is an
+ * incoming IRQ or FIQ to the VM.
+ */
+static int kvm_handle_wfi(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+ kvm_vcpu_block(vcpu);
+ return 1;
+}
+
+static exit_handle_fn arm_exit_handlers[] = {
+ [ESR_EL2_EC_WFI] = kvm_handle_wfi,
+ [ESR_EL2_EC_HVC64] = handle_hvc,
+ [ESR_EL2_EC_SMC64] = handle_smc,
+ [ESR_EL2_EC_SYS64] = kvm_handle_sys_reg,
+ [ESR_EL2_EC_IABT] = kvm_handle_guest_abort,
+ [ESR_EL2_EC_DABT] = kvm_handle_guest_abort,
+};
+
+static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
+{
+ u8 hsr_ec = kvm_vcpu_trap_get_class(vcpu);
+
+ if (hsr_ec >= ARRAY_SIZE(arm_exit_handlers) ||
+ !arm_exit_handlers[hsr_ec]) {
+ kvm_err("Unknown exception class: hsr: %#08x\n",
+ (unsigned int)kvm_vcpu_get_hsr(vcpu));
+ BUG();
+ }
+
+ return arm_exit_handlers[hsr_ec];
+}
+
+/*
+ * Return > 0 to return to guest, < 0 on error, 0 (and set exit_reason) on
+ * proper exit to userspace.
+ */
+int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ int exception_index)
+{
+ exit_handle_fn exit_handler;
+
+ switch (exception_index) {
+ case ARM_EXCEPTION_IRQ:
+ return 1;
+ case ARM_EXCEPTION_TRAP:
+ /*
+ * See ARM ARM B1.14.1: "Hyp traps on instructions
+ * that fail their condition code check"
+ */
+ if (!kvm_condition_valid(vcpu)) {
+ kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
+ return 1;
+ }
+
+ exit_handler = kvm_get_exit_handler(vcpu);
+
+ return exit_handler(vcpu, run);
+ default:
+ kvm_pr_unimpl("Unsupported exception type: %d",
+ exception_index);
+ run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+ return 0;
+ }
+}
--
1.7.12.4
^ permalink raw reply related [flat|nested] 128+ messages in thread
* [PATCH 17/29] arm64: KVM: Exit handling
@ 2013-03-05 3:47 ` Marc Zyngier
0 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel
Handle the exit of a VM, decoding the exit reason from HYP mode
and calling the corresponding handler.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/kvm/handle_exit.c | 119 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 119 insertions(+)
create mode 100644 arch/arm64/kvm/handle_exit.c
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
new file mode 100644
index 0000000..0e1fa4e
--- /dev/null
+++ b/arch/arm64/kvm/handle_exit.c
@@ -0,0 +1,119 @@
+/*
+ * Copyright (C) 2012 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * Derived from arch/arm/kvm/handle_exit.c:
+ * Copyright (C) 2012 - Virtual Open Systems and Columbia University
+ * Author: Christoffer Dall <c.dall@virtualopensystems.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/kvm.h>
+#include <linux/kvm_host.h>
+#include <asm/kvm_emulate.h>
+#include <asm/kvm_coproc.h>
+#include <asm/kvm_mmu.h>
+
+typedef int (*exit_handle_fn)(struct kvm_vcpu *, struct kvm_run *);
+
+static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+ /*
+ * Guest called HVC instruction:
+ * Let it know we don't want that by injecting an undefined exception.
+ */
+ kvm_debug("hvc: %x (at %08lx)", kvm_vcpu_get_hsr(vcpu) & ((1 << 16) - 1),
+ *vcpu_pc(vcpu));
+ kvm_debug(" HSR: %8x", kvm_vcpu_get_hsr(vcpu));
+ kvm_inject_undefined(vcpu);
+ return 1;
+}
+
+static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+ /* We don't support SMC; don't do that. */
+ kvm_debug("smc: at %08lx", *vcpu_pc(vcpu));
+ kvm_inject_undefined(vcpu);
+ return 1;
+}
+
+/**
+ * kvm_handle_wfi - handle a wait-for-interrupts instruction executed by a guest
+ * @vcpu: the vcpu pointer
+ *
+ * Simply call kvm_vcpu_block(), which will halt execution of
+ * world-switches and schedule other host processes until there is an
+ * incoming IRQ or FIQ to the VM.
+ */
+static int kvm_handle_wfi(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+ kvm_vcpu_block(vcpu);
+ return 1;
+}
+
+static exit_handle_fn arm_exit_handlers[] = {
+ [ESR_EL2_EC_WFI] = kvm_handle_wfi,
+ [ESR_EL2_EC_HVC64] = handle_hvc,
+ [ESR_EL2_EC_SMC64] = handle_smc,
+ [ESR_EL2_EC_SYS64] = kvm_handle_sys_reg,
+ [ESR_EL2_EC_IABT] = kvm_handle_guest_abort,
+ [ESR_EL2_EC_DABT] = kvm_handle_guest_abort,
+};
+
+static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
+{
+ u8 hsr_ec = kvm_vcpu_trap_get_class(vcpu);
+
+ if (hsr_ec >= ARRAY_SIZE(arm_exit_handlers) ||
+ !arm_exit_handlers[hsr_ec]) {
+ kvm_err("Unknown exception class: hsr: %#08x\n",
+ (unsigned int)kvm_vcpu_get_hsr(vcpu));
+ BUG();
+ }
+
+ return arm_exit_handlers[hsr_ec];
+}
+
+/*
+ * Return > 0 to return to guest, < 0 on error, 0 (and set exit_reason) on
+ * proper exit to userspace.
+ */
+int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ int exception_index)
+{
+ exit_handle_fn exit_handler;
+
+ switch (exception_index) {
+ case ARM_EXCEPTION_IRQ:
+ return 1;
+ case ARM_EXCEPTION_TRAP:
+ /*
+ * See ARM ARM B1.14.1: "Hyp traps on instructions
+ * that fail their condition code check"
+ */
+ if (!kvm_condition_valid(vcpu)) {
+ kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
+ return 1;
+ }
+
+ exit_handler = kvm_get_exit_handler(vcpu);
+
+ return exit_handler(vcpu, run);
+ default:
+ kvm_pr_unimpl("Unsupported exception type: %d",
+ exception_index);
+ run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+ return 0;
+ }
+}
--
1.7.12.4
* [PATCH 18/29] arm64: KVM: Plug the VGIC
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
0 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Shouldn't be needed - this is a complete duplicate of the arch/arm version.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/include/asm/kvm_vgic.h | 156 ++++++++++++++++++++++++++++++++++++++
1 file changed, 156 insertions(+)
create mode 100644 arch/arm64/include/asm/kvm_vgic.h
diff --git a/arch/arm64/include/asm/kvm_vgic.h b/arch/arm64/include/asm/kvm_vgic.h
new file mode 100644
index 0000000..f353f22
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_vgic.h
@@ -0,0 +1,156 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ARM64_KVM_VGIC_H
+#define __ARM64_KVM_VGIC_H
+
+#include <linux/kernel.h>
+#include <linux/kvm.h>
+#include <linux/irqreturn.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+#include <linux/irqchip/arm-gic.h>
+
+#define VGIC_NR_IRQS 128
+#define VGIC_NR_SGIS 16
+#define VGIC_NR_PPIS 16
+#define VGIC_NR_PRIVATE_IRQS (VGIC_NR_SGIS + VGIC_NR_PPIS)
+#define VGIC_NR_SHARED_IRQS (VGIC_NR_IRQS - VGIC_NR_PRIVATE_IRQS)
+#define VGIC_MAX_CPUS KVM_MAX_VCPUS
+
+/* Sanity checks... */
+#if (VGIC_MAX_CPUS > 8)
+#error Invalid number of CPU interfaces
+#endif
+
+#if (VGIC_NR_IRQS & 31)
+#error "VGIC_NR_IRQS must be a multiple of 32"
+#endif
+
+#if (VGIC_NR_IRQS > 1024)
+#error "VGIC_NR_IRQS must be <= 1024"
+#endif
+
+/*
+ * The GIC distributor registers describing interrupts have two parts:
+ * - 32 per-CPU interrupts (SGI + PPI)
+ * - a bunch of shared interrupts (SPI)
+ */
+struct vgic_bitmap {
+ union {
+ u32 reg[VGIC_NR_PRIVATE_IRQS / 32];
+ DECLARE_BITMAP(reg_ul, VGIC_NR_PRIVATE_IRQS);
+ } percpu[VGIC_MAX_CPUS];
+ union {
+ u32 reg[VGIC_NR_SHARED_IRQS / 32];
+ DECLARE_BITMAP(reg_ul, VGIC_NR_SHARED_IRQS);
+ } shared;
+};
+
+struct vgic_bytemap {
+ u32 percpu[VGIC_MAX_CPUS][VGIC_NR_PRIVATE_IRQS / 4];
+ u32 shared[VGIC_NR_SHARED_IRQS / 4];
+};
+
+struct vgic_dist {
+ spinlock_t lock;
+ bool ready;
+
+ /* Virtual control interface mapping */
+ void __iomem *vctrl_base;
+
+ /* Distributor and vcpu interface mapping in the guest */
+ phys_addr_t vgic_dist_base;
+ phys_addr_t vgic_cpu_base;
+
+ /* Distributor enabled */
+ u32 enabled;
+
+ /* Interrupt enabled (one bit per IRQ) */
+ struct vgic_bitmap irq_enabled;
+
+ /* Interrupt 'pin' level */
+ struct vgic_bitmap irq_state;
+
+ /* Level-triggered interrupt in progress */
+ struct vgic_bitmap irq_active;
+
+ /* Interrupt priority. Not used yet. */
+ struct vgic_bytemap irq_priority;
+
+ /* Level/edge triggered */
+ struct vgic_bitmap irq_cfg;
+
+ /* Source CPU per SGI and target CPU */
+ u8 irq_sgi_sources[VGIC_MAX_CPUS][16];
+
+ /* Target CPU for each IRQ */
+ u8 irq_spi_cpu[VGIC_NR_SHARED_IRQS];
+ struct vgic_bitmap irq_spi_target[VGIC_MAX_CPUS];
+
+ /* Bitmap indicating which CPU has something pending */
+ unsigned long irq_pending_on_cpu;
+};
+
+struct vgic_cpu {
+ /* per IRQ to LR mapping */
+ u8 vgic_irq_lr_map[VGIC_NR_IRQS];
+
+ /* Pending interrupts on this VCPU */
+ DECLARE_BITMAP(pending_percpu, VGIC_NR_PRIVATE_IRQS);
+ DECLARE_BITMAP(pending_shared, VGIC_NR_SHARED_IRQS);
+
+ /* Bitmap of used/free list registers */
+ DECLARE_BITMAP(lr_used, 64);
+
+ /* Number of list registers on this CPU */
+ int nr_lr;
+
+ /* CPU vif control registers for world switch */
+ u32 vgic_hcr;
+ u32 vgic_vmcr;
+ u32 vgic_misr; /* Saved only */
+ u32 vgic_eisr[2]; /* Saved only */
+ u32 vgic_elrsr[2]; /* Saved only */
+ u32 vgic_apr;
+ u32 vgic_lr[64]; /* A15 has only 4... */
+};
+
+#define LR_EMPTY 0xff
+
+struct kvm;
+struct kvm_vcpu;
+struct kvm_run;
+struct kvm_exit_mmio;
+
+int kvm_vgic_set_addr(struct kvm *kvm, unsigned long type, u64 addr);
+int kvm_vgic_hyp_init(void);
+int kvm_vgic_init(struct kvm *kvm);
+int kvm_vgic_create(struct kvm *kvm);
+int kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu);
+void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu);
+void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu);
+int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int irq_num,
+ bool level);
+int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu);
+bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ struct kvm_exit_mmio *mmio);
+
+#define irqchip_in_kernel(k) (!!((k)->arch.vgic.vctrl_base))
+#define vgic_initialized(k) ((k)->arch.vgic.ready)
+
+#endif
--
1.7.12.4
* [PATCH 19/29] arm64: KVM: Plug the arch timer
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
0 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Shouldn't be needed - this is a complete duplicate of the arch/arm version.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm/kvm/arch_timer.c | 1 +
arch/arm64/include/asm/kvm_arch_timer.h | 58 +++++++++++++++++++++++++++++++++
2 files changed, 59 insertions(+)
create mode 100644 arch/arm64/include/asm/kvm_arch_timer.h
diff --git a/arch/arm/kvm/arch_timer.c b/arch/arm/kvm/arch_timer.c
index 6ac938d..ca04a99 100644
--- a/arch/arm/kvm/arch_timer.c
+++ b/arch/arm/kvm/arch_timer.c
@@ -194,6 +194,7 @@ static struct notifier_block kvm_timer_cpu_nb = {
static const struct of_device_id arch_timer_of_match[] = {
{ .compatible = "arm,armv7-timer", },
+ { .compatible = "arm,armv8-timer", },
{},
};
diff --git a/arch/arm64/include/asm/kvm_arch_timer.h b/arch/arm64/include/asm/kvm_arch_timer.h
new file mode 100644
index 0000000..eb02273
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_arch_timer.h
@@ -0,0 +1,58 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ARM64_KVM_ARCH_TIMER_H
+#define __ARM64_KVM_ARCH_TIMER_H
+
+#include <linux/clocksource.h>
+#include <linux/hrtimer.h>
+#include <linux/workqueue.h>
+
+struct arch_timer_kvm {
+ /* Is the timer enabled */
+ bool enabled;
+
+ /* Virtual offset, restored only */
+ cycle_t cntvoff;
+};
+
+struct arch_timer_cpu {
+ /* Background timer used when the guest is not running */
+ struct hrtimer timer;
+
+ /* Work queued when the above timer expires */
+ struct work_struct expired;
+
+ /* Background timer active */
+ bool armed;
+
+ /* Timer IRQ */
+ const struct kvm_irq_level *irq;
+
+ /* Registers: control register, timer value */
+ u32 cntv_ctl; /* Saved/restored */
+ cycle_t cntv_cval; /* Saved/restored */
+};
+
+int kvm_timer_hyp_init(void);
+int kvm_timer_init(struct kvm *kvm);
+void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu);
+void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu);
+void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu);
+void kvm_timer_vcpu_terminate(struct kvm_vcpu *vcpu);
+
+#endif
--
1.7.12.4
* [PATCH 20/29] arm64: KVM: PSCI implementation
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
0 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Wire the PSCI backend into the exit handling code.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/include/asm/kvm_host.h | 2 +-
arch/arm64/include/asm/kvm_psci.h | 23 +++++++++++++++++++++++
arch/arm64/include/uapi/asm/kvm.h | 16 ++++++++++++++++
arch/arm64/kvm/handle_exit.c | 16 +++++++---------
4 files changed, 47 insertions(+), 10 deletions(-)
create mode 100644 arch/arm64/include/asm/kvm_psci.h
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 85e706b..68558ac 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -34,7 +34,7 @@
#include <asm/kvm_vgic.h>
#include <asm/kvm_arch_timer.h>
-#define KVM_VCPU_MAX_FEATURES 0
+#define KVM_VCPU_MAX_FEATURES 1
/* We don't currently support large pages. */
#define KVM_HPAGE_GFN_SHIFT(x) 0
diff --git a/arch/arm64/include/asm/kvm_psci.h b/arch/arm64/include/asm/kvm_psci.h
new file mode 100644
index 0000000..d96f054
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_psci.h
@@ -0,0 +1,23 @@
+/*
+ * Copyright (C) 2012 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ARM64_KVM_PSCI_H__
+#define __ARM64_KVM_PSCI_H__
+
+bool kvm_psci_call(struct kvm_vcpu *vcpu);
+
+#endif /* __ARM64_KVM_PSCI_H__ */
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index fffeb11..24c8318 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -64,6 +64,8 @@ struct kvm_regs {
#define KVM_VGIC_V2_DIST_SIZE 0x1000
#define KVM_VGIC_V2_CPU_SIZE 0x2000
+#define KVM_ARM_VCPU_POWER_OFF 0 /* CPU is started in OFF state */
+
struct kvm_vcpu_init {
__u32 target;
__u32 features[7];
@@ -163,6 +165,20 @@ struct kvm_arch_memory_slot {
/* Highest supported SPI, from VGIC_NR_IRQS */
#define KVM_ARM_IRQ_GIC_MAX 127
+/* PSCI interface */
+#define KVM_PSCI_FN_BASE 0x95c1ba5e
+#define KVM_PSCI_FN(n) (KVM_PSCI_FN_BASE + (n))
+
+#define KVM_PSCI_FN_CPU_SUSPEND KVM_PSCI_FN(0)
+#define KVM_PSCI_FN_CPU_OFF KVM_PSCI_FN(1)
+#define KVM_PSCI_FN_CPU_ON KVM_PSCI_FN(2)
+#define KVM_PSCI_FN_MIGRATE KVM_PSCI_FN(3)
+
+#define KVM_PSCI_RET_SUCCESS 0
+#define KVM_PSCI_RET_NI ((unsigned long)-1)
+#define KVM_PSCI_RET_INVAL ((unsigned long)-2)
+#define KVM_PSCI_RET_DENIED ((unsigned long)-3)
+
#endif
#endif /* __ARM_KVM_H__ */
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 0e1fa4e..fa38230 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -24,26 +24,24 @@
#include <asm/kvm_emulate.h>
#include <asm/kvm_coproc.h>
#include <asm/kvm_mmu.h>
+#include <asm/kvm_psci.h>
typedef int (*exit_handle_fn)(struct kvm_vcpu *, struct kvm_run *);
static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
- /*
- * Guest called HVC instruction:
- * Let it know we don't want that by injecting an undefined exception.
- */
- kvm_debug("hvc: %x (at %08lx)", kvm_vcpu_get_hsr(vcpu) & ((1 << 16) - 1),
- *vcpu_pc(vcpu));
- kvm_debug(" HSR: %8x", kvm_vcpu_get_hsr(vcpu));
+ if (kvm_psci_call(vcpu))
+ return 1;
+
kvm_inject_undefined(vcpu);
return 1;
}
static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
- /* We don't support SMC; don't do that. */
- kvm_debug("smc: at %08lx", *vcpu_pc(vcpu));
+ if (kvm_psci_call(vcpu))
+ return 1;
+
kvm_inject_undefined(vcpu);
return 1;
}
--
1.7.12.4
* [PATCH 21/29] arm64: KVM: Build system integration
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
0 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/Kconfig | 2 ++
arch/arm64/Makefile | 2 +-
arch/arm64/kvm/Kconfig | 59 +++++++++++++++++++++++++++++++++++++++++++++++++
arch/arm64/kvm/Makefile | 18 +++++++++++++++
4 files changed, 80 insertions(+), 1 deletion(-)
create mode 100644 arch/arm64/kvm/Kconfig
create mode 100644 arch/arm64/kvm/Makefile
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 536611a..cff688b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -241,6 +241,8 @@ source "drivers/Kconfig"
source "fs/Kconfig"
+source "arch/arm64/kvm/Kconfig"
+
source "arch/arm64/Kconfig.debug"
source "security/Kconfig"
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index c95c5cb..ae89e63 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -36,7 +36,7 @@ TEXT_OFFSET := 0x00080000
export TEXT_OFFSET GZFLAGS
-core-y += arch/arm64/kernel/ arch/arm64/mm/
+core-y += arch/arm64/kernel/ arch/arm64/mm/ arch/arm64/kvm/
libs-y := arch/arm64/lib/ $(libs-y)
libs-y += $(LIBGCC)
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
new file mode 100644
index 0000000..a76be8b
--- /dev/null
+++ b/arch/arm64/kvm/Kconfig
@@ -0,0 +1,59 @@
+#
+# KVM configuration
+#
+
+source "virt/kvm/Kconfig"
+
+menuconfig VIRTUALIZATION
+ bool "Virtualization"
+ ---help---
+ Say Y here to get to see options for using your Linux host to run
+ other operating systems inside virtual machines (guests).
+ This option alone does not add any kernel code.
+
+ If you say N, all options in this submenu will be skipped and
+ disabled.
+
+if VIRTUALIZATION
+
+config KVM
+ bool "Kernel-based Virtual Machine (KVM) support"
+ select PREEMPT_NOTIFIERS
+ select ANON_INODES
+ select KVM_MMIO
+ select KVM_ARM_HOST
+ select KVM_ARM_VGIC
+ select KVM_ARM_TIMER
+ ---help---
+ Support hosting virtualized guest machines.
+
+ This module provides access to the hardware capabilities through
+ a character device node named /dev/kvm.
+
+ If unsure, say N.
+
+config KVM_ARM_HOST
+ bool
+ depends on KVM
+ depends on MMU
+ select MMU_NOTIFIER
+ ---help---
+ Provides host support for ARM processors.
+
+config KVM_ARM_VGIC
+ bool
+ depends on KVM_ARM_HOST && OF
+ select HAVE_KVM_IRQCHIP
+ ---help---
+ Adds support for a hardware assisted, in-kernel GIC emulation.
+
+config KVM_ARM_TIMER
+ bool
+ depends on KVM_ARM_VGIC
+ select HAVE_KVM_IRQCHIP
+ ---help---
+ Adds support for the Architected Timers in virtual machines.
+
+source drivers/virtio/Kconfig
+
+endif # VIRTUALIZATION
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
new file mode 100644
index 0000000..14ba38d
--- /dev/null
+++ b/arch/arm64/kvm/Makefile
@@ -0,0 +1,18 @@
+#
+# Makefile for Kernel-based Virtual Machine module
+#
+
+ccflags-y += -Ivirt/kvm -Iarch/arm64/kvm
+CFLAGS_arm.o := -I.
+CFLAGS_mmu.o := -I.
+
+obj-$(CONFIG_KVM_ARM_HOST) += $(addprefix ../../../virt/kvm/, kvm_main.o coalesced_mmio.o)
+
+obj-$(CONFIG_KVM_ARM_HOST) += $(addprefix ../../../arch/arm/kvm/, arm.o mmu.o mmio.o psci.o perf.o)
+
+obj-$(CONFIG_KVM_ARM_HOST) += inject_fault.o
+obj-$(CONFIG_KVM_ARM_HOST) += hyp.o hyp-init.o handle_exit.o idmap.o
+obj-$(CONFIG_KVM_ARM_HOST) += guest.o reset.o sys_regs.o sys_regs_a57.o
+
+obj-$(CONFIG_KVM_ARM_VGIC) += $(addprefix ../../../arch/arm/kvm/, vgic.o)
+obj-$(CONFIG_KVM_ARM_TIMER) += $(addprefix ../../../arch/arm/kvm/, arch_timer.o)
--
1.7.12.4
* [PATCH 22/29] arm64: KVM: define 32bit specific registers
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
0 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Define the 32bit specific registers (SPSRs, cp15...).
Most CPU registers are directly mapped to a 64bit register
(r0->x0...). Only the SPSRs have separate registers.
cp15 registers are also mapped into their 64bit counterpart in most
cases.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/include/asm/kvm_asm.h | 38 +++++++++++++++++++++++++++++++++++++-
arch/arm64/include/asm/kvm_host.h | 5 ++++-
arch/arm64/include/uapi/asm/kvm.h | 7 ++++++-
3 files changed, 47 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 851fee5..3f4e6e1 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -42,7 +42,43 @@
#define TPIDR_EL1 18 /* Thread ID, Privileged */
#define AMAIR_EL1 19 /* Aux Memory Attribute Indirection Register */
#define CNTKCTL_EL1 20 /* Timer Control Register (EL1) */
-#define NR_SYS_REGS 21
+/* 32bit specific registers. Keep them at the end of the range */
+#define DACR32_EL2 21 /* Domain Access Control Register */
+#define IFSR32_EL2 22 /* Instruction Fault Status Register */
+#define FPEXC32_EL2 23 /* Floating-Point Exception Control Register */
+#define DBGVCR32_EL2 24 /* Debug Vector Catch Register */
+#define TEECR32_EL1 25 /* ThumbEE Configuration Register */
+#define TEEHBR32_EL1 26 /* ThumbEE Handler Base Register */
+#define NR_SYS_REGS 27
+
+/* 32bit mapping */
+#define c0_MPIDR (MPIDR_EL1 * 2) /* MultiProcessor ID Register */
+#define c0_CSSELR (CSSELR_EL1 * 2)/* Cache Size Selection Register */
+#define c1_SCTLR (SCTLR_EL1 * 2) /* System Control Register */
+#define c1_ACTLR (ACTLR_EL1 * 2) /* Auxiliary Control Register */
+#define c1_CPACR (CPACR_EL1 * 2) /* Coprocessor Access Control */
+#define c2_TTBR0 (TTBR0_EL1 * 2) /* Translation Table Base Register 0 */
+#define c2_TTBR0_high (c2_TTBR0 + 1) /* TTBR0 top 32 bits */
+#define c2_TTBR1 (TTBR1_EL1 * 2) /* Translation Table Base Register 1 */
+#define c2_TTBR1_high (c2_TTBR1 + 1) /* TTBR1 top 32 bits */
+#define c2_TTBCR (TCR_EL1 * 2) /* Translation Table Base Control R. */
+#define c3_DACR (DACR32_EL2 * 2)/* Domain Access Control Register */
+#define c5_DFSR (ESR_EL1 * 2) /* Data Fault Status Register */
+#define c5_IFSR (IFSR32_EL2 * 2)/* Instruction Fault Status Register */
+#define c5_ADFSR (AFSR0_EL1 * 2) /* Auxiliary Data Fault Status R */
+#define c5_AIFSR (AFSR1_EL1 * 2) /* Auxiliary Instr Fault Status R */
+#define c6_DFAR (FAR_EL1 * 2) /* Data Fault Address Register */
+#define c6_IFAR (c6_DFAR + 1) /* Instruction Fault Address Register */
+#define c10_PRRR (MAIR_EL1 * 2) /* Primary Region Remap Register */
+#define c10_NMRR (c10_PRRR + 1) /* Normal Memory Remap Register */
+#define c12_VBAR (VBAR_EL1 * 2) /* Vector Base Address Register */
+#define c13_CID (CONTEXTIDR_EL1 * 2) /* Context ID Register */
+#define c13_TID_URW (TPIDR_EL0 * 2) /* Thread ID, User R/W */
+#define c13_TID_URO (TPIDRRO_EL0 * 2)/* Thread ID, User R/O */
+#define c13_TID_PRIV (TPIDR_EL1 * 2) /* Thread ID, Privileged */
+#define c10_AMAIR (AMAIR_EL1 * 2) /* Aux Memory Attr Indirection Reg */
+#define c14_CNTKCTL (CNTKCTL_EL1 * 2) /* Timer Control Register (PL1) */
+#define NR_CP15_REGS (NR_SYS_REGS * 2)
#define ARM_EXCEPTION_IRQ 0
#define ARM_EXCEPTION_TRAP 1
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 68558ac..24dc8d7 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -85,7 +85,10 @@ typedef struct user_fpsimd_state kvm_kernel_vfp_t;
struct kvm_vcpu_arch {
struct kvm_regs regs;
- u64 sys_regs[NR_SYS_REGS];
+ union {
+ u64 sys_regs[NR_SYS_REGS];
+ u32 cp15[NR_CP15_REGS];
+ };
/* HYP configuration */
u64 hcr_el2;
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 24c8318..f9c269e 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -23,7 +23,12 @@
#define __ARM_KVM_H__
#define KVM_SPSR_EL1 0
-#define KVM_NR_SPSR 1
+#define KVM_SPSR_SVC KVM_SPSR_EL1
+#define KVM_SPSR_ABT 1
+#define KVM_SPSR_UND 2
+#define KVM_SPSR_IRQ 3
+#define KVM_SPSR_FIQ 4
+#define KVM_NR_SPSR 5
#ifndef __ASSEMBLY__
#include <asm/types.h>
--
1.7.12.4
* [PATCH 23/29] arm64: KVM: 32bit GP register access
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
0 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Allow access to the 32bit register file through the usual API.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/include/asm/kvm_emulate.h | 17 +++-
arch/arm64/kvm/Makefile | 2 +-
arch/arm64/kvm/regmap.c | 168 +++++++++++++++++++++++++++++++++++
3 files changed, 184 insertions(+), 3 deletions(-)
create mode 100644 arch/arm64/kvm/regmap.c
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 16a343b..2e72a4f 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -28,6 +28,9 @@
#include <asm/kvm_mmio.h>
#include <asm/ptrace.h>
+unsigned long *vcpu_reg32(struct kvm_vcpu *vcpu, u8 reg_num);
+unsigned long *vcpu_spsr32(struct kvm_vcpu *vcpu);
+
void kvm_inject_undefined(struct kvm_vcpu *vcpu);
void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
@@ -44,7 +47,7 @@ static inline unsigned long *vcpu_cpsr(struct kvm_vcpu *vcpu)
static inline bool vcpu_mode_is_32bit(struct kvm_vcpu *vcpu)
{
- return false; /* 32bit? Bahhh... */
+ return !!(*vcpu_cpsr(vcpu) & PSR_MODE32_BIT);
}
static inline bool kvm_condition_valid(struct kvm_vcpu *vcpu)
@@ -59,10 +62,14 @@ static inline void kvm_skip_instr(struct kvm_vcpu *vcpu, bool is_wide_instr)
static inline void vcpu_set_thumb(struct kvm_vcpu *vcpu)
{
+ *vcpu_cpsr(vcpu) |= COMPAT_PSR_T_BIT;
}
static inline unsigned long *vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num)
{
+ if (vcpu_mode_is_32bit(vcpu))
+ return vcpu_reg32(vcpu, reg_num);
+
return (unsigned long *)&vcpu->arch.regs.regs.regs[reg_num];
}
@@ -70,18 +77,24 @@ static inline unsigned long *vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num)
/* Get vcpu SPSR for current mode */
static inline unsigned long *vcpu_spsr(struct kvm_vcpu *vcpu)
{
+ if (vcpu_mode_is_32bit(vcpu))
+ return vcpu_spsr32(vcpu);
+
return &vcpu->arch.regs.spsr[KVM_SPSR_EL1];
}
static inline bool kvm_vcpu_reg_is_pc(struct kvm_vcpu *vcpu, int reg)
{
- return false;
+ return (vcpu_mode_is_32bit(vcpu)) && reg == 15;
}
static inline bool vcpu_mode_priv(struct kvm_vcpu *vcpu)
{
u32 mode = *vcpu_cpsr(vcpu) & PSR_MODE_MASK;
+ if (vcpu_mode_is_32bit(vcpu))
+ return mode > COMPAT_PSR_MODE_USR;
+
return mode != PSR_MODE_EL0t;
}
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 14ba38d..50f9da0 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -10,7 +10,7 @@ obj-$(CONFIG_KVM_ARM_HOST) += $(addprefix ../../../virt/kvm/, kvm_main.o coalesc
obj-$(CONFIG_KVM_ARM_HOST) += $(addprefix ../../../arch/arm/kvm/, arm.o mmu.o mmio.o psci.o perf.o)
-obj-$(CONFIG_KVM_ARM_HOST) += inject_fault.o
+obj-$(CONFIG_KVM_ARM_HOST) += inject_fault.o regmap.o
obj-$(CONFIG_KVM_ARM_HOST) += hyp.o hyp-init.o handle_exit.o idmap.o
obj-$(CONFIG_KVM_ARM_HOST) += guest.o reset.o sys_regs.o sys_regs_a57.o
diff --git a/arch/arm64/kvm/regmap.c b/arch/arm64/kvm/regmap.c
new file mode 100644
index 0000000..f8d4a0c
--- /dev/null
+++ b/arch/arm64/kvm/regmap.c
@@ -0,0 +1,168 @@
+/*
+ * Copyright (C) 2012 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * Derived from arch/arm/kvm/emulate.c:
+ * Copyright (C) 2012 - Virtual Open Systems and Columbia University
+ * Author: Christoffer Dall <c.dall@virtualopensystems.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/mm.h>
+#include <linux/kvm_host.h>
+#include <asm/kvm_emulate.h>
+#include <asm/ptrace.h>
+
+#define VCPU_NR_MODES 6
+#define REG_OFFSET(_reg) \
+ (offsetof(struct user_pt_regs, _reg) / sizeof(unsigned long))
+
+#define USR_REG_OFFSET(R) REG_OFFSET(compat_usr(R))
+
+static const unsigned long vcpu_reg_offsets[VCPU_NR_MODES][16] = {
+ /* USR Registers */
+ {
+ USR_REG_OFFSET(0), USR_REG_OFFSET(1), USR_REG_OFFSET(2),
+ USR_REG_OFFSET(3), USR_REG_OFFSET(4), USR_REG_OFFSET(5),
+ USR_REG_OFFSET(6), USR_REG_OFFSET(7), USR_REG_OFFSET(8),
+ USR_REG_OFFSET(9), USR_REG_OFFSET(10), USR_REG_OFFSET(11),
+ USR_REG_OFFSET(12), USR_REG_OFFSET(13), USR_REG_OFFSET(14),
+ REG_OFFSET(pc)
+ },
+
+ /* FIQ Registers */
+ {
+ USR_REG_OFFSET(0), USR_REG_OFFSET(1), USR_REG_OFFSET(2),
+ USR_REG_OFFSET(3), USR_REG_OFFSET(4), USR_REG_OFFSET(5),
+ USR_REG_OFFSET(6), USR_REG_OFFSET(7),
+ REG_OFFSET(compat_r8_fiq), /* r8 */
+ REG_OFFSET(compat_r9_fiq), /* r9 */
+ REG_OFFSET(compat_r10_fiq), /* r10 */
+ REG_OFFSET(compat_r11_fiq), /* r11 */
+ REG_OFFSET(compat_r12_fiq), /* r12 */
+ REG_OFFSET(compat_sp_fiq), /* r13 */
+ REG_OFFSET(compat_lr_fiq), /* r14 */
+ REG_OFFSET(pc)
+ },
+
+ /* IRQ Registers */
+ {
+ USR_REG_OFFSET(0), USR_REG_OFFSET(1), USR_REG_OFFSET(2),
+ USR_REG_OFFSET(3), USR_REG_OFFSET(4), USR_REG_OFFSET(5),
+ USR_REG_OFFSET(6), USR_REG_OFFSET(7), USR_REG_OFFSET(8),
+ USR_REG_OFFSET(9), USR_REG_OFFSET(10), USR_REG_OFFSET(11),
+ USR_REG_OFFSET(12),
+ REG_OFFSET(compat_sp_irq), /* r13 */
+ REG_OFFSET(compat_lr_irq), /* r14 */
+ REG_OFFSET(pc)
+ },
+
+ /* SVC Registers */
+ {
+ USR_REG_OFFSET(0), USR_REG_OFFSET(1), USR_REG_OFFSET(2),
+ USR_REG_OFFSET(3), USR_REG_OFFSET(4), USR_REG_OFFSET(5),
+ USR_REG_OFFSET(6), USR_REG_OFFSET(7), USR_REG_OFFSET(8),
+ USR_REG_OFFSET(9), USR_REG_OFFSET(10), USR_REG_OFFSET(11),
+ USR_REG_OFFSET(12),
+ REG_OFFSET(compat_sp_svc), /* r13 */
+ REG_OFFSET(compat_lr_svc), /* r14 */
+ REG_OFFSET(pc)
+ },
+
+ /* ABT Registers */
+ {
+ USR_REG_OFFSET(0), USR_REG_OFFSET(1), USR_REG_OFFSET(2),
+ USR_REG_OFFSET(3), USR_REG_OFFSET(4), USR_REG_OFFSET(5),
+ USR_REG_OFFSET(6), USR_REG_OFFSET(7), USR_REG_OFFSET(8),
+ USR_REG_OFFSET(9), USR_REG_OFFSET(10), USR_REG_OFFSET(11),
+ USR_REG_OFFSET(12),
+ REG_OFFSET(compat_sp_abt), /* r13 */
+ REG_OFFSET(compat_lr_abt), /* r14 */
+ REG_OFFSET(pc)
+ },
+
+ /* UND Registers */
+ {
+ USR_REG_OFFSET(0), USR_REG_OFFSET(1), USR_REG_OFFSET(2),
+ USR_REG_OFFSET(3), USR_REG_OFFSET(4), USR_REG_OFFSET(5),
+ USR_REG_OFFSET(6), USR_REG_OFFSET(7), USR_REG_OFFSET(8),
+ USR_REG_OFFSET(9), USR_REG_OFFSET(10), USR_REG_OFFSET(11),
+ USR_REG_OFFSET(12),
+ REG_OFFSET(compat_sp_und), /* r13 */
+ REG_OFFSET(compat_lr_und), /* r14 */
+ REG_OFFSET(pc)
+ },
+};
+
+/*
+ * Return a pointer to the register number valid in the current mode of
+ * the virtual CPU.
+ */
+unsigned long *vcpu_reg32(struct kvm_vcpu *vcpu, u8 reg_num)
+{
+ unsigned long *reg_array = (unsigned long *)&vcpu->arch.regs.regs;
+ unsigned long mode = *vcpu_cpsr(vcpu) & COMPAT_PSR_MODE_MASK;
+
+ switch (mode) {
+ case COMPAT_PSR_MODE_USR ... COMPAT_PSR_MODE_SVC:
+ mode &= ~PSR_MODE32_BIT; /* 0 ... 3 */
+ break;
+
+ case COMPAT_PSR_MODE_ABT:
+ mode = 4;
+ break;
+
+ case COMPAT_PSR_MODE_UND:
+ mode = 5;
+ break;
+
+ case COMPAT_PSR_MODE_SYS:
+ mode = 0; /* SYS maps to USR */
+ break;
+
+ default:
+ BUG();
+ }
+
+ return reg_array + vcpu_reg_offsets[mode][reg_num];
+}
+
+/*
+ * Return the SPSR for the current mode of the virtual CPU.
+ */
+unsigned long *vcpu_spsr32(struct kvm_vcpu *vcpu)
+{
+ unsigned long mode = *vcpu_cpsr(vcpu) & COMPAT_PSR_MODE_MASK;
+ switch (mode) {
+ case COMPAT_PSR_MODE_SVC:
+ mode = KVM_SPSR_SVC;
+ break;
+ case COMPAT_PSR_MODE_ABT:
+ mode = KVM_SPSR_ABT;
+ break;
+ case COMPAT_PSR_MODE_UND:
+ mode = KVM_SPSR_UND;
+ break;
+ case COMPAT_PSR_MODE_IRQ:
+ mode = KVM_SPSR_IRQ;
+ break;
+ case COMPAT_PSR_MODE_FIQ:
+ mode = KVM_SPSR_FIQ;
+ break;
+ default:
+ BUG();
+ }
+
+ return &vcpu->arch.regs.spsr[mode];
+}
--
1.7.12.4
* [PATCH 24/29] arm64: KVM: 32bit conditional execution emulation
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
0 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
As conditional instructions can trap on AArch32, add the thinnest
possible emulation layer to keep 32bit guests happy.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/include/asm/kvm_emulate.h | 13 ++-
arch/arm64/kvm/Makefile | 2 +-
arch/arm64/kvm/emulate.c | 154 +++++++++++++++++++++++++++++++++++
3 files changed, 166 insertions(+), 3 deletions(-)
create mode 100644 arch/arm64/kvm/emulate.c
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 2e72a4f..4d5e0ee 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -31,6 +31,9 @@
unsigned long *vcpu_reg32(struct kvm_vcpu *vcpu, u8 reg_num);
unsigned long *vcpu_spsr32(struct kvm_vcpu *vcpu);
+bool kvm_condition_valid32(struct kvm_vcpu *vcpu);
+void kvm_skip_instr32(struct kvm_vcpu *vcpu, bool is_wide_instr);
+
void kvm_inject_undefined(struct kvm_vcpu *vcpu);
void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
@@ -52,12 +55,18 @@ static inline bool vcpu_mode_is_32bit(struct kvm_vcpu *vcpu)
static inline bool kvm_condition_valid(struct kvm_vcpu *vcpu)
{
- return true; /* No conditionals on arm64 */
+ if (vcpu_mode_is_32bit(vcpu))
+ return kvm_condition_valid32(vcpu);
+
+ return true;
}
static inline void kvm_skip_instr(struct kvm_vcpu *vcpu, bool is_wide_instr)
{
- *vcpu_pc(vcpu) += 4;
+ if (vcpu_mode_is_32bit(vcpu))
+ kvm_skip_instr32(vcpu, is_wide_instr);
+ else
+ *vcpu_pc(vcpu) += 4;
}
static inline void vcpu_set_thumb(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 50f9da0..a6ba0d8 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -10,7 +10,7 @@ obj-$(CONFIG_KVM_ARM_HOST) += $(addprefix ../../../virt/kvm/, kvm_main.o coalesc
obj-$(CONFIG_KVM_ARM_HOST) += $(addprefix ../../../arch/arm/kvm/, arm.o mmu.o mmio.o psci.o perf.o)
-obj-$(CONFIG_KVM_ARM_HOST) += inject_fault.o regmap.o
+obj-$(CONFIG_KVM_ARM_HOST) += emulate.o inject_fault.o regmap.o
obj-$(CONFIG_KVM_ARM_HOST) += hyp.o hyp-init.o handle_exit.o idmap.o
obj-$(CONFIG_KVM_ARM_HOST) += guest.o reset.o sys_regs.o sys_regs_a57.o
diff --git a/arch/arm64/kvm/emulate.c b/arch/arm64/kvm/emulate.c
new file mode 100644
index 0000000..6b3dbc3
--- /dev/null
+++ b/arch/arm64/kvm/emulate.c
@@ -0,0 +1,154 @@
+/*
+ * (not much of an) Emulation layer for 32bit guests.
+ *
+ * Copyright (C) 2012 - Virtual Open Systems and Columbia University
+ * Author: Christoffer Dall <c.dall@virtualopensystems.com>
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/kvm_host.h>
+#include <asm/kvm_emulate.h>
+
+/*
+ * stolen from arch/arm/kernel/opcodes.c
+ *
+ * condition code lookup table
+ * index into the table is test code: EQ, NE, ... LT, GT, AL, NV
+ *
+ * bit position in short is condition code: NZCV
+ */
+static const unsigned short cc_map[16] = {
+ 0xF0F0, /* EQ == Z set */
+ 0x0F0F, /* NE */
+ 0xCCCC, /* CS == C set */
+ 0x3333, /* CC */
+ 0xFF00, /* MI == N set */
+ 0x00FF, /* PL */
+ 0xAAAA, /* VS == V set */
+ 0x5555, /* VC */
+ 0x0C0C, /* HI == C set && Z clear */
+ 0xF3F3, /* LS == C clear || Z set */
+ 0xAA55, /* GE == (N==V) */
+ 0x55AA, /* LT == (N!=V) */
+ 0x0A05, /* GT == (!Z && (N==V)) */
+ 0xF5FA, /* LE == (Z || (N!=V)) */
+ 0xFFFF, /* AL always */
+ 0 /* NV */
+};
+
+static int kvm_vcpu_get_condition(struct kvm_vcpu *vcpu)
+{
+ u32 esr = kvm_vcpu_get_hsr(vcpu);
+
+ if (esr & ESR_EL2_CV)
+ return (esr & ESR_EL2_COND) >> ESR_EL2_COND_SHIFT;
+
+ return -1;
+}
+
+/*
+ * Check if a trapped instruction should have been executed or not.
+ */
+bool kvm_condition_valid32(struct kvm_vcpu *vcpu)
+{
+ unsigned long cpsr;
+ u32 cpsr_cond;
+ int cond;
+
+ /* Top two bits non-zero? Unconditional. */
+ if (kvm_vcpu_get_hsr(vcpu) >> 30)
+ return true;
+
+ /* Is condition field valid? */
+ cond = kvm_vcpu_get_condition(vcpu);
+ if (cond == 0xE)
+ return true;
+
+ cpsr = *vcpu_cpsr(vcpu);
+
+ if (cond < 0) {
+ /* This can happen in Thumb mode: examine IT state. */
+ unsigned long it;
+
+ it = ((cpsr >> 8) & 0xFC) | ((cpsr >> 25) & 0x3);
+
+ /* it == 0 => unconditional. */
+ if (it == 0)
+ return true;
+
+ /* The cond for this insn works out as the top 4 bits. */
+ cond = (it >> 4);
+ }
+
+ cpsr_cond = cpsr >> 28;
+
+ if (!((cc_map[cond] >> cpsr_cond) & 1))
+ return false;
+
+ return true;
+}
+
+/**
+ * adjust_itstate - adjust ITSTATE when emulating instructions in IT-block
+ * @vcpu: The VCPU pointer
+ *
+ * When exceptions occur while instructions are executed in Thumb IF-THEN
+ * blocks, the ITSTATE field of the CPSR is not advanced (updated), so we have
+ * to do this little bit of work manually. The fields map like this:
+ *
+ * IT[7:0] -> CPSR[26:25],CPSR[15:10]
+ */
+static void kvm_adjust_itstate(struct kvm_vcpu *vcpu)
+{
+ unsigned long itbits, cond;
+ unsigned long cpsr = *vcpu_cpsr(vcpu);
+ bool is_arm = !(cpsr & COMPAT_PSR_T_BIT);
+
+ BUG_ON(is_arm && (cpsr & COMPAT_PSR_IT_MASK));
+
+ if (!(cpsr & COMPAT_PSR_IT_MASK))
+ return;
+
+ cond = (cpsr & 0xe000) >> 13;
+ itbits = (cpsr & 0x1c00) >> (10 - 2);
+ itbits |= (cpsr & (0x3 << 25)) >> 25;
+
+ /* Perform ITAdvance (see page A-52 in ARM DDI 0406C) */
+ if ((itbits & 0x7) == 0)
+ itbits = cond = 0;
+ else
+ itbits = (itbits << 1) & 0x1f;
+
+ cpsr &= ~COMPAT_PSR_IT_MASK;
+ cpsr |= cond << 13;
+ cpsr |= (itbits & 0x1c) << (10 - 2);
+ cpsr |= (itbits & 0x3) << 25;
+ *vcpu_cpsr(vcpu) = cpsr;
+}
+
+/**
+ * kvm_skip_instr - skip a trapped instruction and proceed to the next
+ * @vcpu: The vcpu pointer
+ */
+void kvm_skip_instr32(struct kvm_vcpu *vcpu, bool is_wide_instr)
+{
+ bool is_thumb;
+
+ is_thumb = !!(*vcpu_cpsr(vcpu) & COMPAT_PSR_T_BIT);
+ if (is_thumb && !is_wide_instr)
+ *vcpu_pc(vcpu) += 2;
+ else
+ *vcpu_pc(vcpu) += 4;
+ kvm_adjust_itstate(vcpu);
+}
--
1.7.12.4
* [PATCH 25/29] arm64: KVM: 32bit handling of coprocessor traps
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
1 sibling, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Provide the necessary infrastructure to trap coprocessor accesses that
occur when running 32bit guests.
Also wire up SMC and HVC trapped in 32bit mode while we're at it.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/include/asm/kvm_coproc.h | 5 ++
arch/arm64/kvm/handle_exit.c | 7 ++
arch/arm64/kvm/sys_regs.c | 165 ++++++++++++++++++++++++++++++++++--
3 files changed, 170 insertions(+), 7 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_coproc.h b/arch/arm64/include/asm/kvm_coproc.h
index e791894..0378be9 100644
--- a/arch/arm64/include/asm/kvm_coproc.h
+++ b/arch/arm64/include/asm/kvm_coproc.h
@@ -33,10 +33,15 @@ struct kvm_sys_reg_table {
struct kvm_sys_reg_target_table {
unsigned target;
struct kvm_sys_reg_table table64;
+ struct kvm_sys_reg_table table32;
};
void kvm_register_target_sys_reg_table(struct kvm_sys_reg_target_table *table);
+int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int kvm_handle_cp14_access(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run);
int kvm_handle_sys_reg(struct kvm_vcpu *vcpu, struct kvm_run *run);
#define kvm_coproc_table_init kvm_sys_reg_table_init
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index fa38230..3e61dcb 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -62,6 +62,13 @@ static int kvm_handle_wfi(struct kvm_vcpu *vcpu, struct kvm_run *run)
static exit_handle_fn arm_exit_handlers[] = {
[ESR_EL2_EC_WFI] = kvm_handle_wfi,
+ [ESR_EL2_EC_CP15_32] = kvm_handle_cp15_32,
+ [ESR_EL2_EC_CP15_64] = kvm_handle_cp15_64,
+ [ESR_EL2_EC_CP14_MR] = kvm_handle_cp14_access,
+ [ESR_EL2_EC_CP14_LS] = kvm_handle_cp14_load_store,
+ [ESR_EL2_EC_CP14_64] = kvm_handle_cp14_access,
+ [ESR_EL2_EC_HVC32] = handle_hvc,
+ [ESR_EL2_EC_SMC32] = handle_smc,
[ESR_EL2_EC_HVC64] = handle_hvc,
[ESR_EL2_EC_SMC64] = handle_smc,
[ESR_EL2_EC_SYS64] = kvm_handle_sys_reg,
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 9fc8c17..1b1cb21 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -38,6 +38,10 @@
* types are different. My gut feeling is that it should be pretty
* easy to merge, but that would be an ABI breakage -- again. VFP
* would also need to be abstracted.
+ *
+ * For AArch32, we only take care of what is being trapped. Anything
+ * that has to do with init and userspace access has to go via the
+ * 64bit interface.
*/
/* 3 bits per cache level, as per CLIDR, but non-existent caches always 0 */
@@ -256,6 +260,36 @@ static const struct sys_reg_desc sys_reg_descs[] = {
/* TPIDRRO_EL0 */
{ Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b011),
NULL, reset_unknown, TPIDRRO_EL0 },
+
+ /* DACR32_EL2 */
+ { Op0(0b11), Op1(0b100), CRn(0b0011), CRm(0b0000), Op2(0b000),
+ NULL, reset_unknown, DACR32_EL2 },
+ /* IFSR32_EL2 */
+ { Op0(0b11), Op1(0b100), CRn(0b0101), CRm(0b0000), Op2(0b001),
+ NULL, reset_unknown, IFSR32_EL2 },
+};
+
+/* Trapped cp15 registers */
+static const struct sys_reg_desc cp15_regs[] = {
+ /*
+ * DC{C,I,CI}SW operations:
+ */
+ { Op1( 0), CRn( 7), CRm( 6), Op2( 2), access_dcsw },
+ { Op1( 0), CRn( 7), CRm(10), Op2( 2), access_dcsw },
+ { Op1( 0), CRn( 7), CRm(14), Op2( 2), access_dcsw },
+ { Op1( 0), CRn( 9), CRm(12), Op2( 0), pm_fake },
+ { Op1( 0), CRn( 9), CRm(12), Op2( 1), pm_fake },
+ { Op1( 0), CRn( 9), CRm(12), Op2( 2), pm_fake },
+ { Op1( 0), CRn( 9), CRm(12), Op2( 3), pm_fake },
+ { Op1( 0), CRn( 9), CRm(12), Op2( 5), pm_fake },
+ { Op1( 0), CRn( 9), CRm(12), Op2( 6), pm_fake },
+ { Op1( 0), CRn( 9), CRm(12), Op2( 7), pm_fake },
+ { Op1( 0), CRn( 9), CRm(13), Op2( 0), pm_fake },
+ { Op1( 0), CRn( 9), CRm(13), Op2( 1), pm_fake },
+ { Op1( 0), CRn( 9), CRm(13), Op2( 2), pm_fake },
+ { Op1( 0), CRn( 9), CRm(14), Op2( 0), pm_fake },
+ { Op1( 0), CRn( 9), CRm(14), Op2( 1), pm_fake },
+ { Op1( 0), CRn( 9), CRm(14), Op2( 2), pm_fake },
};
/* Target specific emulation tables */
@@ -267,13 +301,20 @@ void kvm_register_target_sys_reg_table(struct kvm_sys_reg_target_table *table)
}
/* Get specific register table for this target. */
-static const struct sys_reg_desc *get_target_table(unsigned target, size_t *num)
+static const struct sys_reg_desc *get_target_table(unsigned target,
+ bool mode_is_64,
+ size_t *num)
{
struct kvm_sys_reg_target_table *table;
table = target_tables[target];
- *num = table->table64.num;
- return table->table64.table;
+ if (mode_is_64) {
+ *num = table->table64.num;
+ return table->table64.table;
+ } else {
+ *num = table->table32.num;
+ return table->table32.table;
+ }
}
static const struct sys_reg_desc *find_reg(const struct sys_reg_params *params,
@@ -301,13 +342,123 @@ static const struct sys_reg_desc *find_reg(const struct sys_reg_params *params,
return NULL;
}
+int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+ kvm_inject_undefined(vcpu);
+ return 1;
+}
+
+int kvm_handle_cp14_access(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+ kvm_inject_undefined(vcpu);
+ return 1;
+}
+
+static int emulate_cp15(struct kvm_vcpu *vcpu,
+ const struct sys_reg_params *params)
+{
+ size_t num;
+ const struct sys_reg_desc *table, *r;
+
+ table = get_target_table(vcpu->arch.target, false, &num);
+
+ /* Search target-specific then generic table. */
+ r = find_reg(params, table, num);
+ if (!r)
+ r = find_reg(params, cp15_regs, ARRAY_SIZE(cp15_regs));
+
+ if (likely(r)) {
+ /* If we don't have an accessor, we should never get here! */
+ BUG_ON(!r->access);
+
+ if (likely(r->access(vcpu, params, r))) {
+ /* Skip instruction, since it was emulated */
+ kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
+ return 1;
+ }
+ /* If access function fails, it should complain. */
+ } else {
+ kvm_err("Unsupported guest CP15 access at: %08lx\n",
+ *vcpu_pc(vcpu));
+ print_sys_reg_instr(params);
+ }
+ kvm_inject_undefined(vcpu);
+ return 1;
+}
+
+/**
+ * kvm_handle_cp15_64 -- handles a mrrc/mcrr trap on a guest CP15 access
+ * @vcpu: The VCPU pointer
+ * @run: The kvm_run struct
+ */
+int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+ struct sys_reg_params params;
+ u32 hsr = kvm_vcpu_get_hsr(vcpu);
+ int Rt2 = (hsr >> 10) & 0xf;
+ int ret;
+
+ params.CRm = (hsr >> 1) & 0xf;
+ params.Rt = (hsr >> 5) & 0xf;
+ params.is_write = ((hsr & 1) == 0);
+
+ params.Op0 = 0;
+ params.Op1 = (hsr >> 16) & 0xf;
+ params.Op2 = 0;
+ params.CRn = 0;
+
+ /*
+ * Massive hack here. Store Rt2 in the top 32bits so we only
+ * have one register to deal with. As we use the same trap
+ * backends between AArch32 and AArch64, we get away with it.
+ */
+ if (params.is_write) {
+ u64 val = *vcpu_reg(vcpu, params.Rt);
+ val &= 0xffffffff;
+ val |= *vcpu_reg(vcpu, Rt2) << 32;
+ *vcpu_reg(vcpu, params.Rt) = val;
+ }
+
+ ret = emulate_cp15(vcpu, &params);
+
+ /* Reverse hack here */
+ if (ret && !params.is_write) {
+ u64 val = *vcpu_reg(vcpu, params.Rt);
+ val >>= 32;
+ *vcpu_reg(vcpu, Rt2) = val;
+ }
+
+ return ret;
+}
+
+/**
+ * kvm_handle_cp15_32 -- handles a mrc/mcr trap on a guest CP15 access
+ * @vcpu: The VCPU pointer
+ * @run: The kvm_run struct
+ */
+int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+ struct sys_reg_params params;
+ u32 hsr = kvm_vcpu_get_hsr(vcpu);
+
+ params.CRm = (hsr >> 1) & 0xf;
+ params.Rt = (hsr >> 5) & 0xf;
+ params.is_write = ((hsr & 1) == 0);
+ params.CRn = (hsr >> 10) & 0xf;
+ params.Op0 = 0;
+ params.Op1 = (hsr >> 14) & 0x7;
+ params.Op2 = (hsr >> 17) & 0x7;
+
+ return emulate_cp15(vcpu, &params);
+}
+
static int emulate_sys_reg(struct kvm_vcpu *vcpu,
const struct sys_reg_params *params)
{
size_t num;
const struct sys_reg_desc *table, *r;
- table = get_target_table(vcpu->arch.target, &num);
+ table = get_target_table(vcpu->arch.target, true, &num);
/* Search target-specific then generic table. */
r = find_reg(params, table, num);
@@ -412,7 +563,7 @@ static const struct sys_reg_desc *index_to_sys_reg_desc(struct kvm_vcpu *vcpu,
if (!index_to_params(id, &params))
return NULL;
- table = get_target_table(vcpu->arch.target, &num);
+ table = get_target_table(vcpu->arch.target, true, &num);
r = find_reg(&params, table, num);
if (!r)
r = find_reg(&params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
@@ -835,7 +986,7 @@ static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind)
size_t num;
/* We check for duplicates here, to allow arch-specific overrides. */
- i1 = get_target_table(vcpu->arch.target, &num);
+ i1 = get_target_table(vcpu->arch.target, true, &num);
end1 = i1 + num;
i2 = sys_reg_descs;
end2 = sys_reg_descs + ARRAY_SIZE(sys_reg_descs);
@@ -953,7 +1104,7 @@ void kvm_reset_sys_regs(struct kvm_vcpu *vcpu)
/* Generic chip reset first (so target could override). */
reset_sys_reg_descs(vcpu, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
- table = get_target_table(vcpu->arch.target, &num);
+ table = get_target_table(vcpu->arch.target, true, &num);
reset_sys_reg_descs(vcpu, table, num);
for (num = 1; num < NR_SYS_REGS; num++)
--
1.7.12.4
* [PATCH 26/29] arm64: KVM: 32bit coprocessor access for Cortex-A57
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
1 sibling, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Enable handling of 32bit coprocessor traps for Cortex-A57.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/kvm/sys_regs_a57.c | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/arch/arm64/kvm/sys_regs_a57.c b/arch/arm64/kvm/sys_regs_a57.c
index dcc88fe..56c0641 100644
--- a/arch/arm64/kvm/sys_regs_a57.c
+++ b/arch/arm64/kvm/sys_regs_a57.c
@@ -59,6 +59,17 @@ static void reset_actlr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
vcpu->arch.sys_regs[ACTLR_EL1] = actlr;
}
+static bool access_ectlr(struct kvm_vcpu *vcpu,
+ const struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ if (p->is_write)
+ return ignore_write(vcpu, p);
+
+ *vcpu_reg(vcpu, p->Rt) = 0;
+ return true;
+}
+
/*
* A57-specific sys-reg registers.
* Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
@@ -74,12 +85,23 @@ static const struct sys_reg_desc a57_sys_regs[] = {
NULL, reset_val, CPACR_EL1, 0 },
};
+static const struct sys_reg_desc a57_cp15_regs[] = {
+ { Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b001), /* ACTLR */
+ access_actlr },
+ { Op1(0b001), CRn(0b0000), CRm(0b1111), Op2(0b000), /* ECTLR */
+ access_ectlr },
+};
+
static struct kvm_sys_reg_target_table a57_target_table = {
.target = KVM_ARM_TARGET_CORTEX_A57,
.table64 = {
.table = a57_sys_regs,
.num = ARRAY_SIZE(a57_sys_regs),
},
+ .table32 = {
+ .table = a57_cp15_regs,
+ .num = ARRAY_SIZE(a57_cp15_regs),
+ },
};
static int __init sys_reg_a57_init(void)
--
1.7.12.4
^ permalink raw reply related [flat|nested] 128+ messages in thread
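[Editor's note] The table comment in the patch above requires entries sorted ascending by Op0, Op1, CRn, CRm, Op2 (the 32bit cp15 table has no Op0), so the generic trap-handling code can locate a descriptor by comparing fields in that order. A minimal user-space sketch of that ordering contract and lookup — the struct and field widths here are simplified assumptions, not the kernel's actual sys_reg_desc:

```c
#include <stddef.h>

/* Hypothetical, simplified descriptor; the real sys_reg_desc has more fields. */
struct cp15_key {
	unsigned op1, crn, crm, op2;
};

/* Compare in the documented order: Op1, then CRn, then CRm, then Op2. */
static int cp15_cmp(const struct cp15_key *a, const struct cp15_key *b)
{
	if (a->op1 != b->op1) return a->op1 < b->op1 ? -1 : 1;
	if (a->crn != b->crn) return a->crn < b->crn ? -1 : 1;
	if (a->crm != b->crm) return a->crm < b->crm ? -1 : 1;
	if (a->op2 != b->op2) return a->op2 < b->op2 ? -1 : 1;
	return 0;
}

/* Returns the index of the matching entry, or -1 if the trap is unhandled. */
static int cp15_find(const struct cp15_key *table, size_t num,
		     const struct cp15_key *key)
{
	size_t i;
	for (i = 0; i < num; i++)
		if (cp15_cmp(&table[i], key) == 0)
			return (int)i;
	return -1;
}

/* Demo table mirroring the encodings of a57_cp15_regs in the patch. */
static const struct cp15_key a57_demo[2] = {
	{ 0, 1,  0, 1 },	/* ACTLR: Op1=0b000, CRn=0b0001, CRm=0b0000, Op2=0b001 */
	{ 1, 0, 15, 0 },	/* ECTLR: Op1=0b001, CRn=0b0000, CRm=0b1111, Op2=0b000 */
};
```

A table that violates the sort order would still work with this linear scan, but the ordering contract is what allows the kernel side to use a binary search instead.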
* [PATCH 27/29] arm64: KVM: 32bit specific register world switch
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
0 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Allow registers specific to 32bit guests to be saved/restored
during the world switch.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/kvm/hyp.S | 70 ++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 70 insertions(+)
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index cd7506d..1d4b0a7 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -312,6 +312,74 @@ __kvm_hyp_code_start:
load_sysregs
.endm
+.macro skip_32bit_state tmp, target
+ // Skip 32bit state if not needed
+ mrs \tmp, hcr_el2
+ tbnz \tmp, #HCR_RW_SHIFT, \target
+.endm
+
+.macro skip_tee_state tmp, target
+ // Skip ThumbEE state if not needed
+ mrs \tmp, id_pfr0_el1
+ tbz \tmp, #12, \target
+.endm
+
+.macro save_guest_32bit_state
+ skip_32bit_state x2, 1f
+
+ add x2, x0, #SPSR_OFFSET(KVM_SPSR_ABT)
+ mrs x4, spsr_abt
+ mrs x5, spsr_und
+ mrs x6, spsr_irq
+ mrs x7, spsr_fiq
+ stp x4, x5, [x2], #16
+ stp x6, x7, [x2]
+
+ add x2, x0, #SYSREG_OFFSET(DACR32_EL2)
+ mrs x4, dacr32_el2
+ mrs x5, ifsr32_el2
+ mrs x6, fpexc32_el2
+ mrs x7, dbgvcr32_el2
+ stp x4, x5, [x2], #16
+ stp x6, x7, [x2]
+
+ skip_tee_state x8, 1f
+
+ add x2, x0, #SYSREG_OFFSET(TEECR32_EL1)
+ mrs x4, teecr32_el1
+ mrs x5, teehbr32_el1
+ stp x4, x5, [x2]
+1:
+.endm
+
+.macro restore_guest_32bit_state
+ skip_32bit_state x2, 1f
+
+ add x2, x0, #SPSR_OFFSET(KVM_SPSR_ABT)
+ ldp x4, x5, [x2], #16
+ ldp x6, x7, [x2]
+ msr spsr_abt, x4
+ msr spsr_und, x5
+ msr spsr_irq, x6
+ msr spsr_fiq, x7
+
+ add x2, x0, #SYSREG_OFFSET(DACR32_EL2)
+ ldp x4, x5, [x2], #16
+ ldp x6, x7, [x2]
+ msr dacr32_el2, x4
+ msr ifsr32_el2, x5
+ msr fpexc32_el2, x6
+ msr dbgvcr32_el2, x7
+
+ skip_tee_state x8, 1f
+
+ add x2, x0, #SYSREG_OFFSET(TEECR32_EL1)
+ ldp x4, x5, [x2]
+ msr teecr32_el1, x4
+ msr teehbr32_el1, x5
+1:
+.endm
+
.macro activate_traps
ldr x2, [x0, #VCPU_IRQ_LINES]
ldr x1, [x0, #VCPU_HCR_EL2]
@@ -513,6 +581,7 @@ ENTRY(__kvm_vcpu_run)
restore_timer_state
restore_guest_sysregs
restore_guest_fpsimd
+ restore_guest_32bit_state
restore_guest_regs
// That's it, no more messing around.
@@ -523,6 +592,7 @@ __kvm_vcpu_return:
// Assume x0 is the vcpu pointer, x1 the return code
// Guest's x0-x3 are on the stack
save_guest_regs
+ save_guest_32bit_state
save_guest_fpsimd
save_guest_sysregs
save_timer_state
--
1.7.12.4
^ permalink raw reply related [flat|nested] 128+ messages in thread
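[Editor's note] The skip_32bit_state macro in the patch above tests HCR_EL2.RW to decide whether the AArch32-only registers need to be switched at all. A small C model of that gate — the bit position is assumed here (RW is the execution-state control bit, taken as bit 31):

```c
#include <stdbool.h>
#include <stdint.h>

#define HCR_RW_SHIFT 31			/* assumed position of HCR_EL2.RW */
#define HCR_RW (UINT64_C(1) << HCR_RW_SHIFT)

/*
 * Mirrors skip_32bit_state: when RW is set, the guest's EL1 runs
 * AArch64, so the 32bit-only registers (SPSR_abt/und/irq/fiq,
 * DACR32_EL2, IFSR32_EL2, ...) carry no guest state and the
 * save/restore sequence can be skipped entirely.
 */
static bool need_32bit_state(uint64_t hcr_el2)
{
	return !(hcr_el2 & HCR_RW);
}
```

This is why the world switch stays cheap for pure 64bit guests: one mrs plus one tbnz per direction, and the whole 32bit block is jumped over.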
* [PATCH 28/29] arm64: KVM: 32bit guest fault injection
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
0 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Add fault injection capability for 32bit guests.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/kvm/inject_fault.c | 79 ++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 78 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index 80b245f..85a4548 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -1,5 +1,5 @@
/*
- * Fault injection for 64bit guests.
+ * Fault injection for both 32 and 64bit guests.
*
* Copyright (C) 2012 - ARM Ltd
* Author: Marc Zyngier <marc.zyngier@arm.com>
@@ -24,6 +24,74 @@
#include <linux/kvm_host.h>
#include <asm/kvm_emulate.h>
+static void prepare_fault32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset)
+{
+ unsigned long cpsr;
+ unsigned long new_spsr_value = *vcpu_cpsr(vcpu);
+ bool is_thumb = (new_spsr_value & COMPAT_PSR_T_BIT);
+ u32 return_offset = (is_thumb) ? 4 : 0;
+ u32 sctlr = vcpu->arch.cp15[c1_SCTLR];
+
+ cpsr = mode | COMPAT_PSR_I_BIT;
+
+ if (sctlr & (1 << 30))
+ cpsr |= COMPAT_PSR_T_BIT;
+ if (sctlr & (1 << 25))
+ cpsr |= COMPAT_PSR_E_BIT;
+
+ *vcpu_cpsr(vcpu) = cpsr;
+
+ /* Note: These now point to the banked copies */
+ *vcpu_spsr(vcpu) = new_spsr_value;
+ *vcpu_reg(vcpu, 14) = *vcpu_pc(vcpu) + return_offset;
+
+ /* Branch to exception vector */
+ if (sctlr & (1 << 13))
+ vect_offset += 0xffff0000;
+ else /* always have security exceptions */
+ vect_offset += vcpu->arch.cp15[c12_VBAR];
+
+ *vcpu_pc(vcpu) = vect_offset;
+}
+
+static void inject_undef32(struct kvm_vcpu *vcpu)
+{
+ prepare_fault32(vcpu, COMPAT_PSR_MODE_UND, 4);
+}
+
+/*
+ * Modelled after TakeDataAbortException() and TakePrefetchAbortException
+ * pseudocode.
+ */
+static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt,
+ unsigned long addr)
+{
+ u32 vect_offset;
+ u32 *far, *fsr;
+ bool is_lpae;
+
+ if (is_pabt) {
+ vect_offset = 12;
+ far = &vcpu->arch.cp15[c6_IFAR];
+ fsr = &vcpu->arch.cp15[c5_IFSR];
+ } else { /* !iabt */
+ vect_offset = 16;
+ far = &vcpu->arch.cp15[c6_DFAR];
+ fsr = &vcpu->arch.cp15[c5_DFSR];
+ }
+
+ prepare_fault32(vcpu, COMPAT_PSR_MODE_ABT | COMPAT_PSR_A_BIT, vect_offset);
+
+ *far = addr;
+
+ /* Always give debug fault for now - should give guest a clue */
+ is_lpae = (vcpu->arch.cp15[c2_TTBCR] >> 31);
+ if (is_lpae)
+ *fsr = 1 << 9 | 0x22;
+ else
+ *fsr = 2;
+}
+
static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr)
{
unsigned long cpsr = *vcpu_cpsr(vcpu);
@@ -89,6 +157,9 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
*/
void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr)
{
+ if (!(vcpu->arch.hcr_el2 & HCR_RW))
+ inject_abt32(vcpu, false, addr);
+
inject_abt64(vcpu, false, addr);
}
@@ -102,6 +173,9 @@ void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr)
*/
void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr)
{
+ if (!(vcpu->arch.hcr_el2 & HCR_RW))
+ inject_abt32(vcpu, true, addr);
+
inject_abt64(vcpu, true, addr);
}
@@ -113,5 +187,8 @@ void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr)
*/
void kvm_inject_undefined(struct kvm_vcpu *vcpu)
{
+ if (!(vcpu->arch.hcr_el2 & HCR_RW))
+ inject_undef32(vcpu);
+
inject_undef64(vcpu);
}
--
1.7.12.4
^ permalink raw reply related [flat|nested] 128+ messages in thread
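[Editor's note] prepare_fault32() in the patch above derives the new CPSR and the exception vector address from SCTLR bits: TE (bit 30) selects Thumb exception entry, EE (bit 25) big-endian, and V (bit 13) the "hivecs" base at 0xffff0000, falling back to VBAR otherwise. A hedged C model of just that computation (constants and names are illustrative stand-ins for the kernel's COMPAT_PSR_* values; EE handling is elided):

```c
#include <stdint.h>

/* Assumed values matching the ARMv7 PSR layout used by the patch. */
#define DEMO_PSR_T_BIT (1u << 5)
#define DEMO_PSR_I_BIT (1u << 7)

struct fault32 {
	uint32_t cpsr;
	uint32_t vector;	/* address the guest PC is pointed at */
};

/*
 * Model of prepare_fault32: build the target-mode CPSR with IRQs
 * masked, honour SCTLR.TE for Thumb entry, and pick the vector base
 * from SCTLR.V (hivecs) or VBAR.
 */
static struct fault32 model_fault32(uint32_t mode, uint32_t vect_offset,
				    uint32_t sctlr, uint32_t vbar)
{
	struct fault32 f;

	f.cpsr = mode | DEMO_PSR_I_BIT;
	if (sctlr & (1u << 30))		/* SCTLR.TE: Thumb exception entry */
		f.cpsr |= DEMO_PSR_T_BIT;

	if (sctlr & (1u << 13))		/* SCTLR.V: high vectors */
		f.vector = 0xffff0000u + vect_offset;
	else
		f.vector = vbar + vect_offset;
	return f;
}
```

With vect_offset = 4 for undef and 12/16 for prefetch/data aborts, this matches the dispatch in inject_undef32()/inject_abt32() above.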
* [PATCH 29/29] arm64: KVM: enable initialization of a 32bit vcpu
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-05 3:47 ` Marc Zyngier
0 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-05 3:47 UTC (permalink / raw)
To: linux-arm-kernel, kvm, kvmarm; +Cc: catalin.marinas
Wire up the init of a 32bit vcpu by allowing 32bit modes in pstate,
and by providing sensible defaults for the reset state.
This feature is, of course, conditioned on the physical CPU
supporting 32bit at EL1.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
arch/arm64/include/asm/kvm_host.h | 2 +-
arch/arm64/include/uapi/asm/kvm.h | 1 +
arch/arm64/kvm/guest.c | 6 ++++++
arch/arm64/kvm/reset.c | 22 +++++++++++++++++++++-
4 files changed, 29 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 24dc8d7..0f17cfe 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -34,7 +34,7 @@
#include <asm/kvm_vgic.h>
#include <asm/kvm_arch_timer.h>
-#define KVM_VCPU_MAX_FEATURES 1
+#define KVM_VCPU_MAX_FEATURES 2
/* We don't currently support large pages. */
#define KVM_HPAGE_GFN_SHIFT(x) 0
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index f9c269e..813427f 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -70,6 +70,7 @@ struct kvm_regs {
#define KVM_VGIC_V2_CPU_SIZE 0x2000
#define KVM_ARM_VCPU_POWER_OFF 0 /* CPU is started in OFF state */
+#define KVM_ARM_VCPU_EL1_32BIT 1 /* CPU running a 32bit VM */
struct kvm_vcpu_init {
__u32 target;
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 2a8aaf8..123703d 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -84,6 +84,12 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
if (off == KVM_REG_ARM_CORE_REG(regs.pstate)) {
unsigned long mode = val & COMPAT_PSR_MODE_MASK;
switch (mode) {
+ case COMPAT_PSR_MODE_USR:
+ case COMPAT_PSR_MODE_FIQ:
+ case COMPAT_PSR_MODE_IRQ:
+ case COMPAT_PSR_MODE_SVC:
+ case COMPAT_PSR_MODE_ABT:
+ case COMPAT_PSR_MODE_UND:
case PSR_MODE_EL0t:
case PSR_MODE_EL1t:
case PSR_MODE_EL1h:
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 3ac2f20..411659e 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -35,6 +35,19 @@ static struct kvm_regs default_regs_reset = {
.regs.pstate = PSR_MODE_EL1h | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT,
};
+static struct kvm_regs default_regs_reset32 = {
+ .regs.pstate = (COMPAT_PSR_MODE_SVC | COMPAT_PSR_A_BIT |
+ COMPAT_PSR_I_BIT | COMPAT_PSR_F_BIT),
+};
+
+static bool cpu_has_32bit_el1(void)
+{
+ u64 pfr0;
+
+ pfr0 = read_cpuid(ID_AA64PFR0_EL1);
+ return !!(pfr0 & 0x20);
+}
+
/**
* kvm_reset_vcpu - sets core registers and sys_regs to reset value
* @vcpu: The VCPU pointer
@@ -49,7 +62,14 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
switch (vcpu->arch.target) {
default:
- cpu_reset = &default_regs_reset;
+ if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
+ if (!cpu_has_32bit_el1())
+ return -EINVAL;
+ cpu_reset = &default_regs_reset32;
+ vcpu->arch.hcr_el2 &= ~HCR_RW;
+ } else {
+ cpu_reset = &default_regs_reset;
+ }
break;
}
--
1.7.12.4
^ permalink raw reply related [flat|nested] 128+ messages in thread
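[Editor's note] The reset path in the patch above gates the 32bit feature on hardware support: cpu_has_32bit_el1() tests the AArch32-at-EL1 capability in ID_AA64PFR0_EL1 (the 0x20 mask), and a 32bit vcpu gets HCR_RW cleared. A simplified C model of that decision — the HCR_RW position and the unconditional setting of RW in the 64bit branch are assumptions for illustration, not an exact copy of the kernel flow:

```c
#include <stdbool.h>
#include <stdint.h>

#define DEMO_HCR_RW (UINT64_C(1) << 31)	/* assumed HCR_EL2.RW position */
#define DEMO_EINVAL 22

/*
 * Model of the kvm_reset_vcpu() decision: a 32bit vcpu may only be
 * created when ID_AA64PFR0_EL1 advertises AArch32 at EL1 (the 0x20
 * check from cpu_has_32bit_el1() in the patch); otherwise the ioctl
 * fails with -EINVAL.
 */
static int model_reset(bool want_32bit, uint64_t pfr0, uint64_t *hcr_el2)
{
	if (want_32bit) {
		if (!(pfr0 & 0x20))
			return -DEMO_EINVAL;
		*hcr_el2 &= ~DEMO_HCR_RW;	/* guest EL1 runs AArch32 */
	} else {
		*hcr_el2 |= DEMO_HCR_RW;	/* guest EL1 runs AArch64 */
	}
	return 0;
}
```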
* Re: [PATCH 08/29] arm64: KVM: user space interface
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-07 8:09 ` Michael S. Tsirkin
0 siblings, 0 replies; 128+ messages in thread
From: Michael S. Tsirkin @ 2013-03-07 8:09 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm, catalin.marinas
On Tue, Mar 05, 2013 at 03:47:24AM +0000, Marc Zyngier wrote:
> Provide the kvm.h file that defines the user space visible
> interface.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm64/include/uapi/asm/kvm.h | 112 ++++++++++++++++++++++++++++++++++++++
> 1 file changed, 112 insertions(+)
> create mode 100644 arch/arm64/include/uapi/asm/kvm.h
>
> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
> new file mode 100644
> index 0000000..f5525f1
> --- /dev/null
> +++ b/arch/arm64/include/uapi/asm/kvm.h
> @@ -0,0 +1,112 @@
> +/*
> + * Copyright (C) 2012 - ARM Ltd
> + * Author: Marc Zyngier <marc.zyngier@arm.com>
> + *
> + * Derived from arch/arm/include/uapi/asm/kvm.h:
> + * Copyright (C) 2012 - Virtual Open Systems and Columbia University
> + * Author: Christoffer Dall <c.dall@virtualopensystems.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program. If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef __ARM_KVM_H__
> +#define __ARM_KVM_H__
> +
> +#define KVM_SPSR_EL1 0
> +#define KVM_NR_SPSR 1
> +
> +#ifndef __ASSEMBLY__
> +#include <asm/types.h>
> +#include <asm/ptrace.h>
> +
> +#define __KVM_HAVE_GUEST_DEBUG
> +#define __KVM_HAVE_IRQ_LINE
> +
> +#define KVM_REG_SIZE(id) \
> + (1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
> +
> +struct kvm_regs {
> + struct user_pt_regs regs; /* sp = sp_el0 */
> +
> + unsigned long sp_el1;
> + unsigned long elr_el1;
> +
> + unsigned long spsr[KVM_NR_SPSR];
> +};
> +
Using long in uapi is generally a mistake: with gcc it has a
different size depending on whether you build in 64 or 32 bit mode.
I think it is better to make it __u64.
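[Editor's note] A compact sketch of the reviewer's point — kvm_regs_demo_* are hypothetical stand-ins, not the kernel's struct kvm_regs:

```c
#include <stdint.h>

/*
 * `unsigned long` is 4 bytes with -m32 and 8 bytes with -m64, so a
 * uapi struct using it changes layout between a 32bit and a 64bit
 * userspace built against the same header. A fixed-width type
 * (__u64 in uapi headers, uint64_t here) keeps the ABI stable.
 */
struct kvm_regs_demo_bad {
	unsigned long sp_el1;	/* 4 or 8 bytes, build-mode dependent */
};

struct kvm_regs_demo_good {
	uint64_t sp_el1;	/* always 8 bytes */
};
```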
> +/* Supported Processor Types */
> +#define KVM_ARM_TARGET_CORTEX_A57 0
> +#define KVM_ARM_NUM_TARGETS 1
> +
> +/* KVM_ARM_SET_DEVICE_ADDR ioctl id encoding */
> +#define KVM_ARM_DEVICE_TYPE_SHIFT 0
> +#define KVM_ARM_DEVICE_TYPE_MASK (0xffff << KVM_ARM_DEVICE_TYPE_SHIFT)
> +#define KVM_ARM_DEVICE_ID_SHIFT 16
> +#define KVM_ARM_DEVICE_ID_MASK (0xffff << KVM_ARM_DEVICE_ID_SHIFT)
> +
> +/* Supported device IDs */
> +#define KVM_ARM_DEVICE_VGIC_V2 0
> +
> +/* Supported VGIC address types */
> +#define KVM_VGIC_V2_ADDR_TYPE_DIST 0
> +#define KVM_VGIC_V2_ADDR_TYPE_CPU 1
> +
> +#define KVM_VGIC_V2_DIST_SIZE 0x1000
> +#define KVM_VGIC_V2_CPU_SIZE 0x2000
> +
> +struct kvm_vcpu_init {
> + __u32 target;
> + __u32 features[7];
> +};
> +
> +struct kvm_sregs {
> +};
> +
> +struct kvm_fpu {
> +};
> +
> +struct kvm_guest_debug_arch {
> +};
> +
> +struct kvm_debug_exit_arch {
> +};
> +
This is a problem too: structure alignment is different
in -m32 versus -m64 modes, which will affect the offset of
the following fields. I think it's best to add a "padding"
field in there, and size it to a multiple of 8 bytes.
I think the same is a good idea for the other empty structures,
since otherwise the padding is implicit and
not initialized by gcc, and it is hard not to leak
info to userspace when you copy these structures out.
And it'll be handy if you want to extend the structures
down the line.
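[Editor's note] The reviewer's two suggestions sketched in C — the struct name is hypothetical, and memset stands in for the kernel pattern of zeroing a struct before copy_to_user():

```c
#include <stdint.h>
#include <string.h>

/*
 * Give a would-be-empty uapi struct explicit padding sized to a
 * multiple of 8 bytes (so -m32 and -m64 agree on its layout), and
 * zero the whole object before handing it to userspace so implicit
 * padding never leaks kernel stack contents.
 */
struct kvm_debug_exit_arch_demo {
	uint32_t pad[2];	/* explicit padding, 8-byte multiple */
};

static void fill_for_user(struct kvm_debug_exit_arch_demo *out)
{
	memset(out, 0, sizeof(*out));	/* no uninitialized bytes escape */
	/* ...real fields, once they exist, would be filled in here... */
}
```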
> +struct kvm_sync_regs {
> +};
> +
> +struct kvm_arch_memory_slot {
> +};
> +
> +/* KVM_IRQ_LINE irq field index values */
> +#define KVM_ARM_IRQ_TYPE_SHIFT 24
> +#define KVM_ARM_IRQ_TYPE_MASK 0xff
> +#define KVM_ARM_IRQ_VCPU_SHIFT 16
> +#define KVM_ARM_IRQ_VCPU_MASK 0xff
> +#define KVM_ARM_IRQ_NUM_SHIFT 0
> +#define KVM_ARM_IRQ_NUM_MASK 0xffff
> +
> +/* irq_type field */
> +#define KVM_ARM_IRQ_TYPE_CPU 0
> +#define KVM_ARM_IRQ_TYPE_SPI 1
> +#define KVM_ARM_IRQ_TYPE_PPI 2
> +
> +/* out-of-kernel GIC cpu interrupt injection irq_number field */
> +#define KVM_ARM_IRQ_CPU_IRQ 0
> +#define KVM_ARM_IRQ_CPU_FIQ 1
> +
> +/* Highest supported SPI, from VGIC_NR_IRQS */
> +#define KVM_ARM_IRQ_GIC_MAX 127
> +
> +#endif
> +
> +#endif /* __ARM_KVM_H__ */
> --
> 1.7.12.4
>
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 128+ messages in thread
* Re: [kvmarm] [PATCH 09/29] arm64: KVM: system register handling
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-07 10:30 ` Alexander Graf
0 siblings, 0 replies; 128+ messages in thread
From: Alexander Graf @ 2013-03-07 10:30 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm, catalin.marinas
On 05.03.2013, at 04:47, Marc Zyngier wrote:
> Provide 64bit system register handling, modeled after the cp15
> handling for ARM.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm64/include/asm/kvm_coproc.h | 51 ++
> arch/arm64/include/uapi/asm/kvm.h | 56 +++
> arch/arm64/kvm/sys_regs.c | 962 ++++++++++++++++++++++++++++++++++++
> arch/arm64/kvm/sys_regs.h | 141 ++++++
> include/uapi/linux/kvm.h | 1 +
> 5 files changed, 1211 insertions(+)
> create mode 100644 arch/arm64/include/asm/kvm_coproc.h
> create mode 100644 arch/arm64/kvm/sys_regs.c
> create mode 100644 arch/arm64/kvm/sys_regs.h
>
> diff --git a/arch/arm64/include/asm/kvm_coproc.h b/arch/arm64/include/asm/kvm_coproc.h
> new file mode 100644
> index 0000000..e791894
> --- /dev/null
> +++ b/arch/arm64/include/asm/kvm_coproc.h
> @@ -0,0 +1,51 @@
> +/*
> + * Copyright (C) 2012 - ARM Ltd
> + * Author: Marc Zyngier <marc.zyngier@arm.com>
> + *
> + * Derived from arch/arm/include/asm/kvm_coproc.h
> + * Copyright (C) 2012 Rusty Russell IBM Corporation
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program. If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef __ARM64_KVM_COPROC_H__
> +#define __ARM64_KVM_COPROC_H__
> +
> +#include <linux/kvm_host.h>
> +
> +void kvm_reset_sys_regs(struct kvm_vcpu *vcpu);
> +
> +struct kvm_sys_reg_table {
> + const struct sys_reg_desc *table;
> + size_t num;
> +};
> +
> +struct kvm_sys_reg_target_table {
> + unsigned target;
> + struct kvm_sys_reg_table table64;
> +};
> +
> +void kvm_register_target_sys_reg_table(struct kvm_sys_reg_target_table *table);
> +
> +int kvm_handle_sys_reg(struct kvm_vcpu *vcpu, struct kvm_run *run);
> +
> +#define kvm_coproc_table_init kvm_sys_reg_table_init
> +void kvm_sys_reg_table_init(void);
> +
> +struct kvm_one_reg;
> +int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
> +int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *);
> +int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *);
> +unsigned long kvm_arm_num_sys_reg_descs(struct kvm_vcpu *vcpu);
> +
> +#endif /* __ARM64_KVM_COPROC_H__ */
> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
> index f5525f1..fffeb11 100644
> --- a/arch/arm64/include/uapi/asm/kvm.h
> +++ b/arch/arm64/include/uapi/asm/kvm.h
> @@ -87,6 +87,62 @@ struct kvm_sync_regs {
> struct kvm_arch_memory_slot {
> };
>
> +/* If you need to interpret the index values, here is the key: */
> +#define KVM_REG_ARM_COPROC_MASK 0x000000000FFF0000
> +#define KVM_REG_ARM_COPROC_SHIFT 16
> +#define KVM_REG_ARM_32_OPC2_MASK 0x0000000000000007
> +#define KVM_REG_ARM_32_OPC2_SHIFT 0
> +#define KVM_REG_ARM_OPC1_MASK 0x0000000000000078
> +#define KVM_REG_ARM_OPC1_SHIFT 3
> +#define KVM_REG_ARM_CRM_MASK 0x0000000000000780
> +#define KVM_REG_ARM_CRM_SHIFT 7
> +#define KVM_REG_ARM_32_CRN_MASK 0x0000000000007800
> +#define KVM_REG_ARM_32_CRN_SHIFT 11
> +
> +/* Normal registers are mapped as coprocessor 16. */
> +#define KVM_REG_ARM_CORE (0x0010 << KVM_REG_ARM_COPROC_SHIFT)
> +#define KVM_REG_ARM_CORE_REG(name) (offsetof(struct kvm_regs, name) / sizeof(unsigned long))
> +
> +/* Some registers need more space to represent values. */
> +#define KVM_REG_ARM_DEMUX (0x0011 << KVM_REG_ARM_COPROC_SHIFT)
> +#define KVM_REG_ARM_DEMUX_ID_MASK 0x000000000000FF00
> +#define KVM_REG_ARM_DEMUX_ID_SHIFT 8
> +#define KVM_REG_ARM_DEMUX_ID_CCSIDR (0x00 << KVM_REG_ARM_DEMUX_ID_SHIFT)
> +#define KVM_REG_ARM_DEMUX_VAL_MASK 0x00000000000000FF
> +#define KVM_REG_ARM_DEMUX_VAL_SHIFT 0
> +
> +/* VFP registers: we could overload CP10 like ARM does, but that's ugly. */
> +#define KVM_REG_ARM_VFP (0x0012 << KVM_REG_ARM_COPROC_SHIFT)
> +#define KVM_REG_ARM_VFP_MASK 0x000000000000FFFF
> +#define KVM_REG_ARM_VFP_BASE_REG 0x0
> +#define KVM_REG_ARM_VFP_FPSID 0x1000
> +#define KVM_REG_ARM_VFP_FPSCR 0x1001
> +#define KVM_REG_ARM_VFP_MVFR1 0x1006
> +#define KVM_REG_ARM_VFP_MVFR0 0x1007
> +#define KVM_REG_ARM_VFP_FPEXC 0x1008
> +#define KVM_REG_ARM_VFP_FPINST 0x1009
> +#define KVM_REG_ARM_VFP_FPINST2 0x100A
> +
> +/* AArch64 system registers */
> +#define KVM_REG_ARM64_SYSREG (0x0013 << KVM_REG_ARM_COPROC_SHIFT)
> +#define KVM_REG_ARM64_SYSREG_OP0_MASK 0x000000000000c000
> +#define KVM_REG_ARM64_SYSREG_OP0_SHIFT 14
> +#define KVM_REG_ARM64_SYSREG_OP1_MASK 0x0000000000003800
> +#define KVM_REG_ARM64_SYSREG_OP1_SHIFT 11
> +#define KVM_REG_ARM64_SYSREG_CRN_MASK 0x0000000000000780
> +#define KVM_REG_ARM64_SYSREG_CRN_SHIFT 7
> +#define KVM_REG_ARM64_SYSREG_CRM_MASK 0x0000000000000078
> +#define KVM_REG_ARM64_SYSREG_CRM_SHIFT 3
> +#define KVM_REG_ARM64_SYSREG_OP2_MASK 0x0000000000000007
> +#define KVM_REG_ARM64_SYSREG_OP2_SHIFT 0
> +
> +/* FP-SIMD registers */
> +#define KVM_REG_ARM64_FP_SIMD (0x0014 << KVM_REG_ARM_COPROC_SHIFT)
> +#define KVM_REG_ARM64_FP_SIMD_MASK 0x000000000000FFFF
> +#define KVM_REG_ARM64_FP_SIMD_BASE_REG 0x0
> +#define KVM_REG_ARM64_FP_SIMD_FPSR 0x1000
> +#define KVM_REG_ARM64_FP_SIMD_FPCR 0x1001
> +
> /* KVM_IRQ_LINE irq field index values */
> #define KVM_ARM_IRQ_TYPE_SHIFT 24
> #define KVM_ARM_IRQ_TYPE_MASK 0xff
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> new file mode 100644
> index 0000000..9fc8c17
> --- /dev/null
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -0,0 +1,962 @@
> +/*
> + * Copyright (C) 2012 - ARM Ltd
> + * Author: Marc Zyngier <marc.zyngier@arm.com>
> + *
> + * Derived from arch/arm/kvm/coproc.c:
> + * Copyright (C) 2012 - Virtual Open Systems and Columbia University
> + * Authors: Rusty Russell <rusty@rustcorp.com.au>
> + * Christoffer Dall <c.dall@virtualopensystems.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License, version 2, as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program. If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <linux/mm.h>
> +#include <linux/kvm_host.h>
> +#include <linux/uaccess.h>
> +#include <asm/kvm_arm.h>
> +#include <asm/kvm_host.h>
> +#include <asm/kvm_emulate.h>
> +#include <asm/kvm_coproc.h>
> +#include <asm/cacheflush.h>
> +#include <asm/cputype.h>
> +#include <trace/events/kvm.h>
> +
> +#include "sys_regs.h"
> +
> +/*
> + * All of this file is extremely similar to the ARM coproc.c, but the
> + * types are different. My gut feeling is that it should be pretty
> + * easy to merge, but that would be an ABI breakage -- again. VFP
> + * would also need to be abstracted.
> + */
> +
> +/* 3 bits per cache level, as per CLIDR, but non-existent caches always 0 */
> +static u32 cache_levels;
> +
> +/* CSSELR values; used to index KVM_REG_ARM_DEMUX_ID_CCSIDR */
> +#define CSSELR_MAX 12
> +
> +/* Which cache CCSIDR represents depends on CSSELR value. */
> +static u32 get_ccsidr(u32 csselr)
> +{
> + u32 ccsidr;
> +
> + /* Make sure no one else changes CSSELR during this! */
> + local_irq_disable();
> + /* Put value into CSSELR */
> + asm volatile("msr csselr_el1, %x0" : : "r" (csselr));
> + /* Read result out of CCSIDR */
> + asm volatile("mrs %0, ccsidr_el1" : "=r" (ccsidr));
> + local_irq_enable();
> +
> + return ccsidr;
> +}
> +
> +static void do_dc_cisw(u32 val)
> +{
> + asm volatile("dc cisw, %x0" : : "r" (val));
> +}
> +
> +static void do_dc_csw(u32 val)
> +{
> + asm volatile("dc csw, %x0" : : "r" (val));
> +}
> +
> +/* See note at ARM ARM B1.14.4 */
> +static bool access_dcsw(struct kvm_vcpu *vcpu,
> + const struct sys_reg_params *p,
> + const struct sys_reg_desc *r)
> +{
> + unsigned long val;
> + int cpu;
> +
> + cpu = get_cpu();
> +
> + if (!p->is_write)
> + return read_from_write_only(vcpu, p);
> +
> + cpumask_setall(&vcpu->arch.require_dcache_flush);
> + cpumask_clear_cpu(cpu, &vcpu->arch.require_dcache_flush);
> +
> + /* If we were already preempted, take the long way around */
> + if (cpu != vcpu->arch.last_pcpu) {
> + flush_cache_all();
> + goto done;
> + }
> +
> + val = *vcpu_reg(vcpu, p->Rt);
> +
> + switch (p->CRm) {
> + case 6: /* Upgrade DCISW to DCCISW, as per HCR.SWIO */
> + case 14: /* DCCISW */
> + do_dc_cisw(val);
> + break;
> +
> + case 10: /* DCCSW */
> + do_dc_csw(val);
> + break;
> + }
> +
> +done:
> + put_cpu();
> +
> + return true;
> +}
> +
> +/*
> + * We could trap ID_DFR0 and tell the guest we don't support performance
> + * monitoring. Unfortunately the patch to make the kernel check ID_DFR0 was
> + * NAKed, so it will read the PMCR anyway.
> + *
> + * Therefore we tell the guest we have 0 counters. Unfortunately, we
> + * must always support PMCCNTR (the cycle counter): we just RAZ/WI for
> + * all PM registers, which doesn't crash the guest kernel at least.
> + */
> +static bool pm_fake(struct kvm_vcpu *vcpu,
> + const struct sys_reg_params *p,
> + const struct sys_reg_desc *r)
> +{
> + if (p->is_write)
> + return ignore_write(vcpu, p);
> + else
> + return read_zero(vcpu, p);
> +}
> +
> +static void reset_amair_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> + u64 amair;
> +
> + asm volatile("mrs %0, amair_el1\n" : "=r" (amair));
> + vcpu->arch.sys_regs[AMAIR_EL1] = amair;
> +}
> +
> +/*
> + * Architected system registers.
> + * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
> + */
> +static const struct sys_reg_desc sys_reg_descs[] = {
> + /* DC ISW */
> + { Op0(0b01), Op1(0b000), CRn(0b0111), CRm(0b0110), Op2(0b010),
> + access_dcsw },
> + /* DC CSW */
> + { Op0(0b01), Op1(0b000), CRn(0b0111), CRm(0b1010), Op2(0b010),
> + access_dcsw },
> + /* DC CISW */
> + { Op0(0b01), Op1(0b000), CRn(0b0111), CRm(0b1110), Op2(0b010),
> + access_dcsw },
> +
> + /* TTBR0_EL1 */
> + { Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b000),
> + NULL, reset_unknown, TTBR0_EL1 },
> + /* TTBR1_EL1 */
> + { Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b001),
> + NULL, reset_unknown, TTBR1_EL1 },
> + /* TCR_EL1 */
> + { Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b010),
> + NULL, reset_val, TCR_EL1, 0 },
> +
> + /* AFSR0_EL1 */
> + { Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0001), Op2(0b000),
> + NULL, reset_unknown, AFSR0_EL1 },
> + /* AFSR1_EL1 */
> + { Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0001), Op2(0b001),
> + NULL, reset_unknown, AFSR1_EL1 },
> + /* ESR_EL1 */
> + { Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0010), Op2(0b000),
> + NULL, reset_unknown, ESR_EL1 },
> + /* FAR_EL1 */
> + { Op0(0b11), Op1(0b000), CRn(0b0110), CRm(0b0000), Op2(0b000),
> + NULL, reset_unknown, FAR_EL1 },
> +
> + /* PMINTENSET_EL1 */
> + { Op0(0b11), Op1(0b000), CRn(0b1001), CRm(0b1110), Op2(0b001),
> + pm_fake },
> + /* PMINTENCLR_EL1 */
> + { Op0(0b11), Op1(0b000), CRn(0b1001), CRm(0b1110), Op2(0b010),
> + pm_fake },
> +
> + /* MAIR_EL1 */
> + { Op0(0b11), Op1(0b000), CRn(0b1010), CRm(0b0010), Op2(0b000),
> + NULL, reset_unknown, MAIR_EL1 },
> + /* AMAIR_EL1 */
> + { Op0(0b11), Op1(0b000), CRn(0b1010), CRm(0b0011), Op2(0b000),
> + NULL, reset_amair_el1, AMAIR_EL1 },
> +
> + /* VBAR_EL1 */
> + { Op0(0b11), Op1(0b000), CRn(0b1100), CRm(0b0000), Op2(0b000),
> + NULL, reset_val, VBAR_EL1, 0 },
> + /* CONTEXTIDR_EL1 */
> + { Op0(0b11), Op1(0b000), CRn(0b1101), CRm(0b0000), Op2(0b001),
> + NULL, reset_val, CONTEXTIDR_EL1, 0 },
> + /* TPIDR_EL1 */
> + { Op0(0b11), Op1(0b000), CRn(0b1101), CRm(0b0000), Op2(0b100),
> + NULL, reset_unknown, TPIDR_EL1 },
> +
> + /* CNTKCTL_EL1 */
> + { Op0(0b11), Op1(0b000), CRn(0b1110), CRm(0b0001), Op2(0b000),
> + NULL, reset_val, CNTKCTL_EL1, 0},
> +
> + /* CSSELR_EL1 */
> + { Op0(0b11), Op1(0b010), CRn(0b0000), CRm(0b0000), Op2(0b000),
> + NULL, reset_unknown, CSSELR_EL1 },
> +
> + /* PMCR_EL0 */
> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b000),
> + pm_fake },
> + /* PMCNTENSET_EL0 */
> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b001),
> + pm_fake },
> + /* PMCNTENCLR_EL0 */
> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b010),
> + pm_fake },
> + /* PMOVSCLR_EL0 */
> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b011),
> + pm_fake },
> + /* PMSWINC_EL0 */
> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b100),
> + pm_fake },
> + /* PMSELR_EL0 */
> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b101),
> + pm_fake },
> + /* PMCEID0_EL0 */
> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b110),
> + pm_fake },
> + /* PMCEID1_EL0 */
> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b111),
> + pm_fake },
> + /* PMCCNTR_EL0 */
> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b000),
> + pm_fake },
> + /* PMXEVTYPER_EL0 */
> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b001),
> + pm_fake },
> + /* PMXEVCNTR_EL0 */
> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b010),
> + pm_fake },
> + /* PMUSERENR_EL0 */
> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b000),
> + pm_fake },
> + /* PMOVSSET_EL0 */
> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b011),
> + pm_fake },
> +
> + /* TPIDR_EL0 */
> + { Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b010),
> + NULL, reset_unknown, TPIDR_EL0 },
> + /* TPIDRRO_EL0 */
> + { Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b011),
> + NULL, reset_unknown, TPIDRRO_EL0 },
> +};
> +
> +/* Target specific emulation tables */
> +static struct kvm_sys_reg_target_table *target_tables[KVM_ARM_NUM_TARGETS];
> +
> +void kvm_register_target_sys_reg_table(struct kvm_sys_reg_target_table *table)
> +{
> + target_tables[table->target] = table;
> +}
> +
> +/* Get specific register table for this target. */
> +static const struct sys_reg_desc *get_target_table(unsigned target, size_t *num)
> +{
> + struct kvm_sys_reg_target_table *table;
> +
> + table = target_tables[target];
> + *num = table->table64.num;
> + return table->table64.table;
> +}
> +
> +static const struct sys_reg_desc *find_reg(const struct sys_reg_params *params,
> + const struct sys_reg_desc table[],
> + unsigned int num)
> +{
> + unsigned int i;
> +
> + for (i = 0; i < num; i++) {
> + const struct sys_reg_desc *r = &table[i];
> +
> + if (params->Op0 != r->Op0)
> + continue;
> + if (params->Op1 != r->Op1)
> + continue;
> + if (params->CRn != r->CRn)
> + continue;
> + if (params->CRm != r->CRm)
> + continue;
> + if (params->Op2 != r->Op2)
> + continue;
> +
> + return r;
> + }
> + return NULL;
> +}
> +
> +static int emulate_sys_reg(struct kvm_vcpu *vcpu,
> + const struct sys_reg_params *params)
> +{
> + size_t num;
> + const struct sys_reg_desc *table, *r;
> +
> + table = get_target_table(vcpu->arch.target, &num);
> +
> + /* Search target-specific then generic table. */
> + r = find_reg(params, table, num);
> + if (!r)
> + r = find_reg(params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
Searching through the whole list sounds quite slow. Especially since the TLS register is at the very bottom of it.
Can't you make this a simple switch() statement through a bit of #define and maybe #include magic? After all, the sysreg target encoding is all part of the opcode. And from my experience in the PPC instruction emulator, switch()es are _a lot_ faster than any other way of lookup I've tried.
Alex
* Re: [kvmarm] [PATCH 04/29] arm64: KVM: system register definitions for 64bit guests
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-07 10:33 ` Alexander Graf
-1 siblings, 0 replies; 128+ messages in thread
From: Alexander Graf @ 2013-03-07 10:33 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm, catalin.marinas
On 05.03.2013, at 04:47, Marc Zyngier wrote:
> Define the saved/restored registers for 64bit guests.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm64/include/asm/kvm_asm.h | 68 ++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 68 insertions(+)
> create mode 100644 arch/arm64/include/asm/kvm_asm.h
>
> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> new file mode 100644
> index 0000000..851fee5
> --- /dev/null
> +++ b/arch/arm64/include/asm/kvm_asm.h
> @@ -0,0 +1,68 @@
> +/*
> + * Copyright (C) 2012 - ARM Ltd
> + * Author: Marc Zyngier <marc.zyngier@arm.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program. If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef __ARM_KVM_ASM_H__
> +#define __ARM_KVM_ASM_H__
> +
> +/*
> + * 0 is reserved as an invalid value.
> + * Order *must* be kept in sync with the hyp switch code.
> + */
> +#define MPIDR_EL1 1 /* MultiProcessor Affinity Register */
> +#define CSSELR_EL1 2 /* Cache Size Selection Register */
> +#define SCTLR_EL1 3 /* System Control Register */
> +#define ACTLR_EL1 4 /* Auxilliary Control Register */
> +#define CPACR_EL1 5 /* Coprocessor Access Control */
> +#define TTBR0_EL1 6 /* Translation Table Base Register 0 */
> +#define TTBR1_EL1 7 /* Translation Table Base Register 1 */
> +#define TCR_EL1 8 /* Translation Control Register */
> +#define ESR_EL1 9 /* Exception Syndrome Register */
> +#define AFSR0_EL1 10 /* Auxilary Fault Status Register 0 */
> +#define AFSR1_EL1 11 /* Auxilary Fault Status Register 1 */
> +#define FAR_EL1 12 /* Fault Address Register */
> +#define MAIR_EL1 13 /* Memory Attribute Indirection Register */
> +#define VBAR_EL1 14 /* Vector Base Address Register */
> +#define CONTEXTIDR_EL1 15 /* Context ID Register */
> +#define TPIDR_EL0 16 /* Thread ID, User R/W */
> +#define TPIDRRO_EL0 17 /* Thread ID, User R/O */
> +#define TPIDR_EL1 18 /* Thread ID, Privileged */
> +#define AMAIR_EL1 19 /* Aux Memory Attribute Indirection Register */
> +#define CNTKCTL_EL1 20 /* Timer Control Register (EL1) */
> +#define NR_SYS_REGS 21
These are internal representations of the system registers. Keeping everything strictly linear is quite cumbersome; why not let the compiler do this for you?
enum kvm_sysreg_id {
MPIDR_EL1 = 1,
...
NR_SYS_REGS
}
That way gcc automatically counts the IDs up for you. You eliminate a potential source of breakage (duplicate IDs) and the code is even easier to read ;).
Alex
^ permalink raw reply [flat|nested] 128+ messages in thread
* Re: [kvmarm] [PATCH 04/29] arm64: KVM: system register definitions for 64bit guests
2013-03-07 10:33 ` Alexander Graf
@ 2013-03-08 3:23 ` Marc Zyngier
-1 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-08 3:23 UTC (permalink / raw)
To: Alexander Graf; +Cc: catalin.marinas, kvm, linux-arm-kernel, kvmarm
On Thu, 7 Mar 2013 11:33:12 +0100, Alexander Graf <agraf@suse.de> wrote:
> On 05.03.2013, at 04:47, Marc Zyngier wrote:
>
>> Define the saved/restored registers for 64bit guests.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>> arch/arm64/include/asm/kvm_asm.h | 68
>> ++++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 68 insertions(+)
>> create mode 100644 arch/arm64/include/asm/kvm_asm.h
>>
>> diff --git a/arch/arm64/include/asm/kvm_asm.h
>> b/arch/arm64/include/asm/kvm_asm.h
>> new file mode 100644
>> index 0000000..851fee5
>> --- /dev/null
>> +++ b/arch/arm64/include/asm/kvm_asm.h
>> @@ -0,0 +1,68 @@
>> +/*
>> + * Copyright (C) 2012 - ARM Ltd
>> + * Author: Marc Zyngier <marc.zyngier@arm.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program. If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#ifndef __ARM_KVM_ASM_H__
>> +#define __ARM_KVM_ASM_H__
>> +
>> +/*
>> + * 0 is reserved as an invalid value.
>> + * Order *must* be kept in sync with the hyp switch code.
>> + */
>> +#define MPIDR_EL1 1 /* MultiProcessor Affinity Register */
>> +#define CSSELR_EL1 2 /* Cache Size Selection Register */
>> +#define SCTLR_EL1 3 /* System Control Register */
>> +#define ACTLR_EL1 4 /* Auxilliary Control Register */
>> +#define CPACR_EL1 5 /* Coprocessor Access Control */
>> +#define TTBR0_EL1 6 /* Translation Table Base Register 0 */
>> +#define TTBR1_EL1 7 /* Translation Table Base Register 1 */
>> +#define TCR_EL1 8 /* Translation Control Register */
>> +#define ESR_EL1 9 /* Exception Syndrome Register */
>> +#define AFSR0_EL1 10 /* Auxilary Fault Status Register 0 */
>> +#define AFSR1_EL1 11 /* Auxilary Fault Status Register 1 */
>> +#define FAR_EL1 12 /* Fault Address Register */
>> +#define MAIR_EL1 13 /* Memory Attribute Indirection Register */
>> +#define VBAR_EL1 14 /* Vector Base Address Register */
>> +#define CONTEXTIDR_EL1 15 /* Context ID Register */
>> +#define TPIDR_EL0 16 /* Thread ID, User R/W */
>> +#define TPIDRRO_EL0 17 /* Thread ID, User R/O */
>> +#define TPIDR_EL1 18 /* Thread ID, Privileged */
>> +#define AMAIR_EL1 19 /* Aux Memory Attribute Indirection Register */
>> +#define CNTKCTL_EL1 20 /* Timer Control Register (EL1) */
>> +#define NR_SYS_REGS 21
>
> These are internal representations of the system registers. Keeping
> everything strictly linear is quite cumbersome, why not let the compiler
> do this for you?
>
> enum kvm_sysreg_id {
> MPIDR_EL1 = 1,
> ...
> NR_SYS_REGS
> }
>
> That way gcc automatically counts the IDs up for you. You eliminate a
> potential source of breakage (duplicate IDs) and the code is even easier
> to read ;).
I thought of that, but it doesn't fly because of the HYP assembly code,
which directly uses these constants.
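That said, the usual way to keep an enum while feeding the values to assembly is the asm-offsets trick; a minimal sketch of the mechanism (names here are illustrative, not the actual arm64 build glue, and the generator file is only ever compiled with -S, never linked or executed):

```c
/* The enum auto-numbers exactly like the #define list in kvm_asm.h. */
enum kvm_sysreg_id {
	INVALID_SYSREG,		/* 0 is reserved as an invalid value */
	MPIDR_EL1,		/* 1 */
	CSSELR_EL1,		/* 2 */
	SCTLR_EL1,		/* 3 */
	NR_SYS_REGS_SKETCH	/* would be 21 with the full list */
};

/* Compiling this with -S emits "->KVM_MPIDR_EL1 <value>" markers in
 * the assembly output, from which the build cuts a generated header
 * of #defines usable by the HYP assembly code. */
#define DEFINE(sym, val) \
	__asm__ volatile("\n.ascii \"->" #sym " %0\"" : : "i" (val))

void kvm_asm_offsets(void)
{
	DEFINE(KVM_MPIDR_EL1, MPIDR_EL1);
	DEFINE(KVM_CSSELR_EL1, CSSELR_EL1);
}
```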
M.
--
Fast, cheap, reliable. Pick two.
^ permalink raw reply [flat|nested] 128+ messages in thread
* Re: [kvmarm] [PATCH 09/29] arm64: KVM: system register handling
2013-03-07 10:30 ` Alexander Graf
@ 2013-03-08 3:29 ` Marc Zyngier
-1 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-08 3:29 UTC (permalink / raw)
To: Alexander Graf; +Cc: catalin.marinas, kvm, linux-arm-kernel, kvmarm
On Thu, 7 Mar 2013 11:30:20 +0100, Alexander Graf <agraf@suse.de> wrote:
> On 05.03.2013, at 04:47, Marc Zyngier wrote:
>
>> Provide 64bit system register handling, modeled after the cp15
>> handling for ARM.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>> arch/arm64/include/asm/kvm_coproc.h | 51 ++
>> arch/arm64/include/uapi/asm/kvm.h | 56 +++
>> arch/arm64/kvm/sys_regs.c | 962
>> ++++++++++++++++++++++++++++++++++++
>> arch/arm64/kvm/sys_regs.h | 141 ++++++
>> include/uapi/linux/kvm.h | 1 +
>> 5 files changed, 1211 insertions(+)
>> create mode 100644 arch/arm64/include/asm/kvm_coproc.h
>> create mode 100644 arch/arm64/kvm/sys_regs.c
>> create mode 100644 arch/arm64/kvm/sys_regs.h
>>
[..]
>> +/*
>> + * Architected system registers.
>> + * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
>> + */
>> +static const struct sys_reg_desc sys_reg_descs[] = {
>> + /* DC ISW */
>> + { Op0(0b01), Op1(0b000), CRn(0b0111), CRm(0b0110), Op2(0b010),
>> + access_dcsw },
>> + /* DC CSW */
>> + { Op0(0b01), Op1(0b000), CRn(0b0111), CRm(0b1010), Op2(0b010),
>> + access_dcsw },
>> + /* DC CISW */
>> + { Op0(0b01), Op1(0b000), CRn(0b0111), CRm(0b1110), Op2(0b010),
>> + access_dcsw },
>> +
>> + /* TTBR0_EL1 */
>> + { Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b000),
>> + NULL, reset_unknown, TTBR0_EL1 },
>> + /* TTBR1_EL1 */
>> + { Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b001),
>> + NULL, reset_unknown, TTBR1_EL1 },
>> + /* TCR_EL1 */
>> + { Op0(0b11), Op1(0b000), CRn(0b0010), CRm(0b0000), Op2(0b010),
>> + NULL, reset_val, TCR_EL1, 0 },
>> +
>> + /* AFSR0_EL1 */
>> + { Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0001), Op2(0b000),
>> + NULL, reset_unknown, AFSR0_EL1 },
>> + /* AFSR1_EL1 */
>> + { Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0001), Op2(0b001),
>> + NULL, reset_unknown, AFSR1_EL1 },
>> + /* ESR_EL1 */
>> + { Op0(0b11), Op1(0b000), CRn(0b0101), CRm(0b0010), Op2(0b000),
>> + NULL, reset_unknown, ESR_EL1 },
>> + /* FAR_EL1 */
>> + { Op0(0b11), Op1(0b000), CRn(0b0110), CRm(0b0000), Op2(0b000),
>> + NULL, reset_unknown, FAR_EL1 },
>> +
>> + /* PMINTENSET_EL1 */
>> + { Op0(0b11), Op1(0b000), CRn(0b1001), CRm(0b1110), Op2(0b001),
>> + pm_fake },
>> + /* PMINTENCLR_EL1 */
>> + { Op0(0b11), Op1(0b000), CRn(0b1001), CRm(0b1110), Op2(0b010),
>> + pm_fake },
>> +
>> + /* MAIR_EL1 */
>> + { Op0(0b11), Op1(0b000), CRn(0b1010), CRm(0b0010), Op2(0b000),
>> + NULL, reset_unknown, MAIR_EL1 },
>> + /* AMAIR_EL1 */
>> + { Op0(0b11), Op1(0b000), CRn(0b1010), CRm(0b0011), Op2(0b000),
>> + NULL, reset_amair_el1, AMAIR_EL1 },
>> +
>> + /* VBAR_EL1 */
>> + { Op0(0b11), Op1(0b000), CRn(0b1100), CRm(0b0000), Op2(0b000),
>> + NULL, reset_val, VBAR_EL1, 0 },
>> + /* CONTEXTIDR_EL1 */
>> + { Op0(0b11), Op1(0b000), CRn(0b1101), CRm(0b0000), Op2(0b001),
>> + NULL, reset_val, CONTEXTIDR_EL1, 0 },
>> + /* TPIDR_EL1 */
>> + { Op0(0b11), Op1(0b000), CRn(0b1101), CRm(0b0000), Op2(0b100),
>> + NULL, reset_unknown, TPIDR_EL1 },
>> +
>> + /* CNTKCTL_EL1 */
>> + { Op0(0b11), Op1(0b000), CRn(0b1110), CRm(0b0001), Op2(0b000),
>> + NULL, reset_val, CNTKCTL_EL1, 0},
>> +
>> + /* CSSELR_EL1 */
>> + { Op0(0b11), Op1(0b010), CRn(0b0000), CRm(0b0000), Op2(0b000),
>> + NULL, reset_unknown, CSSELR_EL1 },
>> +
>> + /* PMCR_EL0 */
>> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b000),
>> + pm_fake },
>> + /* PMCNTENSET_EL0 */
>> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b001),
>> + pm_fake },
>> + /* PMCNTENCLR_EL0 */
>> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b010),
>> + pm_fake },
>> + /* PMOVSCLR_EL0 */
>> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b011),
>> + pm_fake },
>> + /* PMSWINC_EL0 */
>> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b100),
>> + pm_fake },
>> + /* PMSELR_EL0 */
>> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b101),
>> + pm_fake },
>> + /* PMCEID0_EL0 */
>> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b110),
>> + pm_fake },
>> + /* PMCEID1_EL0 */
>> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b111),
>> + pm_fake },
>> + /* PMCCNTR_EL0 */
>> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b000),
>> + pm_fake },
>> + /* PMXEVTYPER_EL0 */
>> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b001),
>> + pm_fake },
>> + /* PMXEVCNTR_EL0 */
>> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b010),
>> + pm_fake },
>> + /* PMUSERENR_EL0 */
>> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b000),
>> + pm_fake },
>> + /* PMOVSSET_EL0 */
>> + { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b011),
>> + pm_fake },
>> +
>> + /* TPIDR_EL0 */
>> + { Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b010),
>> + NULL, reset_unknown, TPIDR_EL0 },
>> + /* TPIDRRO_EL0 */
>> + { Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b011),
>> + NULL, reset_unknown, TPIDRRO_EL0 },
>> +};
>> +
>> +/* Target specific emulation tables */
>> +static struct kvm_sys_reg_target_table
>> *target_tables[KVM_ARM_NUM_TARGETS];
>> +
>> +void kvm_register_target_sys_reg_table(struct kvm_sys_reg_target_table
>> *table)
>> +{
>> + target_tables[table->target] = table;
>> +}
>> +
>> +/* Get specific register table for this target. */
>> +static const struct sys_reg_desc *get_target_table(unsigned target,
>> size_t *num)
>> +{
>> + struct kvm_sys_reg_target_table *table;
>> +
>> + table = target_tables[target];
>> + *num = table->table64.num;
>> + return table->table64.table;
>> +}
>> +
>> +static const struct sys_reg_desc *find_reg(const struct sys_reg_params
>> *params,
>> + const struct sys_reg_desc table[],
>> + unsigned int num)
>> +{
>> + unsigned int i;
>> +
>> + for (i = 0; i < num; i++) {
>> + const struct sys_reg_desc *r = &table[i];
>> +
>> + if (params->Op0 != r->Op0)
>> + continue;
>> + if (params->Op1 != r->Op1)
>> + continue;
>> + if (params->CRn != r->CRn)
>> + continue;
>> + if (params->CRm != r->CRm)
>> + continue;
>> + if (params->Op2 != r->Op2)
>> + continue;
>> +
>> + return r;
>> + }
>> + return NULL;
>> +}
>> +
>> +static int emulate_sys_reg(struct kvm_vcpu *vcpu,
>> + const struct sys_reg_params *params)
>> +{
>> + size_t num;
>> + const struct sys_reg_desc *table, *r;
>> +
>> + table = get_target_table(vcpu->arch.target, &num);
>> +
>> + /* Search target-specific then generic table. */
>> + r = find_reg(params, table, num);
>> + if (!r)
>> + r = find_reg(params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
>
> Searching through the whole list sounds quite slow. Especially since the
> TLS register is at the very bottom of it.
Slow, yes. Though it is not as bad as it sounds, since only the first few
entries are hit on the trap path at the moment. The other entries are only
used when userspace tries to read/write a VM system register.
But overall I agree, this is not very efficient.
> Can't you make this a simple switch() statement through a bit of #define
> and maybe #include magic? After all, the sysreg target encoding is all
> part of the opcode. And from my experience in the PPC instruction emulator,
> switch()es are _a lot_ faster than any other way of lookup I've tried.
There is definitely something like this to be done, and for the 32bit part
as well.
I'll have a look.
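In the meantime, since the table comment already requires ascending (Op0, Op1, CRn, CRm, Op2) order, one middle ground short of full switch() generation would be a standard bsearch() over the descriptor array. A reduced sketch (the struct and function names are invented stand-ins, not the kernel code):

```c
#include <stdlib.h>

/* Reduced stand-in for struct sys_reg_desc: just the encoding fields. */
struct sys_reg_desc_sketch {
	unsigned char Op0, Op1, CRn, CRm, Op2;
};

/* Compare in the same (Op0, Op1, CRn, CRm, Op2) order the table is
 * sorted by; bsearch() passes the key as the first argument. */
static int cmp_reg(const void *key, const void *elt)
{
	const struct sys_reg_desc_sketch *a = key, *b = elt;

	if (a->Op0 != b->Op0) return a->Op0 - b->Op0;
	if (a->Op1 != b->Op1) return a->Op1 - b->Op1;
	if (a->CRn != b->CRn) return a->CRn - b->CRn;
	if (a->CRm != b->CRm) return a->CRm - b->CRm;
	return a->Op2 - b->Op2;
}

/* O(log n) replacement for the linear find_reg() loop. */
static const struct sys_reg_desc_sketch *
find_reg_bsearch(const struct sys_reg_desc_sketch *params,
		 const struct sys_reg_desc_sketch table[], size_t num)
{
	return bsearch(params, table, num, sizeof(table[0]), cmp_reg);
}
```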
Thanks,
M.
--
Fast, cheap, reliable. Pick two.
^ permalink raw reply [flat|nested] 128+ messages in thread
* Re: [kvmarm] [PATCH 08/29] arm64: KVM: user space interface
2013-03-07 8:09 ` Michael S. Tsirkin
@ 2013-03-08 3:46 ` Marc Zyngier
-1 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-08 3:46 UTC (permalink / raw)
To: Michael S. Tsirkin; +Cc: catalin.marinas, kvm, linux-arm-kernel, kvmarm
On Thu, 7 Mar 2013 10:09:03 +0200, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Tue, Mar 05, 2013 at 03:47:24AM +0000, Marc Zyngier wrote:
>> Provide the kvm.h file that defines the user space visible
>> interface.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>> arch/arm64/include/uapi/asm/kvm.h | 112
>> ++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 112 insertions(+)
>> create mode 100644 arch/arm64/include/uapi/asm/kvm.h
>>
>> diff --git a/arch/arm64/include/uapi/asm/kvm.h
>> b/arch/arm64/include/uapi/asm/kvm.h
>> new file mode 100644
>> index 0000000..f5525f1
>> --- /dev/null
>> +++ b/arch/arm64/include/uapi/asm/kvm.h
>> @@ -0,0 +1,112 @@
>> +/*
>> + * Copyright (C) 2012 - ARM Ltd
>> + * Author: Marc Zyngier <marc.zyngier@arm.com>
>> + *
>> + * Derived from arch/arm/include/uapi/asm/kvm.h:
>> + * Copyright (C) 2012 - Virtual Open Systems and Columbia University
>> + * Author: Christoffer Dall <c.dall@virtualopensystems.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program. If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#ifndef __ARM_KVM_H__
>> +#define __ARM_KVM_H__
>> +
>> +#define KVM_SPSR_EL1 0
>> +#define KVM_NR_SPSR 1
>> +
>> +#ifndef __ASSEMBLY__
>> +#include <asm/types.h>
>> +#include <asm/ptrace.h>
>> +
>> +#define __KVM_HAVE_GUEST_DEBUG
>> +#define __KVM_HAVE_IRQ_LINE
>> +
>> +#define KVM_REG_SIZE(id) \
>> + (1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
>> +
>> +struct kvm_regs {
>> + struct user_pt_regs regs; /* sp = sp_el0 */
>> +
>> + unsigned long sp_el1;
>> + unsigned long elr_el1;
>> +
>> + unsigned long spsr[KVM_NR_SPSR];
>> +};
>> +
>
> Using long in uapi is generally a mistake: with gcc it has a
> different size depending on whether you build in 64 or 32 bit mode.
> I think it is better to make it __u64.
Ah, well spotted. Definitely a mistake.
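For v2, the struct would become something like this (a sketch of the suggested change, using uint64_t as the userspace stand-in for __u64, and an illustrative user_pt_regs layout):

```c
#include <stdint.h>

#define KVM_NR_SPSR 1

/* Stand-in for the arm64 user_pt_regs from asm/ptrace.h:
 * 31 general-purpose registers plus sp, pc, pstate. */
struct user_pt_regs_sketch {
	uint64_t regs[31];
	uint64_t sp;
	uint64_t pc;
	uint64_t pstate;
};

/* kvm_regs with the unsigned longs replaced by fixed-width types,
 * so the layout is identical for 32bit and 64bit userspace. */
struct kvm_regs_sketch {
	struct user_pt_regs_sketch regs;	/* sp = sp_el0 */

	uint64_t sp_el1;
	uint64_t elr_el1;

	uint64_t spsr[KVM_NR_SPSR];
};
```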
>> +/* Supported Processor Types */
>> +#define KVM_ARM_TARGET_CORTEX_A57 0
>> +#define KVM_ARM_NUM_TARGETS 1
>> +
>> +/* KVM_ARM_SET_DEVICE_ADDR ioctl id encoding */
>> +#define KVM_ARM_DEVICE_TYPE_SHIFT 0
>> +#define KVM_ARM_DEVICE_TYPE_MASK (0xffff << KVM_ARM_DEVICE_TYPE_SHIFT)
>> +#define KVM_ARM_DEVICE_ID_SHIFT 16
>> +#define KVM_ARM_DEVICE_ID_MASK (0xffff << KVM_ARM_DEVICE_ID_SHIFT)
>> +
>> +/* Supported device IDs */
>> +#define KVM_ARM_DEVICE_VGIC_V2 0
>> +
>> +/* Supported VGIC address types */
>> +#define KVM_VGIC_V2_ADDR_TYPE_DIST 0
>> +#define KVM_VGIC_V2_ADDR_TYPE_CPU 1
>> +
>> +#define KVM_VGIC_V2_DIST_SIZE 0x1000
>> +#define KVM_VGIC_V2_CPU_SIZE 0x2000
>> +
>> +struct kvm_vcpu_init {
>> + __u32 target;
>> + __u32 features[7];
>> +};
>> +
>> +struct kvm_sregs {
>> +};
>> +
>> +struct kvm_fpu {
>> +};
>> +
>> +struct kvm_guest_debug_arch {
>> +};
>> +
>> +struct kvm_debug_exit_arch {
>> +};
>> +
>
> This is a problem too: structure alignment is different
> in -m32 versus -m64 modes, which will affect the offset of
> the following fields. I think it's best to add a "padding"
> field in there, and size it to multiple of 8 bytes.
Sorry, which field are you referring to?
> I think the same is a good idea for other empty stuctures,
> since otherwise the padding is implicit and
> not initialized by gcc, and it is hard not too leak
> info to userspace when you copy these structures out.
>
> And, it'll be handy if you want to extend the structures
> down the line.
Sure, what would make some sense.
Thanks,
M.
--
Fast, cheap, reliable. Pick two.
^ permalink raw reply [flat|nested] 128+ messages in thread
* [kvmarm] [PATCH 08/29] arm64: KVM: user space interface
@ 2013-03-08 3:46 ` Marc Zyngier
0 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-08 3:46 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, 7 Mar 2013 10:09:03 +0200, "Michael S. Tsirkin" <mst@redhat.com>
wrote:
> On Tue, Mar 05, 2013 at 03:47:24AM +0000, Marc Zyngier wrote:
>> Provide the kvm.h file that defines the user space visible
>> interface.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>> arch/arm64/include/uapi/asm/kvm.h | 112
>> ++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 112 insertions(+)
>> create mode 100644 arch/arm64/include/uapi/asm/kvm.h
>>
>> diff --git a/arch/arm64/include/uapi/asm/kvm.h
>> b/arch/arm64/include/uapi/asm/kvm.h
>> new file mode 100644
>> index 0000000..f5525f1
>> --- /dev/null
>> +++ b/arch/arm64/include/uapi/asm/kvm.h
>> @@ -0,0 +1,112 @@
>> +/*
>> + * Copyright (C) 2012 - ARM Ltd
>> + * Author: Marc Zyngier <marc.zyngier@arm.com>
>> + *
>> + * Derived from arch/arm/include/uapi/asm/kvm.h:
>> + * Copyright (C) 2012 - Virtual Open Systems and Columbia University
>> + * Author: Christoffer Dall <c.dall@virtualopensystems.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program. If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#ifndef __ARM_KVM_H__
>> +#define __ARM_KVM_H__
>> +
>> +#define KVM_SPSR_EL1 0
>> +#define KVM_NR_SPSR 1
>> +
>> +#ifndef __ASSEMBLY__
>> +#include <asm/types.h>
>> +#include <asm/ptrace.h>
>> +
>> +#define __KVM_HAVE_GUEST_DEBUG
>> +#define __KVM_HAVE_IRQ_LINE
>> +
>> +#define KVM_REG_SIZE(id) \
>> + (1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
>> +
>> +struct kvm_regs {
>> + struct user_pt_regs regs; /* sp = sp_el0 */
>> +
>> + unsigned long sp_el1;
>> + unsigned long elr_el1;
>> +
>> + unsigned long spsr[KVM_NR_SPSR];
>> +};
>> +
>
> Using long in uapi is generally a mistake: with gcc it has
> a different size depending on whether you build in 64 or 32 bit mode.
> I think it is better to make it __u64.
Ah, well spotted. Definitely a mistake.
>> +/* Supported Processor Types */
>> +#define KVM_ARM_TARGET_CORTEX_A57 0
>> +#define KVM_ARM_NUM_TARGETS 1
>> +
>> +/* KVM_ARM_SET_DEVICE_ADDR ioctl id encoding */
>> +#define KVM_ARM_DEVICE_TYPE_SHIFT 0
>> +#define KVM_ARM_DEVICE_TYPE_MASK (0xffff << KVM_ARM_DEVICE_TYPE_SHIFT)
>> +#define KVM_ARM_DEVICE_ID_SHIFT 16
>> +#define KVM_ARM_DEVICE_ID_MASK (0xffff << KVM_ARM_DEVICE_ID_SHIFT)
>> +
>> +/* Supported device IDs */
>> +#define KVM_ARM_DEVICE_VGIC_V2 0
>> +
>> +/* Supported VGIC address types */
>> +#define KVM_VGIC_V2_ADDR_TYPE_DIST 0
>> +#define KVM_VGIC_V2_ADDR_TYPE_CPU 1
>> +
>> +#define KVM_VGIC_V2_DIST_SIZE 0x1000
>> +#define KVM_VGIC_V2_CPU_SIZE 0x2000
>> +
>> +struct kvm_vcpu_init {
>> + __u32 target;
>> + __u32 features[7];
>> +};
>> +
>> +struct kvm_sregs {
>> +};
>> +
>> +struct kvm_fpu {
>> +};
>> +
>> +struct kvm_guest_debug_arch {
>> +};
>> +
>> +struct kvm_debug_exit_arch {
>> +};
>> +
>
> This is a problem too: structure alignment is different
> in -m32 versus -m64 modes, which will affect the offset of
> the following fields. I think it's best to add a "padding"
> field in there, and size it to multiple of 8 bytes.
Sorry, which field are you referring to?
> I think the same is a good idea for other empty structures,
> since otherwise the padding is implicit and
> not initialized by gcc, and it is hard not to leak
> info to userspace when you copy these structures out.
>
> And, it'll be handy if you want to extend the structures
> down the line.
Sure, that would make some sense.
Thanks,
M.
--
Fast, cheap, reliable. Pick two.
^ permalink raw reply [flat|nested] 128+ messages in thread
* Re: [kvmarm] [PATCH 08/29] arm64: KVM: user space interface
2013-03-08 3:46 ` Marc Zyngier
@ 2013-03-10 9:23 ` Michael S. Tsirkin
-1 siblings, 0 replies; 128+ messages in thread
From: Michael S. Tsirkin @ 2013-03-10 9:23 UTC (permalink / raw)
To: Marc Zyngier; +Cc: catalin.marinas, kvm, linux-arm-kernel, kvmarm
On Fri, Mar 08, 2013 at 04:46:15AM +0100, Marc Zyngier wrote:
> On Thu, 7 Mar 2013 10:09:03 +0200, "Michael S. Tsirkin" <mst@redhat.com>
> wrote:
> > On Tue, Mar 05, 2013 at 03:47:24AM +0000, Marc Zyngier wrote:
> >> Provide the kvm.h file that defines the user space visible
> >> interface.
> >>
> >> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> >> ---
> >> arch/arm64/include/uapi/asm/kvm.h | 112
> >> ++++++++++++++++++++++++++++++++++++++
> >> 1 file changed, 112 insertions(+)
> >> create mode 100644 arch/arm64/include/uapi/asm/kvm.h
> >>
> >> diff --git a/arch/arm64/include/uapi/asm/kvm.h
> >> b/arch/arm64/include/uapi/asm/kvm.h
> >> new file mode 100644
> >> index 0000000..f5525f1
> >> --- /dev/null
> >> +++ b/arch/arm64/include/uapi/asm/kvm.h
> >> @@ -0,0 +1,112 @@
> >> +/*
> >> + * Copyright (C) 2012 - ARM Ltd
> >> + * Author: Marc Zyngier <marc.zyngier@arm.com>
> >> + *
> >> + * Derived from arch/arm/include/uapi/asm/kvm.h:
> >> + * Copyright (C) 2012 - Virtual Open Systems and Columbia University
> >> + * Author: Christoffer Dall <c.dall@virtualopensystems.com>
> >> + *
> >> + * This program is free software; you can redistribute it and/or modify
> >> + * it under the terms of the GNU General Public License version 2 as
> >> + * published by the Free Software Foundation.
> >> + *
> >> + * This program is distributed in the hope that it will be useful,
> >> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> >> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> >> + * GNU General Public License for more details.
> >> + *
> >> + * You should have received a copy of the GNU General Public License
> >> + * along with this program. If not, see <http://www.gnu.org/licenses/>.
> >> + */
> >> +
> >> +#ifndef __ARM_KVM_H__
> >> +#define __ARM_KVM_H__
> >> +
> >> +#define KVM_SPSR_EL1 0
> >> +#define KVM_NR_SPSR 1
> >> +
> >> +#ifndef __ASSEMBLY__
> >> +#include <asm/types.h>
> >> +#include <asm/ptrace.h>
> >> +
> >> +#define __KVM_HAVE_GUEST_DEBUG
> >> +#define __KVM_HAVE_IRQ_LINE
> >> +
> >> +#define KVM_REG_SIZE(id) \
> >> + (1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
> >> +
> >> +struct kvm_regs {
> >> + struct user_pt_regs regs; /* sp = sp_el0 */
> >> +
> >> + unsigned long sp_el1;
> >> + unsigned long elr_el1;
> >> +
> >> + unsigned long spsr[KVM_NR_SPSR];
> >> +};
> >> +
> >
> > Using long in uapi is generally a mistake: with gcc it has
> > a different size depending on whether you build in 64 or 32 bit mode.
> > I think it is better to make it __u64.
>
> Ah, well spotted. Definitely a mistake.
>
> >> +/* Supported Processor Types */
> >> +#define KVM_ARM_TARGET_CORTEX_A57 0
> >> +#define KVM_ARM_NUM_TARGETS 1
> >> +
> >> +/* KVM_ARM_SET_DEVICE_ADDR ioctl id encoding */
> >> +#define KVM_ARM_DEVICE_TYPE_SHIFT 0
> >> +#define KVM_ARM_DEVICE_TYPE_MASK (0xffff << KVM_ARM_DEVICE_TYPE_SHIFT)
> >> +#define KVM_ARM_DEVICE_ID_SHIFT 16
> >> +#define KVM_ARM_DEVICE_ID_MASK (0xffff << KVM_ARM_DEVICE_ID_SHIFT)
> >> +
> >> +/* Supported device IDs */
> >> +#define KVM_ARM_DEVICE_VGIC_V2 0
> >> +
> >> +/* Supported VGIC address types */
> >> +#define KVM_VGIC_V2_ADDR_TYPE_DIST 0
> >> +#define KVM_VGIC_V2_ADDR_TYPE_CPU 1
> >> +
> >> +#define KVM_VGIC_V2_DIST_SIZE 0x1000
> >> +#define KVM_VGIC_V2_CPU_SIZE 0x2000
> >> +
> >> +struct kvm_vcpu_init {
> >> + __u32 target;
> >> + __u32 features[7];
> >> +};
> >> +
> >> +struct kvm_sregs {
> >> +};
> >> +
> >> +struct kvm_fpu {
> >> +};
> >> +
> >> +struct kvm_guest_debug_arch {
> >> +};
> >> +
> >> +struct kvm_debug_exit_arch {
> >> +};
> >> +
> >
> > This is a problem too: structure alignment is different
> > in -m32 versus -m64 modes, which will affect the offset of
> > the following fields. I think it's best to add a "padding"
> > field in there, and size it to multiple of 8 bytes.
>
> Sorry, which field are you referring to?
Sorry, I was wrong here: while it seems to be compiler dependent, gcc
returns 0 as the size of an empty structure.
> > I think the same is a good idea for other empty structures,
> > since otherwise the padding is implicit and
> > not initialized by gcc, and it is hard not to leak
> > info to userspace when you copy these structures out.
> >
> > And, it'll be handy if you want to extend the structures
> > down the line.
>
> Sure, that would make some sense.
>
> Thanks,
>
> M.
> --
> Fast, cheap, reliable. Pick two.
^ permalink raw reply [flat|nested] 128+ messages in thread
* Re: [PATCH 04/29] arm64: KVM: system register definitions for 64bit guests
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-12 13:20 ` Christopher Covington
-1 siblings, 0 replies; 128+ messages in thread
From: Christopher Covington @ 2013-03-12 13:20 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm, catalin.marinas
Hi Marc,
Here are a few minor questions and suggestions.
On 03/04/2013 10:47 PM, Marc Zyngier wrote:
> Define the saved/restored registers for 64bit guests.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm64/include/asm/kvm_asm.h | 68 ++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 68 insertions(+)
> create mode 100644 arch/arm64/include/asm/kvm_asm.h
[...]
> +/*
> + * 0 is reserved as an invalid value.
> + * Order *must* be kept in sync with the hyp switch code.
> + */
> +#define MPIDR_EL1 1 /* MultiProcessor Affinity Register */
> +#define CSSELR_EL1 2 /* Cache Size Selection Register */
> +#define SCTLR_EL1 3 /* System Control Register */
> +#define ACTLR_EL1 4 /* Auxilliary Control Register */
> +#define CPACR_EL1 5 /* Coprocessor Access Control */
> +#define TTBR0_EL1 6 /* Translation Table Base Register 0 */
> +#define TTBR1_EL1 7 /* Translation Table Base Register 1 */
> +#define TCR_EL1 8 /* Translation Control Register */
> +#define ESR_EL1 9 /* Exception Syndrome Register */
> +#define AFSR0_EL1 10 /* Auxilary Fault Status Register 0 */
> +#define AFSR1_EL1 11 /* Auxilary Fault Status Register 1 */
> +#define FAR_EL1 12 /* Fault Address Register */
> +#define MAIR_EL1 13 /* Memory Attribute Indirection Register */
> +#define VBAR_EL1 14 /* Vector Base Address Register */
> +#define CONTEXTIDR_EL1 15 /* Context ID Register */
> +#define TPIDR_EL0 16 /* Thread ID, User R/W */
> +#define TPIDRRO_EL0 17 /* Thread ID, User R/O */
> +#define TPIDR_EL1 18 /* Thread ID, Privileged */
> +#define AMAIR_EL1 19 /* Aux Memory Attribute Indirection Register */
> +#define CNTKCTL_EL1 20 /* Timer Control Register (EL1) */
Any particular reason to have CNTKCTL_EL1 enumerated here and handled by
(dump|load)_sysregs, but all the other timer registers handled by
(save|restore)_timer_state in hyp.S?
> +#define NR_SYS_REGS 21
[...]
> +/* Hyp Syndrome Register (HSR) bits */
> +#define ESR_EL2_EC_SHIFT (26)
> +#define ESR_EL2_EC (0x3fU << ESR_EL2_EC_SHIFT)
> +#define ESR_EL2_IL (1U << 25)
> +#define ESR_EL2_ISS (ESR_EL2_IL - 1)
> +#define ESR_EL2_ISV_SHIFT (24)
> +#define ESR_EL2_ISV (1U << ESR_EL2_ISV_SHIFT)
> +#define ESR_EL2_SAS_SHIFT (22)
> +#define ESR_EL2_SAS (3U << ESR_EL2_SAS_SHIFT)
> +#define ESR_EL2_SSE (1 << 21)
> +#define ESR_EL2_SRT_SHIFT (16)
> +#define ESR_EL2_SRT_MASK (0x1f << ESR_EL2_SRT_SHIFT)
> +#define ESR_EL2_SF (1 << 15)
> +#define ESR_EL2_AR (1 << 14)
> +#define ESR_EL2_EA (1 << 9)
> +#define ESR_EL2_CM (1 << 8)
> +#define ESR_EL2_S1PTW (1 << 7)
> +#define ESR_EL2_WNR (1 << 6)
> +#define ESR_EL2_FSC (0x3f)
> +#define ESR_EL2_FSC_TYPE (0x3c)
> +
> +#define ESR_EL2_CV_SHIFT (24)
> +#define ESR_EL2_CV (1U << ESR_EL2_CV_SHIFT)
> +#define ESR_EL2_COND_SHIFT (20)
> +#define ESR_EL2_COND (0xfU << ESR_EL2_COND_SHIFT)
[...]
> +#define ESR_EL2_EC_UNKNOWN (0x00)
> +#define ESR_EL2_EC_WFI (0x01)
> +#define ESR_EL2_EC_CP15_32 (0x03)
> +#define ESR_EL2_EC_CP15_64 (0x04)
> +#define ESR_EL2_EC_CP14_MR (0x05)
> +#define ESR_EL2_EC_CP14_LS (0x06)
> +#define ESR_EL2_EC_SIMD_FP (0x07)
> +#define ESR_EL2_EC_CP10_ID (0x08)
> +#define ESR_EL2_EC_CP14_64 (0x0C)
> +#define ESR_EL2_EC_ILL_ISS (0x0E)
> +#define ESR_EL2_EC_SVC32 (0x11)
> +#define ESR_EL2_EC_HVC32 (0x12)
> +#define ESR_EL2_EC_SMC32 (0x13)
> +#define ESR_EL2_EC_SVC64 (0x14)
> +#define ESR_EL2_EC_HVC64 (0x16)
> +#define ESR_EL2_EC_SMC64 (0x17)
> +#define ESR_EL2_EC_SYS64 (0x18)
> +#define ESR_EL2_EC_IABT (0x20)
> +#define ESR_EL2_EC_IABT_HYP (0x21)
> +#define ESR_EL2_EC_PC_ALIGN (0x22)
> +#define ESR_EL2_EC_DABT (0x24)
> +#define ESR_EL2_EC_DABT_HYP (0x25)
> +#define ESR_EL2_EC_SP_ALIGN (0x26)
> +#define ESR_EL2_EC_FP32 (0x28)
> +#define ESR_EL2_EC_FP64 (0x2C)
> +#define ESR_EL2_EC_SERRROR (0x2F)
> +#define ESR_EL2_EC_BREAKPT (0x30)
> +#define ESR_EL2_EC_BREAKPT_HYP (0x31)
> +#define ESR_EL2_EC_SOFTSTP (0x32)
> +#define ESR_EL2_EC_SOFTSTP_HYP (0x33)
> +#define ESR_EL2_EC_WATCHPT (0x34)
> +#define ESR_EL2_EC_WATCHPT_HYP (0x35)
> +#define ESR_EL2_EC_BKPT32 (0x38)
> +#define ESR_EL2_EC_VECTOR32 (0x3A)
> +#define ESR_EL2_EC_BKPT64 (0x3C)
Is there any functional difference between these fields and bits at EL2 and
these fields at other exception levels? If not, you could consider omitting
"_EL2".
[...]
Regards,
Christopher
--
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by
the Linux Foundation
^ permalink raw reply [flat|nested] 128+ messages in thread
* Re: [PATCH 06/29] arm64: KVM: fault injection into a guest
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-12 13:20 ` Christopher Covington
-1 siblings, 0 replies; 128+ messages in thread
From: Christopher Covington @ 2013-03-12 13:20 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm, catalin.marinas
Hi Marc,
I noticed you went through the trouble of defining several constants in an
earlier patch. Perhaps you could put them to use here?
On 03/04/2013 10:47 PM, Marc Zyngier wrote:
> Implement the injection of a fault (undefined, data abort or
> prefetch abort) into a 64bit guest.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm64/kvm/inject_fault.c | 117 ++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 117 insertions(+)
> create mode 100644 arch/arm64/kvm/inject_fault.c
[...]
> +static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr)
> +{
> + unsigned long cpsr = *vcpu_cpsr(vcpu);
> + int is_aarch32;
> + u32 esr = 0;
> +
> + is_aarch32 = vcpu_mode_is_32bit(vcpu);
> +
> + *vcpu_spsr(vcpu) = cpsr;
> + vcpu->arch.regs.elr_el1 = *vcpu_pc(vcpu);
> +
> + *vcpu_cpsr(vcpu) = PSR_MODE_EL1h | PSR_A_BIT | PSR_F_BIT | PSR_I_BIT;
> + *vcpu_pc(vcpu) = vcpu->arch.sys_regs[VBAR_EL1] + 0x200;
> +
> + vcpu->arch.sys_regs[FAR_EL1] = addr;
> +
> + /*
> + * Build an {i,d}abort, depending on the level and the
> + * instruction set. Report an external synchronous abort.
> + */
> + if (kvm_vcpu_trap_il_is32bit(vcpu))
> + esr |= (1 << 25);
ESR_EL2_IL
> + if (is_aarch32 || (cpsr & PSR_MODE_MASK) == PSR_MODE_EL0t)
> + esr |= (0x20 << 26);
ESR_EL2_EC_IABT << ESR_EL2_EC_SHIFT
> + else
> + esr |= (0x21 << 26);
ESR_EL2_EC_IABT_HYP << ESR_EL2_EC_SHIFT
> +
> + if (!is_iabt)
> + esr |= (1 << 28);
ESR_EL2_EC_DABT << ESR_EL2_EC_SHIFT
> +
> + vcpu->arch.sys_regs[ESR_EL1] = esr | 0x10;
> +}
> +
> +static void inject_undef64(struct kvm_vcpu *vcpu)
> +{
> + unsigned long cpsr = *vcpu_cpsr(vcpu);
> + u32 esr = 0;
> +
> + *vcpu_spsr(vcpu) = cpsr;
> + vcpu->arch.regs.elr_el1 = *vcpu_pc(vcpu);
> +
> + *vcpu_cpsr(vcpu) = PSR_MODE_EL1h | PSR_F_BIT | PSR_I_BIT;
> + *vcpu_pc(vcpu) = vcpu->arch.sys_regs[VBAR_EL1] + 0x200;
> +
> + /*
> + * Build an unknown exception, depending on the instruction
> + * set.
> + */
> + if (kvm_vcpu_trap_il_is32bit(vcpu))
> + esr |= (1 << 25);
ESR_EL2_IL
> +
> + vcpu->arch.sys_regs[ESR_EL1] = esr;
> +}
[...]
Regards,
Christopher
--
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by
the Linux Foundation
^ permalink raw reply [flat|nested] 128+ messages in thread
* Re: [PATCH 04/29] arm64: KVM: system register definitions for 64bit guests
2013-03-12 13:20 ` Christopher Covington
@ 2013-03-12 13:41 ` Christopher Covington
-1 siblings, 0 replies; 128+ messages in thread
From: Christopher Covington @ 2013-03-12 13:41 UTC (permalink / raw)
To: Marc Zyngier; +Cc: catalin.marinas, kvm, linux-arm-kernel, kvmarm
I now realize I accidentally appended some of the contents of the kvm_arm.h
patch (03/29) and corresponding comment to my reply to the kvm_asm.h patch
(04/29). If it's not clear what I meant, please let me know.
Christopher
--
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by
the Linux Foundation
^ permalink raw reply [flat|nested] 128+ messages in thread
* Re: [PATCH 04/29] arm64: KVM: system register definitions for 64bit guests
2013-03-12 13:20 ` Christopher Covington
@ 2013-03-12 13:50 ` Marc Zyngier
-1 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-12 13:50 UTC (permalink / raw)
To: Christopher Covington; +Cc: linux-arm-kernel, kvm, kvmarm, Catalin Marinas
On 12/03/13 13:20, Christopher Covington wrote:
Hi Christopher,
> Here are a few minor questions and suggestions.
>
> On 03/04/2013 10:47 PM, Marc Zyngier wrote:
>> Define the saved/restored registers for 64bit guests.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>> arch/arm64/include/asm/kvm_asm.h | 68 ++++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 68 insertions(+)
>> create mode 100644 arch/arm64/include/asm/kvm_asm.h
>
> [...]
>
>> +/*
>> + * 0 is reserved as an invalid value.
>> + * Order *must* be kept in sync with the hyp switch code.
>> + */
>> +#define MPIDR_EL1 1 /* MultiProcessor Affinity Register */
>> +#define CSSELR_EL1 2 /* Cache Size Selection Register */
>> +#define SCTLR_EL1 3 /* System Control Register */
>> +#define ACTLR_EL1 4 /* Auxilliary Control Register */
>> +#define CPACR_EL1 5 /* Coprocessor Access Control */
>> +#define TTBR0_EL1 6 /* Translation Table Base Register 0 */
>> +#define TTBR1_EL1 7 /* Translation Table Base Register 1 */
>> +#define TCR_EL1 8 /* Translation Control Register */
>> +#define ESR_EL1 9 /* Exception Syndrome Register */
>> +#define AFSR0_EL1 10 /* Auxilary Fault Status Register 0 */
>> +#define AFSR1_EL1 11 /* Auxilary Fault Status Register 1 */
>> +#define FAR_EL1 12 /* Fault Address Register */
>> +#define MAIR_EL1 13 /* Memory Attribute Indirection Register */
>> +#define VBAR_EL1 14 /* Vector Base Address Register */
>> +#define CONTEXTIDR_EL1 15 /* Context ID Register */
>> +#define TPIDR_EL0 16 /* Thread ID, User R/W */
>> +#define TPIDRRO_EL0 17 /* Thread ID, User R/O */
>> +#define TPIDR_EL1 18 /* Thread ID, Privileged */
>> +#define AMAIR_EL1 19 /* Aux Memory Attribute Indirection Register */
>> +#define CNTKCTL_EL1 20 /* Timer Control Register (EL1) */
>
> Any particular reason to have CNTKCTL_EL1 enumerated here and handled by
> (dump|load)_sysregs, but all the other timer registers handled by
> (save|restore)_timer_state in hyp.S?
This one is a bit of an odd one, as it controls the guest kernel's
decision to expose its timer/counter to its own userspace. It is not
directly involved in the timekeeping itself.
As such, my choice was to make it part of the CPU state rather than the
timer state. But I admit this may not be the most obvious choice.
>> +#define NR_SYS_REGS 21
>
> [...]
>
>> +/* Hyp Syndrome Register (HSR) bits */
>> +#define ESR_EL2_EC_SHIFT (26)
>> +#define ESR_EL2_EC (0x3fU << ESR_EL2_EC_SHIFT)
>> +#define ESR_EL2_IL (1U << 25)
>> +#define ESR_EL2_ISS (ESR_EL2_IL - 1)
>> +#define ESR_EL2_ISV_SHIFT (24)
>> +#define ESR_EL2_ISV (1U << ESR_EL2_ISV_SHIFT)
>> +#define ESR_EL2_SAS_SHIFT (22)
>> +#define ESR_EL2_SAS (3U << ESR_EL2_SAS_SHIFT)
>> +#define ESR_EL2_SSE (1 << 21)
>> +#define ESR_EL2_SRT_SHIFT (16)
>> +#define ESR_EL2_SRT_MASK (0x1f << ESR_EL2_SRT_SHIFT)
>> +#define ESR_EL2_SF (1 << 15)
>> +#define ESR_EL2_AR (1 << 14)
>> +#define ESR_EL2_EA (1 << 9)
>> +#define ESR_EL2_CM (1 << 8)
>> +#define ESR_EL2_S1PTW (1 << 7)
>> +#define ESR_EL2_WNR (1 << 6)
>> +#define ESR_EL2_FSC (0x3f)
>> +#define ESR_EL2_FSC_TYPE (0x3c)
>> +
>> +#define ESR_EL2_CV_SHIFT (24)
>> +#define ESR_EL2_CV (1U << ESR_EL2_CV_SHIFT)
>> +#define ESR_EL2_COND_SHIFT (20)
>> +#define ESR_EL2_COND (0xfU << ESR_EL2_COND_SHIFT)
>
> [...]
>
>> +#define ESR_EL2_EC_UNKNOWN (0x00)
>> +#define ESR_EL2_EC_WFI (0x01)
>> +#define ESR_EL2_EC_CP15_32 (0x03)
>> +#define ESR_EL2_EC_CP15_64 (0x04)
>> +#define ESR_EL2_EC_CP14_MR (0x05)
>> +#define ESR_EL2_EC_CP14_LS (0x06)
>> +#define ESR_EL2_EC_SIMD_FP (0x07)
>> +#define ESR_EL2_EC_CP10_ID (0x08)
>> +#define ESR_EL2_EC_CP14_64 (0x0C)
>> +#define ESR_EL2_EC_ILL_ISS (0x0E)
>> +#define ESR_EL2_EC_SVC32 (0x11)
>> +#define ESR_EL2_EC_HVC32 (0x12)
>> +#define ESR_EL2_EC_SMC32 (0x13)
>> +#define ESR_EL2_EC_SVC64 (0x14)
>> +#define ESR_EL2_EC_HVC64 (0x16)
>> +#define ESR_EL2_EC_SMC64 (0x17)
>> +#define ESR_EL2_EC_SYS64 (0x18)
>> +#define ESR_EL2_EC_IABT (0x20)
>> +#define ESR_EL2_EC_IABT_HYP (0x21)
>> +#define ESR_EL2_EC_PC_ALIGN (0x22)
>> +#define ESR_EL2_EC_DABT (0x24)
>> +#define ESR_EL2_EC_DABT_HYP (0x25)
>> +#define ESR_EL2_EC_SP_ALIGN (0x26)
>> +#define ESR_EL2_EC_FP32 (0x28)
>> +#define ESR_EL2_EC_FP64 (0x2C)
>> +#define ESR_EL2_EC_SERRROR (0x2F)
>> +#define ESR_EL2_EC_BREAKPT (0x30)
>> +#define ESR_EL2_EC_BREAKPT_HYP (0x31)
>> +#define ESR_EL2_EC_SOFTSTP (0x32)
>> +#define ESR_EL2_EC_SOFTSTP_HYP (0x33)
>> +#define ESR_EL2_EC_WATCHPT (0x34)
>> +#define ESR_EL2_EC_WATCHPT_HYP (0x35)
>> +#define ESR_EL2_EC_BKPT32 (0x38)
>> +#define ESR_EL2_EC_VECTOR32 (0x3A)
>> +#define ESR_EL2_EC_BKPT64 (0x3C)
>
> Is there any functional difference between these fields and bits at EL2 and
> these fields at other exception levels? If not, you could consider omitting
> "_EL2".
>
> [...]
A few values here are EL2 specific (ESR_EL2_EC_*_HYP,
ESR_EL2_EC_HVC{32,64}, ESR_EL2_EC_CP1*...).
The fields themselves should be broadly compatible (I should go and
verify this, though). What I want to avoid is any sort of confusion
between exception levels, and to make it clear which level we're
operating on. If I can be sure we always operate in an unambiguous
context, then _EL2 can indeed go.
M.
--
Jazz is not dead. It just smells funny...
* Re: [PATCH 06/29] arm64: KVM: fault injection into a guest
2013-03-12 13:20 ` Christopher Covington
@ 2013-03-12 14:25 ` Marc Zyngier
-1 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-12 14:25 UTC (permalink / raw)
To: Christopher Covington; +Cc: linux-arm-kernel, kvm, kvmarm, Catalin Marinas
On 12/03/13 13:20, Christopher Covington wrote:
Hi Christopher,
> I noticed you went through the trouble of defining several constants in an
> earlier patch. Perhaps you could put them to use here?
>
> On 03/04/2013 10:47 PM, Marc Zyngier wrote:
>> Implement the injection of a fault (undefined, data abort or
>> prefetch abort) into a 64bit guest.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>> arch/arm64/kvm/inject_fault.c | 117 ++++++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 117 insertions(+)
>> create mode 100644 arch/arm64/kvm/inject_fault.c
>
> [...]
>
>> +static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr)
>> +{
>> + unsigned long cpsr = *vcpu_cpsr(vcpu);
>> + int is_aarch32;
>> + u32 esr = 0;
>> +
>> + is_aarch32 = vcpu_mode_is_32bit(vcpu);
>> +
>> + *vcpu_spsr(vcpu) = cpsr;
>> + vcpu->arch.regs.elr_el1 = *vcpu_pc(vcpu);
>> +
>> + *vcpu_cpsr(vcpu) = PSR_MODE_EL1h | PSR_A_BIT | PSR_F_BIT | PSR_I_BIT;
>> + *vcpu_pc(vcpu) = vcpu->arch.sys_regs[VBAR_EL1] + 0x200;
>> +
>> + vcpu->arch.sys_regs[FAR_EL1] = addr;
>> +
>> + /*
>> + * Build an {i,d}abort, depending on the level and the
>> + * instruction set. Report an external synchronous abort.
>> + */
>> + if (kvm_vcpu_trap_il_is32bit(vcpu))
>> + esr |= (1 << 25);
>
> ESR_EL2_IL
This illustrates what I was saying earlier about confusing exception
levels. Here, we're dealing with the guest's EL1. Using the _EL2 define
is semantically wrong, even if it has the same value.
>
>> + if (is_aarch32 || (cpsr & PSR_MODE_MASK) == PSR_MODE_EL0t)
>> + esr |= (0x20 << 26);
>
> ESR_EL2_EC_IABT << ESR_EL2_EC_SHIFT
>
>> + else
>> + esr |= (0x21 << 26);
>
> ESR_EL2_EC_IABT_HYP << ESR_EL2_EC_SHIFT
Even worse here. What this actually means is "Exception taken at the
current level", which is EL1 in this case. Having _HYP here is
completely misleading.
Now, maybe I should review all these defines and give them a more
general meaning. Then we'd be able to share the defines across levels
(arch/arm64/kernel/entry.S could use some defines too...). But overall,
I'm quite reluctant to start mixing ESR_EL1 and ESR_EL2.
>> +
>> + if (!is_iabt)
>> + esr |= (1 << 28);
>
> ESR_EL2_EC_DABT << ESR_EL2_EC_SHIFT
Nasty. Works, but very nasty... ;-)
>> +
>> + vcpu->arch.sys_regs[ESR_EL1] = esr | 0x10;
>> +}
>> +
>> +static void inject_undef64(struct kvm_vcpu *vcpu)
>> +{
>> + unsigned long cpsr = *vcpu_cpsr(vcpu);
>> + u32 esr = 0;
>> +
>> + *vcpu_spsr(vcpu) = cpsr;
>> + vcpu->arch.regs.elr_el1 = *vcpu_pc(vcpu);
>> +
>> + *vcpu_cpsr(vcpu) = PSR_MODE_EL1h | PSR_F_BIT | PSR_I_BIT;
>> + *vcpu_pc(vcpu) = vcpu->arch.sys_regs[VBAR_EL1] + 0x200;
>> +
>> + /*
>> + * Build an unknown exception, depending on the instruction
>> + * set.
>> + */
>> + if (kvm_vcpu_trap_il_is32bit(vcpu))
>> + esr |= (1 << 25);
>
> ESR_EL2_IL
>
>> +
>> + vcpu->arch.sys_regs[ESR_EL1] = esr;
>> +}
Thanks for reviewing,
M.
--
Jazz is not dead. It just smells funny...
* Re: [PATCH 12/29] arm64: KVM: kvm_arch and kvm_vcpu_arch definitions
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-12 17:30 ` Christopher Covington
-1 siblings, 0 replies; 128+ messages in thread
From: Christopher Covington @ 2013-03-12 17:30 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm, catalin.marinas
Hi Marc,
On 03/04/2013 10:47 PM, Marc Zyngier wrote:
> Provide the architecture dependent structures for VM and
> vcpu abstractions.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm64/include/asm/kvm_host.h | 178 ++++++++++++++++++++++++++++++++++++++
> 1 file changed, 178 insertions(+)
> create mode 100644 arch/arm64/include/asm/kvm_host.h
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> new file mode 100644
> index 0000000..d1095d1
> --- /dev/null
> +++ b/arch/arm64/include/asm/kvm_host.h
[...]
> +struct kvm_vcpu_fault_info {
> + u32 esr_el2; /* Hyp Syndrom Register */
Syndrome
> + u64 far_el2; /* Hyp Fault Address Register */
> + u64 hpfar_el2; /* Hyp IPA Fault Address Register */
> +};
[...]
Regards,
Christopher
--
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by
the Linux Foundation
* Re: [PATCH 14/29] arm64: KVM: guest one-reg interface
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-12 17:31 ` Christopher Covington
-1 siblings, 0 replies; 128+ messages in thread
From: Christopher Covington @ 2013-03-12 17:31 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm, Catalin Marinas
Hi Marc,
On 03/04/2013 10:47 PM, Marc Zyngier wrote:
> Let userspace play with the guest registers.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm64/kvm/guest.c | 240 +++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 240 insertions(+)
> create mode 100644 arch/arm64/kvm/guest.c
[...]
> +int __attribute_const__ kvm_target_cpu(void)
> +{
> + unsigned long implementor = read_cpuid_implementor();
> + unsigned long part_number = read_cpuid_part_number();
> +
> + if (implementor != ARM_CPU_IMP_ARM)
> + return -EINVAL;
> +
> + switch (part_number) {
> + case ARM_CPU_PART_AEM_V8:
> + case ARM_CPU_PART_FOUNDATION:
> + /* Treat the models just as an A57 for the time being */
> + case ARM_CPU_PART_CORTEX_A57:
> + return KVM_ARM_TARGET_CORTEX_A57;
> + default:
> + return -EINVAL;
> + }
> +}
What is the motivation behind these checks? Why not let any ARMv8 system that
has EL2 host a virtualized Cortex A57 guest?
[...]
Thanks,
Christopher
--
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by
the Linux Foundation
* Re: [PATCH 14/29] arm64: KVM: guest one-reg interface
2013-03-12 17:31 ` Christopher Covington
@ 2013-03-12 18:05 ` Marc Zyngier
-1 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-12 18:05 UTC (permalink / raw)
To: Christopher Covington; +Cc: linux-arm-kernel, kvm, kvmarm, Catalin Marinas
On 12/03/13 17:31, Christopher Covington wrote:
> Hi Marc,
>
> On 03/04/2013 10:47 PM, Marc Zyngier wrote:
>> Let userspace play with the guest registers.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>> arch/arm64/kvm/guest.c | 240 +++++++++++++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 240 insertions(+)
>> create mode 100644 arch/arm64/kvm/guest.c
>
> [...]
>
>> +int __attribute_const__ kvm_target_cpu(void)
>> +{
>> + unsigned long implementor = read_cpuid_implementor();
>> + unsigned long part_number = read_cpuid_part_number();
>> +
>> + if (implementor != ARM_CPU_IMP_ARM)
>> + return -EINVAL;
>> +
>> + switch (part_number) {
>> + case ARM_CPU_PART_AEM_V8:
>> + case ARM_CPU_PART_FOUNDATION:
>> + /* Treat the models just as an A57 for the time being */
>> + case ARM_CPU_PART_CORTEX_A57:
>> + return KVM_ARM_TARGET_CORTEX_A57;
>> + default:
>> + return -EINVAL;
>> + }
>> +}
>
> What is the motivation behind these checks? Why not let any ARMv8 system that
> has EL2 host a virtualized Cortex A57 guest?
The main reason is errata management. How do you deal with errata in the
guest when you hide the underlying host CPU? I don't have an answer to
that. So for the time being, we only allow the guest to see the same CPU
as the host, and require that new CPUs are added to this function.
We went the same way for KVM/ARM.
M.
--
Jazz is not dead. It just smells funny...
* Re: [PATCH 14/29] arm64: KVM: guest one-reg interface
2013-03-12 18:05 ` Marc Zyngier
@ 2013-03-12 22:07 ` Christopher Covington
-1 siblings, 0 replies; 128+ messages in thread
From: Christopher Covington @ 2013-03-12 22:07 UTC (permalink / raw)
To: Marc Zyngier; +Cc: Catalin Marinas, kvm, linux-arm-kernel, kvmarm
Hi Marc,
On 03/12/2013 02:05 PM, Marc Zyngier wrote:
> On 12/03/13 17:31, Christopher Covington wrote:
>> Hi Marc,
>>
>> On 03/04/2013 10:47 PM, Marc Zyngier wrote:
>>> Let userspace play with the guest registers.
>>>
>>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>>> ---
>>> arch/arm64/kvm/guest.c | 240 +++++++++++++++++++++++++++++++++++++++++++++++++
>>> 1 file changed, 240 insertions(+)
>>> create mode 100644 arch/arm64/kvm/guest.c
>>
>> [...]
>>
>>> +int __attribute_const__ kvm_target_cpu(void)
>>> +{
>>> + unsigned long implementor = read_cpuid_implementor();
>>> + unsigned long part_number = read_cpuid_part_number();
>>> +
>>> + if (implementor != ARM_CPU_IMP_ARM)
>>> + return -EINVAL;
>>> +
>>> + switch (part_number) {
>>> + case ARM_CPU_PART_AEM_V8:
>>> + case ARM_CPU_PART_FOUNDATION:
>>> + /* Treat the models just as an A57 for the time being */
>>> + case ARM_CPU_PART_CORTEX_A57:
>>> + return KVM_ARM_TARGET_CORTEX_A57;
>>> + default:
>>> + return -EINVAL;
>>> + }
>>> +}
>>
>> What is the motivation behind these checks? Why not let any ARMv8 system that
>> has EL2 host a virtualized Cortex A57 guest?
>
> The main reason is errata management. How do you deal with errata in the
> guest when you hide the underlying host CPU? I don't have an answer to
> that. So for the time being, we only allow the guest to see the same CPU
> as the host, and require that new CPUs are added to this function.
Can you please elaborate on how this code ensures the guest is seeing the same
CPU as the host? It looks rather unlike VPIDR = MIDR.
Thanks,
Christopher
--
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by
the Linux Foundation
* Re: [PATCH 14/29] arm64: KVM: guest one-reg interface
2013-03-12 22:07 ` Christopher Covington
@ 2013-03-13 7:48 ` Marc Zyngier
-1 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-13 7:48 UTC (permalink / raw)
To: Christopher Covington; +Cc: Catalin Marinas, kvm, linux-arm-kernel, kvmarm
On 12/03/13 22:07, Christopher Covington wrote:
Hi Christopher,
> On 03/12/2013 02:05 PM, Marc Zyngier wrote:
>> On 12/03/13 17:31, Christopher Covington wrote:
>>> Hi Marc,
>>>
>>> On 03/04/2013 10:47 PM, Marc Zyngier wrote:
>>>> Let userspace play with the guest registers.
>>>>
>>>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>>>> ---
>>>> arch/arm64/kvm/guest.c | 240 +++++++++++++++++++++++++++++++++++++++++++++++++
>>>> 1 file changed, 240 insertions(+)
>>>> create mode 100644 arch/arm64/kvm/guest.c
>>>
>>> [...]
>>>
>>>> +int __attribute_const__ kvm_target_cpu(void)
>>>> +{
>>>> + unsigned long implementor = read_cpuid_implementor();
>>>> + unsigned long part_number = read_cpuid_part_number();
>>>> +
>>>> + if (implementor != ARM_CPU_IMP_ARM)
>>>> + return -EINVAL;
>>>> +
>>>> + switch (part_number) {
>>>> + case ARM_CPU_PART_AEM_V8:
>>>> + case ARM_CPU_PART_FOUNDATION:
>>>> + /* Treat the models just as an A57 for the time being */
>>>> + case ARM_CPU_PART_CORTEX_A57:
>>>> + return KVM_ARM_TARGET_CORTEX_A57;
>>>> + default:
>>>> + return -EINVAL;
>>>> + }
>>>> +}
>>>
>>> What is the motivation behind these checks? Why not let any ARMv8 system that
>>> has EL2 host a virtualized Cortex A57 guest?
>>
>> The main reason is errata management. How do you deal with errata in the
>> guest when you hide the underlying host CPU? I don't have an answer to
>> that. So for the time being, we only allow the guest to see the same CPU
>> as the host, and require that new CPUs are added to this function.
>
> Can you please elaborate on how this code ensures the guest is seeing the same
> CPU as the host? It looks rather unlike VPIDR = MIDR.
I was merely elaborating on the "why". As for the "how":
- vpidr_el2 is set in arch/arm64/kernel/head.S and never changed,
ensuring the guest sees the same thing as the kernel.
- Some additional code in guest.c ensures that the CPU requested by
userspace for the guest matches the host CPU.
- KVM_ARM_TARGET_CORTEX_A57 is used in sys_regs_a57.c to register the
sys_reg/cp15 handlers.
Cheers,
M.
--
Jazz is not dead. It just smells funny...
^ permalink raw reply [flat|nested] 128+ messages in thread
* Re: [PATCH 10/29] arm64: KVM: Cortex-A57 specific system registers handling
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-13 18:30 ` Christopher Covington
-1 siblings, 0 replies; 128+ messages in thread
From: Christopher Covington @ 2013-03-13 18:30 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm, catalin.marinas
Hi Marc,
I wonder if two of these registers could be handled in a generic fashion.
On 03/04/2013 10:47 PM, Marc Zyngier wrote:
> Add the support code for Cortex-A57 specific system registers.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm64/kvm/sys_regs_a57.c | 96 +++++++++++++++++++++++++++++++++++++++++++
[...]
> +#define MPIDR_EL1_AFF0_MASK 0xff
> +
> +static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> + /*
> + * Simply map the vcpu_id into the Aff0 field of the MPIDR.
> + */
> + vcpu->arch.sys_regs[MPIDR_EL1] = (1 << 31) | (vcpu->vcpu_id & MPIDR_EL1_AFF0_MASK);
> +}
What's A57-specific about this MPIDR behavior?
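For reference, the value the quoted reset_mpidr() stores can be modelled in isolation. This is a hedged sketch in plain C (not kernel code); bit 31 is the RES1 bit of MPIDR_EL1 on ARMv8, and the vcpu_id lands in Aff0:

```c
#include <stdint.h>

#define MPIDR_EL1_AFF0_MASK 0xffUL

/* Standalone model of what reset_mpidr() computes: set the RES1
 * bit (31) and map vcpu_id into Aff0, bits [7:0]. vcpu_ids above
 * 255 wrap into Aff0, as with the original mask. */
static uint64_t vcpu_reset_mpidr(unsigned long vcpu_id)
{
	return (1UL << 31) | (vcpu_id & MPIDR_EL1_AFF0_MASK);
}
```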
[...]
> +/*
> + * A57-specific sys-reg registers.
> + * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
> + */
> +static const struct sys_reg_desc a57_sys_regs[] = {
> + { Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b101), /* MPIDR_EL1 */
> + NULL, reset_mpidr, MPIDR_EL1 },
> + { Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b000), /* SCTLR_EL1 */
> + NULL, reset_val, SCTLR_EL1, 0x00C50078 },
> + { Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b001), /* ACTLR_EL1 */
> + access_actlr, reset_actlr, ACTLR_EL1 },
> + { Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b010), /* CPACR_EL1 */
> + NULL, reset_val, CPACR_EL1, 0 },
What's A57-specific about this CPACR behavior?
> +};
[...]
Thanks,
Christopher
--
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by
the Linux Foundation
* Re: [PATCH 16/29] arm64: KVM: HYP mode world switch implementation
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-13 19:59 ` Christopher Covington
-1 siblings, 0 replies; 128+ messages in thread
From: Christopher Covington @ 2013-03-13 19:59 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm, catalin.marinas
Hi Marc,
I like how you were able to use a common fpsimd_(save|restore) macro, and
wonder if you can't do the same sort of thing for the general purpose
registers and system registers. In the end, both guest and host are EL1
software, and while they may differ in terms of things like VTTBR settings and
physical timer access, my intuition is that which general purpose and system
registers need to be saved and restored on context switches is shared.
On 03/04/2013 10:47 PM, Marc Zyngier wrote:
> The HYP mode world switch in all its glory.
>
> Implements save/restore of host/guest registers, EL2 trapping,
> IPA resolution, and additional services (tlb invalidation).
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm64/kernel/asm-offsets.c | 33 ++
> arch/arm64/kvm/hyp.S | 756 ++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 789 insertions(+)
> create mode 100644 arch/arm64/kvm/hyp.S
[...]
> diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
[...]
> +.macro save_host_regs
> + push x19, x20
> + push x21, x22
> + push x23, x24
> + push x25, x26
> + push x27, x28
> + push x29, lr
> +
> + mrs x19, sp_el0
> + mrs x20, sp_el1
> + mrs x21, elr_el1
> + mrs x22, spsr_el1
> + mrs x23, elr_el2
> + mrs x24, spsr_el2
> +
> + push x19, x20
> + push x21, x22
> + push x23, x24
> +.endm
[...]
> +.macro save_guest_regs
> + // x0 is the vcpu address.
> + // x1 is the return code, do not corrupt!
> + // Guest's x0-x3 are on the stack
> +
> + // Compute base to save registers
> + add x2, x0, #REG_OFFSET(4)
> + mrs x3, sp_el0
> + stp x4, x5, [x2], #16
> + stp x6, x7, [x2], #16
> + stp x8, x9, [x2], #16
> + stp x10, x11, [x2], #16
> + stp x12, x13, [x2], #16
> + stp x14, x15, [x2], #16
> + stp x16, x17, [x2], #16
> + stp x18, x19, [x2], #16
> + stp x20, x21, [x2], #16
> + stp x22, x23, [x2], #16
> + stp x24, x25, [x2], #16
> + stp x26, x27, [x2], #16
> + stp x28, x29, [x2], #16
> + stp lr, x3, [x2], #16 // LR, SP_EL0
> +
> + mrs x4, elr_el2 // PC
> + mrs x5, spsr_el2 // CPSR
> + stp x4, x5, [x2], #16
> +
> + pop x6, x7 // x2, x3
> + pop x4, x5 // x0, x1
> +
> + add x2, x0, #REG_OFFSET(0)
> + stp x4, x5, [x2], #16
> + stp x6, x7, [x2], #16
> +
> + // EL1 state
> + mrs x4, sp_el1
> + mrs x5, elr_el1
> + mrs x6, spsr_el1
> + str x4, [x0, #VCPU_SP_EL1]
> + str x5, [x0, #VCPU_ELR_EL1]
> + str x6, [x0, #SPSR_OFFSET(KVM_SPSR_EL1)]
> +.endm
There are two relatively easily reconciled differences in my mind that tend to
obscure the similarity between these pieces of code. The first is the use of
push and pop macros standing in for the underlying stp and ldp instructions
and the second is the order in which the registers are stored. I may be
missing something, but my impression is that the order doesn't really matter,
as long as there is universal agreement on what the order will be.
It seems to me then that the fundamental differences are the base address of
the load and store operations and which registers have already been saved by
other code.
What if the base address for the loads and stores, sp versus x2, was made a
macro argument? If it's not straightforward to make the direction of the guest
and host stores the same then the increment value or its sign could also be
made an argument. Alternatively, you could consider storing the host registers
in a slimmed-down vcpu structure for hosts, rather than on the stack.
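A rough, untested sketch of that parameterization (the macro name is hypothetical; REG_OFFSET comes from the quoted patch), passing the base register as a macro argument so host and guest could share one store sequence:

```asm
.macro save_common_regs base
	stp	x19, x20, [\base], #16
	stp	x21, x22, [\base], #16
	stp	x23, x24, [\base], #16
	stp	x25, x26, [\base], #16
	stp	x27, x28, [\base], #16
	stp	x29, lr, [\base], #16
.endm

	// Guest path: base points into the vcpu register file
	add	x2, x0, #REG_OFFSET(19)
	save_common_regs x2
	// Host path: pass a host save area (or stack base) instead
```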
I need to study the call graph to better understand the asymmetry in which
registers are already saved off by the time we get here. I wonder if there's
more opportunity to unify code there. Short of that perhaps-more-ideal
solution, one could still share the code for GPRs 19-29 and the system
registers, but have the guest version save off GPRs 4-18 before going down
an at least source-level shared path.
[...]
> +/*
> + * Macros to perform system register save/restore.
> + *
> + * Ordering here is absolutely critical, and must be kept consistent
> + * in dump_sysregs, load_sysregs, {save,restore}_guest_sysregs,
> + * {save,restore}_guest_32bit_state, and in kvm_asm.h.
> + *
> + * In other words, don't touch any of these unless you know what
> + * you are doing.
> + */
> +.macro dump_sysregs
> + mrs x4, mpidr_el1
Maybe this should be taken out of the shared code and put in save_host_sysregs
if it only applies to hosts? Also, is the use of mpidr_el1 here and vmpidr_el2
in load_sysregs intentional? If so it might be nice to add a note about that
to your existing comment.
> + mrs x5, csselr_el1
> + mrs x6, sctlr_el1
> + mrs x7, actlr_el1
> + mrs x8, cpacr_el1
> + mrs x9, ttbr0_el1
> + mrs x10, ttbr1_el1
> + mrs x11, tcr_el1
> + mrs x12, esr_el1
> + mrs x13, afsr0_el1
> + mrs x14, afsr1_el1
> + mrs x15, far_el1
> + mrs x16, mair_el1
> + mrs x17, vbar_el1
> + mrs x18, contextidr_el1
> + mrs x19, tpidr_el0
> + mrs x20, tpidrro_el0
> + mrs x21, tpidr_el1
> + mrs x22, amair_el1
> + mrs x23, cntkctl_el1
> +.endm
[...]
> +.macro save_guest_sysregs
> + dump_sysregs
> + add x2, x0, #SYSREG_OFFSET(CSSELR_EL1) // MIPDR_EL2 not written back
MPIDR_EL1
[...]
Regards,
Christopher
--
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by
the Linux Foundation
* Re: [PATCH 14/29] arm64: KVM: guest one-reg interface
2013-03-13 7:48 ` Marc Zyngier
@ 2013-03-13 20:34 ` Christopher Covington
-1 siblings, 0 replies; 128+ messages in thread
From: Christopher Covington @ 2013-03-13 20:34 UTC (permalink / raw)
To: Marc Zyngier; +Cc: Catalin Marinas, linux-arm-kernel, kvm, kvmarm
Hi Marc,
On 03/13/2013 03:48 AM, Marc Zyngier wrote:
> On 12/03/13 22:07, Christopher Covington wrote:
>
> Hi Christopher,
>
>> On 03/12/2013 02:05 PM, Marc Zyngier wrote:
>>> On 12/03/13 17:31, Christopher Covington wrote:
>>>> Hi Marc,
>>>>
>>>> On 03/04/2013 10:47 PM, Marc Zyngier wrote:
>>>>> Let userspace play with the guest registers.
>>>>>
>>>>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>>>>> ---
>>>>> arch/arm64/kvm/guest.c | 240 +++++++++++++++++++++++++++++++++++++++++++++++++
>>>>> 1 file changed, 240 insertions(+)
>>>>> create mode 100644 arch/arm64/kvm/guest.c
>>>>
>>>> [...]
>>>>
>>>>> +int __attribute_const__ kvm_target_cpu(void)
>>>>> +{
>>>>> + unsigned long implementor = read_cpuid_implementor();
>>>>> + unsigned long part_number = read_cpuid_part_number();
>>>>> +
>>>>> + if (implementor != ARM_CPU_IMP_ARM)
>>>>> + return -EINVAL;
>>>>> +
>>>>> + switch (part_number) {
>>>>> + case ARM_CPU_PART_AEM_V8:
>>>>> + case ARM_CPU_PART_FOUNDATION:
>>>>> + /* Treat the models just as an A57 for the time being */
>>>>> + case ARM_CPU_PART_CORTEX_A57:
>>>>> + return KVM_ARM_TARGET_CORTEX_A57;
>>>>> + default:
>>>>> + return -EINVAL;
>>>>> + }
>>>>> +}
>>>>
>>>> What is the motivation behind these checks? Why not let any ARMv8 system that
>>>> has EL2 host a virtualized Cortex A57 guest?
>>>
>>> The main reason is errata management. How do you deal with errata in the
>>> guest when you hide the underlying host CPU? I don't have an answer to
>>> that. So for the time being, we only allow the guest to see the same CPU
>>> as the host, and require that new CPUs are added to this function.
>>
>> Can you please elaborate on how this code ensures the guest is seeing the same
>> CPU as the host? It looks rather unlike VPIDR = MIDR.
>
> I was merely elaborating on the "why". For the how:
> - vmidr_el2 is set in arch/arm64/kernel/head.S and never changed,
> ensuring the guest sees the same thing as the kernel.
> - Some additional code in guest.c ensures that both the host and the CPU
> requested by userspace for the guest are the same
> - KVM_ARM_TARGET_CORTEX_A57 is used in sys_regs_a57.c to register the
> sys_reg/cp15 handlers.
As I believe your response indicates, the code cited above in this email isn't
ensuring that the host is the same as the guest. That's done elsewhere. The
code here is doing something different, something that seems to me is going to
make building upon the ARM64 KVM infrastructure more of a hassle than it
should be.
My guess at the goal of the code cited above in this email is that it's trying
to sanity check that virtualization will work. Rather than taking a default
deny approach with a hand-maintained white list of virtualization-supporting
machine identifiers, why not check that EL2 is implemented on the current
system and if it's not implied by that, that the timer and interrupt
controller are suitable as well?
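As an illustration of that alternative (a hedged sketch, not actual KVM code): on ARMv8, EL2 support is advertised in the EL2 field, bits [11:8], of ID_AA64PFR0_EL1, so a default-allow probe might decode it like this. The register read itself (mrs) is omitted:

```c
#include <stdbool.h>
#include <stdint.h>

/* Decode the EL2 field (bits [11:8]) of an ID_AA64PFR0_EL1 value.
 * A nonzero field means EL2 is implemented; 0b0000 means it is not. */
static bool el2_implemented(uint64_t id_aa64pfr0)
{
	return ((id_aa64pfr0 >> 8) & 0xf) != 0;
}
```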
Thanks,
Christopher
--
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by
the Linux Foundation
* Re: [kvmarm] [PATCH 14/29] arm64: KVM: guest one-reg interface
2013-03-13 20:34 ` Christopher Covington
@ 2013-03-14 8:57 ` Peter Maydell
-1 siblings, 0 replies; 128+ messages in thread
From: Peter Maydell @ 2013-03-14 8:57 UTC (permalink / raw)
To: Christopher Covington
Cc: Marc Zyngier, Catalin Marinas, kvm, linux-arm-kernel, kvmarm
On 13 March 2013 20:34, Christopher Covington <cov@codeaurora.org> wrote:
> My guess at the goal of the code cited above in this email is that it's trying
> to sanity check that virtualization will work. Rather than taking a default
> deny approach with a hand-maintained white list of virtualization-supporting
> machine identifiers, why not check that EL2 is implemented on the current
> system and if it's not implied by that, that the timer and interrupt
> controller are suitable as well?
I think the question of "how much do we need to virtualize to allow
us to expose a different CPU to the guest than the host" is not yet
one that's been answered; so for the moment we support only
"guest == host == A57" [or == A15 on armv7]. When somebody has added
support for a second type of host/guest CPU then I think the process
of going through that work will make it much clearer how much
'new cpu' support is needed and what can be coded to work with any
virt-capable cpu. Until then I think it's safer to simply lock out
the unsupported combinations. That way anybody trying to run KVM on
a different CPU will be able to see that they have work to do and it's
not expected to work out of the box just yet.
We don't support other guests than A57 currently because you need to
implement emulation code for the imp-def registers for a guest CPU.
Again, I suspect the process of adding support for a 2nd guest CPU
will make it obvious what can be done generically and what really
does need to be per-CPU.
One thing I'm a bit worried about is the possibility that we accidentally
by-default allow the guest access to some new sysreg that didn't
exist on the A57 [or A15 for 32 bit] that turns out to be a security
hole (since both guest and host kernel run at EL1). But I think
trapping the whole of the impdef sysreg space should cover that.
-- PMM
* Re: [PATCH 10/29] arm64: KVM: Cortex-A57 specific system registers handling
2013-03-13 18:30 ` Christopher Covington
@ 2013-03-14 10:26 ` Marc Zyngier
-1 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-14 10:26 UTC (permalink / raw)
To: Christopher Covington; +Cc: linux-arm-kernel, kvm, kvmarm, Catalin Marinas
On 13/03/13 18:30, Christopher Covington wrote:
Hi Christopher,
> I wonder if two of these registers could be handled in a generic fashion.
[...]
> What's A57-specific about this MPIDR behavior?
[...]
> What's A57-specific about this CPACR behavior?
In both cases, nothing I can think of. They both should be moved to the
generic sys_reg code.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
* Re: [PATCH 23/29] arm64: KVM: 32bit GP register access
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-16 0:24 ` Geoff Levand
-1 siblings, 0 replies; 128+ messages in thread
From: Geoff Levand @ 2013-03-16 0:24 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm, catalin.marinas
Hi Marc,
On Tue, 2013-03-05 at 03:47 +0000, Marc Zyngier wrote:
> diff --git a/arch/arm64/kvm/regmap.c b/arch/arm64/kvm/regmap.c
> new file mode 100644
> index 0000000..f8d4a0c
> --- /dev/null
> +++ b/arch/arm64/kvm/regmap.c
...
> + switch (mode) {
> + case COMPAT_PSR_MODE_USR...COMPAT_PSR_MODE_SVC:
I think it would be safer to have this with spaces in case someone
changes the macro defs or copies this to make some new code and screws
up their defs:
case COMPAT_PSR_MODE_USR ... COMPAT_PSR_MODE_SVC:
See: http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Case-Ranges.html
-Geoff
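Geoff's point can be demonstrated standalone. Per the GCC documentation, the spaces around ... matter because an integer literal followed by dots can be lexed as a single invalid pp-number (e.g. "1...5"), so a later change from macros to literals would break the spaceless form. Names below are illustrative only:

```c
#define MODE_LO 1
#define MODE_HI 5

/* With spaces around "...", the case range stays unambiguous even if
 * the macros are later replaced by plain integer literals. Writing
 * "case 1...5:" without spaces fails to tokenize correctly. */
static int in_mode_range(int mode)
{
	switch (mode) {
	case MODE_LO ... MODE_HI:	/* GCC case-range extension */
		return 1;
	default:
		return 0;
	}
}
```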
* Re: [PATCH 05/29] arm64: KVM: Basic ESR_EL2 helpers and vcpu register access
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-16 0:55 ` Geoff Levand
0 siblings, 0 replies; 128+ messages in thread
From: Geoff Levand @ 2013-03-16 0:55 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm, catalin.marinas
Hi Marc,
On Tue, 2013-03-05 at 03:47 +0000, Marc Zyngier wrote:
> --- /dev/null
> +++ b/arch/arm64/include/asm/kvm_emulate.h
...
> +static inline bool vcpu_mode_is_32bit(struct kvm_vcpu *vcpu)
> +{
> + return false; /* 32bit? Bahhh... */
> +}
> +
> +static inline bool kvm_condition_valid(struct kvm_vcpu *vcpu)
> +{
> + return true; /* No conditionals on arm64 */
> +}
Does it make sense to have these routines take a const object?
static inline bool vcpu_mode_is_32bit(struct kvm_vcpu const *vcpu)
-Geoff
^ permalink raw reply [flat|nested] 128+ messages in thread
* Re: [PATCH 06/29] arm64: KVM: fault injection into a guest
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-16 1:03 ` Geoff Levand
0 siblings, 0 replies; 128+ messages in thread
From: Geoff Levand @ 2013-03-16 1:03 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm, catalin.marinas
Hi Marc,
On Tue, 2013-03-05 at 03:47 +0000, Marc Zyngier wrote:
> --- /dev/null
> +++ b/arch/arm64/kvm/inject_fault.c
> @@ -0,0 +1,117 @@
...
> + * kvm_inject_undefined - inject a undefined instruction into the guest
s/ a undefined/ an undefined/
-Geoff
^ permalink raw reply [flat|nested] 128+ messages in thread
* Re: [PATCH 22/29] arm64: KVM: define 32bit specific registers
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-18 17:03 ` Christopher Covington
0 siblings, 0 replies; 128+ messages in thread
From: Christopher Covington @ 2013-03-18 17:03 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm, catalin.marinas
Hi Marc,
On 03/04/2013 10:47 PM, Marc Zyngier wrote:
> Define the 32bit specific registers (SPSRs, cp15...).
>
> Most CPU registers are directly mapped to a 64bit register
> (r0->x0...). Only the SPSRs have separate registers.
>
> cp15 registers are also mapped into their 64bit counterpart in most
> cases.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm64/include/asm/kvm_asm.h | 38 +++++++++++++++++++++++++++++++++++++-
[...]
> +/* 32bit mapping */
> +#define c0_MPIDR (MPIDR_EL1 * 2) /* MultiProcessor ID Register */
> +#define c0_CSSELR (CSSELR_EL1 * 2)/* Cache Size Selection Register */
> +#define c1_SCTLR (SCTLR_EL1 * 2) /* System Control Register */
> +#define c1_ACTLR (ACTLR_EL1 * 2) /* Auxilliary Control Register */
Auxiliary
> +#define c1_CPACR (CPACR_EL1 * 2) /* Coprocessor Access Control */
> +#define c2_TTBR0 (TTBR0_EL1 * 2) /* Translation Table Base Register 0 */
> +#define c2_TTBR0_high (c2_TTBR0 + 1) /* TTBR0 top 32 bits */
> +#define c2_TTBR1 (TTBR1_EL1 * 2) /* Translation Table Base Register 1 */
> +#define c2_TTBR1_high (c2_TTBR1 + 1) /* TTBR1 top 32 bits */
> +#define c2_TTBCR (TCR_EL1 * 2) /* Translation Table Base Control R. */
> +#define c3_DACR (DACR32_EL2 * 2)/* Domain Access Control Register */
> +#define c5_DFSR (ESR_EL1 * 2) /* Data Fault Status Register */
> +#define c5_IFSR (IFSR32_EL2 * 2)/* Instruction Fault Status Register */
> +#define c5_ADFSR (AFSR0_EL1 * 2) /* Auxilary Data Fault Status R */
> +#define c5_AIFSR (AFSR1_EL1 * 2) /* Auxilary Instr Fault Status R */
Auxiliary
> +#define c6_DFAR (FAR_EL1 * 2) /* Data Fault Address Register */
> +#define c6_IFAR (c6_DFAR + 1) /* Instruction Fault Address Register */
> +#define c10_PRRR (MAIR_EL1 * 2) /* Primary Region Remap Register */
> +#define c10_NMRR (c10_PRRR + 1) /* Normal Memory Remap Register */
> +#define c12_VBAR (VBAR_EL1 * 2) /* Vector Base Address Register */
> +#define c13_CID (CONTEXTIDR_EL1 * 2) /* Context ID Register */
> +#define c13_TID_URW (TPIDR_EL0 * 2) /* Thread ID, User R/W */
> +#define c13_TID_URO (TPIDRRO_EL0 * 2)/* Thread ID, User R/O */
> +#define c13_TID_PRIV (TPIDR_EL1 * 2) /* Thread ID, Priveleged */
Privileged
> +#define c10_AMAIR (AMAIR_EL1 * 2) /* Aux Memory Attr Indirection Reg */
> +#define c14_CNTKCTL (CNTKCTL_EL1 * 2) /* Timer Control Register (PL1) */
> +#define NR_CP15_REGS (NR_SYS_REGS * 2)
[...]
Christopher
--
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by
the Linux Foundation
^ permalink raw reply [flat|nested] 128+ messages in thread
* Re: [PATCH 24/29] arm64: KVM: 32bit conditional execution emulation
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-18 17:04 ` Christopher Covington
0 siblings, 0 replies; 128+ messages in thread
From: Christopher Covington @ 2013-03-18 17:04 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm, catalin.marinas
Hi Marc,
On 03/04/2013 10:47 PM, Marc Zyngier wrote:
> As conditionnal instructions can trap on AArch32, add the thinest
conditional
> possible emulation layer to keep 32bit guests happy.
[...]
> diff --git a/arch/arm64/kvm/emulate.c b/arch/arm64/kvm/emulate.c
> new file mode 100644
> index 0000000..6b3dbc3
> --- /dev/null
> +++ b/arch/arm64/kvm/emulate.c
> @@ -0,0 +1,154 @@
> +/*
> + * (not much of an) Emulation layer for 32bit guests.
> + *
> + * Copyright (C) 2012 - Virtual Open Systems and Columbia University
> + * Author: Christoffer Dall <c.dall@virtualopensystems.com>
> + *
> + * This program is free software: you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program. If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <linux/kvm_host.h>
> +#include <asm/kvm_emulate.h>
> +
> +/*
> + * stolen from arch/arm/kernel/opcodes.c
> + *
> + * condition code lookup table
> + * index into the table is test code: EQ, NE, ... LT, GT, AL, NV
> + *
> + * bit position in short is condition code: NZCV
> + */
> +static const unsigned short cc_map[16] = {
> + 0xF0F0, /* EQ == Z set */
> + 0x0F0F, /* NE */
> + 0xCCCC, /* CS == C set */
> + 0x3333, /* CC */
> + 0xFF00, /* MI == N set */
> + 0x00FF, /* PL */
> + 0xAAAA, /* VS == V set */
> + 0x5555, /* VC */
> + 0x0C0C, /* HI == C set && Z clear */
> + 0xF3F3, /* LS == C clear || Z set */
> + 0xAA55, /* GE == (N==V) */
> + 0x55AA, /* LT == (N!=V) */
> + 0x0A05, /* GT == (!Z && (N==V)) */
> + 0xF5FA, /* LE == (Z || (N!=V)) */
> + 0xFFFF, /* AL always */
> + 0 /* NV */
> +};
> +
> +static int kvm_vcpu_get_condition(struct kvm_vcpu *vcpu)
> +{
> + u32 esr = kvm_vcpu_get_hsr(vcpu);
> +
> + if (esr & ESR_EL2_CV)
> + return (esr & ESR_EL2_COND) >> ESR_EL2_COND_SHIFT;
> +
> + return -1;
> +}
> +
> +/*
> + * Check if a trapped instruction should have been executed or not.
> + */
> +bool kvm_condition_valid32(struct kvm_vcpu *vcpu)
> +{
> + unsigned long cpsr;
> + u32 cpsr_cond;
> + int cond;
> +
> + /* Top two bits non-zero? Unconditional. */
> + if (kvm_vcpu_get_hsr(vcpu) >> 30)
> + return true;
> +
> + /* Is condition field valid? */
> + cond = kvm_vcpu_get_condition(vcpu);
> + if (cond == 0xE)
> + return true;
> +
> + cpsr = *vcpu_cpsr(vcpu);
> +
> + if (cond < 0) {
> + /* This can happen in Thumb mode: examine IT state. */
> + unsigned long it;
> +
> + it = ((cpsr >> 8) & 0xFC) | ((cpsr >> 25) & 0x3);
> +
> + /* it == 0 => unconditional. */
> + if (it == 0)
> + return true;
> +
> + /* The cond for this insn works out as the top 4 bits. */
> + cond = (it >> 4);
> + }
> +
> + cpsr_cond = cpsr >> 28;
> +
> + if (!((cc_map[cond] >> cpsr_cond) & 1))
> + return false;
> +
> + return true;
> +}
> +
> +/**
> + * adjust_itstate - adjust ITSTATE when emulating instructions in IT-block
> + * @vcpu: The VCPU pointer
> + *
> + * When exceptions occur while instructions are executed in Thumb IF-THEN
> + * blocks, the ITSTATE field of the CPSR is not advanved (updated), so we have
> + * to do this little bit of work manually. The fields map like this:
> + *
> + * IT[7:0] -> CPSR[26:25],CPSR[15:10]
> + */
> +static void kvm_adjust_itstate(struct kvm_vcpu *vcpu)
> +{
> + unsigned long itbits, cond;
> + unsigned long cpsr = *vcpu_cpsr(vcpu);
> + bool is_arm = !(cpsr & COMPAT_PSR_T_BIT);
> +
> + BUG_ON(is_arm && (cpsr & COMPAT_PSR_IT_MASK));
> +
> + if (!(cpsr & COMPAT_PSR_IT_MASK))
> + return;
> +
> + cond = (cpsr & 0xe000) >> 13;
> + itbits = (cpsr & 0x1c00) >> (10 - 2);
> + itbits |= (cpsr & (0x3 << 25)) >> 25;
> +
> + /* Perform ITAdvance (see page A-52 in ARM DDI 0406C) */
> + if ((itbits & 0x7) == 0)
> + itbits = cond = 0;
> + else
> + itbits = (itbits << 1) & 0x1f;
> +
> + cpsr &= ~COMPAT_PSR_IT_MASK;
> + cpsr |= cond << 13;
> + cpsr |= (itbits & 0x1c) << (10 - 2);
> + cpsr |= (itbits & 0x3) << 25;
> + *vcpu_cpsr(vcpu) = cpsr;
> +}
Maybe I'm spoiled by the breadth of the 64-bit definitions, but I wonder if
fewer magic numbers in the 32-bit emulation might make it an easier read.
Christopher
--
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by
the Linux Foundation
^ permalink raw reply [flat|nested] 128+ messages in thread
* Re: [PATCH 28/29] arm64: KVM: 32bit guest fault injection
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-18 18:45 ` Christopher Covington
0 siblings, 0 replies; 128+ messages in thread
From: Christopher Covington @ 2013-03-18 18:45 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm, catalin.marinas
Hi Marc,
Here are a few more preprocessor definition suggestions.
On 03/04/2013 10:47 PM, Marc Zyngier wrote:
> Add fault injection capability for 32bit guests.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm64/kvm/inject_fault.c | 79 ++++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 78 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
> index 80b245f..85a4548 100644
> --- a/arch/arm64/kvm/inject_fault.c
> +++ b/arch/arm64/kvm/inject_fault.c
> @@ -1,5 +1,5 @@
> /*
> - * Fault injection for 64bit guests.
> + * Fault injection for both 32 and 64bit guests.
> *
> * Copyright (C) 2012 - ARM Ltd
> * Author: Marc Zyngier <marc.zyngier@arm.com>
> @@ -24,6 +24,74 @@
> #include <linux/kvm_host.h>
> #include <asm/kvm_emulate.h>
>
> +static void prepare_fault32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset)
> +{
> + unsigned long cpsr;
> + unsigned long new_spsr_value = *vcpu_cpsr(vcpu);
> + bool is_thumb = (new_spsr_value & COMPAT_PSR_T_BIT);
> + u32 return_offset = (is_thumb) ? 4 : 0;
> + u32 sctlr = vcpu->arch.cp15[c1_SCTLR];
> +
> + cpsr = mode | COMPAT_PSR_I_BIT;
> +
> + if (sctlr & (1 << 30))
COMPAT_SCTLR_TE?
> + cpsr |= COMPAT_PSR_T_BIT;
> + if (sctlr & (1 << 25))
SCTLR_EL2_EE
> + cpsr |= COMPAT_PSR_E_BIT;
> +
> + *vcpu_cpsr(vcpu) = cpsr;
> +
> + /* Note: These now point to the banked copies */
> + *vcpu_spsr(vcpu) = new_spsr_value;
> + *vcpu_reg(vcpu, 14) = *vcpu_pc(vcpu) + return_offset;
> +
> + /* Branch to exception vector */
> + if (sctlr & (1 << 13))
COMPAT_SCTLR_V?
> + vect_offset += 0xffff0000;
> + else /* always have security exceptions */
> + vect_offset += vcpu->arch.cp15[c12_VBAR];
> +
> + *vcpu_pc(vcpu) = vect_offset;
> +}
> +
> +static void inject_undef32(struct kvm_vcpu *vcpu)
> +{
> + prepare_fault32(vcpu, COMPAT_PSR_MODE_UND, 4);
> +}
> +
> +/*
> + * Modelled after TakeDataAbortException() and TakePrefetchAbortException
> + * pseudocode.
> + */
> +static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt,
> + unsigned long addr)
> +{
> + u32 vect_offset;
> + u32 *far, *fsr;
> + bool is_lpae;
> +
> + if (is_pabt) {
> + vect_offset = 12;
> + far = &vcpu->arch.cp15[c6_IFAR];
> + fsr = &vcpu->arch.cp15[c5_IFSR];
> + } else { /* !iabt */
> + vect_offset = 16;
> + far = &vcpu->arch.cp15[c6_DFAR];
> + fsr = &vcpu->arch.cp15[c5_DFSR];
> + }
> +
> + prepare_fault32(vcpu, COMPAT_PSR_MODE_ABT | COMPAT_PSR_A_BIT, vect_offset);
> +
> + *far = addr;
> +
> + /* Always give debug fault for now - should give guest a clue */
> + is_lpae = (vcpu->arch.cp15[c2_TTBCR] >> 31);
COMPAT_TTBCR_EAE?
> + if (is_lpae)
> + *fsr = 1 << 9 | 0x22;
COMPAT_FSR_LPAE, COMPAT_FSR_LD_FS_DBG?
> + else
> + *fsr = 2;
COMPAT_FSR_SD_FS_DBG?
[...]
Christopher
--
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by
the Linux Foundation
^ permalink raw reply [flat|nested] 128+ messages in thread
* Re: [PATCH 29/29] arm64: KVM: enable initialization of a 32bit vcpu
2013-03-05 3:47 ` Marc Zyngier
@ 2013-03-18 18:56 ` Christopher Covington
0 siblings, 0 replies; 128+ messages in thread
From: Christopher Covington @ 2013-03-18 18:56 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm, catalin.marinas
On 03/04/2013 10:47 PM, Marc Zyngier wrote:
> Wire the init of a 32bit vcpu by allowing 32bit modes in pstate,
> and providing sensible defaults out of reset state.
>
> This feature is of course conditionned by the presence of 32bit
conditioned
> capability on the physical CPU.
[...]
--
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by
the Linux Foundation
^ permalink raw reply [flat|nested] 128+ messages in thread
* Re: [PATCH 16/29] arm64: KVM: HYP mode world switch implementation
2013-03-13 19:59 ` Christopher Covington
@ 2013-03-20 20:04 ` Christopher Covington
0 siblings, 0 replies; 128+ messages in thread
From: Christopher Covington @ 2013-03-20 20:04 UTC (permalink / raw)
To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm, catalin.marinas
Hi Marc,
On 03/13/2013 03:59 PM, Christopher Covington wrote:
[...]
> Alternatively, you could consider storing the host registers in a
> slimmed-down vcpu structure for hosts, rather than on the stack.
One potential argument for storing the host in the same sort of vcpu structure
as the guest rather than on the hypervisor stack is that snapshot and
migration support initially intended for guests might more easily be extended
to work for hosts as well.
Regards,
Christopher
--
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by
the Linux Foundation
^ permalink raw reply [flat|nested] 128+ messages in thread
* Re: [kvmarm] [PATCH 14/29] arm64: KVM: guest one-reg interface
2013-03-14 8:57 ` Peter Maydell
@ 2013-03-20 20:06 ` Christopher Covington
0 siblings, 0 replies; 128+ messages in thread
From: Christopher Covington @ 2013-03-20 20:06 UTC (permalink / raw)
To: Peter Maydell, Marc Zyngier
Cc: Catalin Marinas, kvm, linux-arm-kernel, kvmarm
Hi Marc, Peter,
On 03/14/2013 04:57 AM, Peter Maydell wrote:
> On 13 March 2013 20:34, Christopher Covington <cov@codeaurora.org> wrote:
>> My guess at the goal of the code cited above in this email is that it's trying
>> to sanity check that virtualization will work. Rather than taking a default
>> deny approach with a hand-maintained white list of virtualization-supporting
>> machine identifiers, why not check that EL2 is implemented on the current
>> system and if it's not implied by that, that the timer and interrupt
>> controller are suitable as well?
[...]
> ...you need to implement emulation code for the imp-def registers for a
> guest CPU.
[...]
This is reasonable. In this light the code I was picking out above is simply
converting MIDRs to KVM_ARM_TARGET_* constants. Because the mapping isn't
one-to-one, the hand-maintained list is an acceptable approach.
In the long term, I wonder if some kind of KVM_TARGET_CURRENT_CPU might be handy.
Thanks,
Christopher
--
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by
the Linux Foundation
^ permalink raw reply [flat|nested] 128+ messages in thread
* Re: [PATCH 16/29] arm64: KVM: HYP mode world switch implementation
2013-03-20 20:04 ` Christopher Covington
@ 2013-03-21 11:54 ` Marc Zyngier
0 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-21 11:54 UTC (permalink / raw)
To: Christopher Covington; +Cc: linux-arm-kernel, kvm, kvmarm, Catalin Marinas
On 20/03/13 20:04, Christopher Covington wrote:
> Hi Marc,
>
> On 03/13/2013 03:59 PM, Christopher Covington wrote:
>
> [...]
>
>> Alternatively, you could consider storing the host registers in a
>> slimmed-down vcpu structure for hosts, rather than on the stack.
I am actively implementing this (I'm turning the vfp_host pointer into a
full blown CPU context). It looks promising so far, stay tuned.
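A widened host save area of the kind described might look something like this. All field names and sizes here are illustrative guesses, not the layout that eventually landed in the kernel:

```c
#include <stdint.h>

/* Hypothetical sketch of turning the per-vcpu vfp_host pointer into a
 * full CPU context shared by host and guest state. Sizes follow the
 * AArch64 register file; the real structure may differ. */
struct gp_regs_sketch {
	uint64_t regs[31];	/* x0..x30 */
	uint64_t sp;
	uint64_t pc;
	uint64_t pstate;
};

struct cpu_context_sketch {
	struct gp_regs_sketch gp;
	uint64_t sys_regs[32];	/* saved system-register subset */
	uint64_t fp_regs[64];	/* Q0..Q31 as 2x64-bit halves */
	uint32_t fpsr, fpcr;
};
```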
> One potential argument for storing the host in the same sort of vcpu structure
> as the guest rather than on the hypervisor stack is that snapshot and
> migration support initially intended for guests might more easily be extended
> to work for hosts as well.
Not sure I'm following you here. Are you thinking of snapshotting both
host and guests, and migrating the whole thing? Ambitious... ;-)
M.
--
Jazz is not dead. It just smells funny...
* Re: [kvmarm] [PATCH 09/29] arm64: KVM: system register handling
2013-03-07 10:30 ` Alexander Graf
@ 2013-03-25 8:19 ` Marc Zyngier
-1 siblings, 0 replies; 128+ messages in thread
From: Marc Zyngier @ 2013-03-25 8:19 UTC (permalink / raw)
To: Alexander Graf; +Cc: catalin.marinas, kvm, linux-arm-kernel, kvmarm
Hi Alex,
On Thu, 7 Mar 2013 11:30:20 +0100, Alexander Graf <agraf@suse.de> wrote:
> On 05.03.2013, at 04:47, Marc Zyngier wrote:
>
>> Provide 64bit system register handling, modeled after the cp15
>> handling for ARM.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
[...]
>> +static int emulate_sys_reg(struct kvm_vcpu *vcpu,
>> + const struct sys_reg_params *params)
>> +{
>> + size_t num;
>> + const struct sys_reg_desc *table, *r;
>> +
>> + table = get_target_table(vcpu->arch.target, &num);
>> +
>> + /* Search target-specific then generic table. */
>> + r = find_reg(params, table, num);
>> + if (!r)
>> + r = find_reg(params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
>
> Searching through the whole list sounds quite slow. Especially since the
> TLS register is at the very bottom of it.
>
> Can't you make this a simple switch() statement through a bit of #define
> and maybe #include magic? After all, the sysreg target encoding is all part
> of the opcode. And from my experience in the PPC instruction emulator,
> switch()es are _a lot_ faster than any other way of lookup I've tried.
So I've had a go at implementing this, and decided it is not worth the
effort if we want to preserve the same level of functionality (ONE_REG
discovery, sanity checking at VM startup...).
Granted, we would gain a faster trap handling. But look at what we're
actually trapping, and how often this happens. Almost nothing, almost
never. So, until shown that we spend too much time iterating over the
sys_reg_desc array, I'll keep it simple.
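For reference, the linear lookup being defended here boils down to something like the following trimmed-down mock-up; the kernel's real sys_reg_desc also carries access and reset callbacks alongside the encoding fields:

```c
#include <stddef.h>

/* Simplified mock-up of the table search behind emulate_sys_reg().
 * Only the sysreg encoding fields are kept. */
struct sys_reg_params { unsigned Op0, Op1, CRn, CRm, Op2; };
struct sys_reg_desc   { unsigned Op0, Op1, CRn, CRm, Op2; };

static const struct sys_reg_desc *
find_reg(const struct sys_reg_params *p,
	 const struct sys_reg_desc *table, size_t num)
{
	size_t i;

	for (i = 0; i < num; i++) {	/* O(n), but traps are rare */
		const struct sys_reg_desc *r = &table[i];

		if (r->Op0 == p->Op0 && r->Op1 == p->Op1 &&
		    r->CRn == p->CRn && r->CRm == p->CRm &&
		    r->Op2 == p->Op2)
			return r;
	}
	return NULL;	/* caller falls back to the generic table */
}
```

The two-level pattern in the quoted patch then follows directly: search the target-specific table first, and only on a miss search the generic sys_reg_descs array.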
This is not to say that there's no optimization to be made. Quite the
opposite! Just that this particular one seems a bit overkill.
Anyway, thanks for pushing me into pondering this! :-)
M.
--
Fast, cheap, reliable. Pick two.
* Re: [kvmarm] [PATCH 09/29] arm64: KVM: system register handling
2013-03-25 8:19 ` Marc Zyngier
@ 2013-04-23 23:07 ` Christoffer Dall
-1 siblings, 0 replies; 128+ messages in thread
From: Christoffer Dall @ 2013-04-23 23:07 UTC (permalink / raw)
To: Marc Zyngier
Cc: Alexander Graf, Catalin Marinas, KVM General, linux-arm-kernel, kvmarm
On Mon, Mar 25, 2013 at 1:19 AM, Marc Zyngier <marc.zyngier@arm.com> wrote:
> Hi Alex,
>
> On Thu, 7 Mar 2013 11:30:20 +0100, Alexander Graf <agraf@suse.de> wrote:
>> On 05.03.2013, at 04:47, Marc Zyngier wrote:
>>
>>> Provide 64bit system register handling, modeled after the cp15
>>> handling for ARM.
>>>
>>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>>> ---
>
> [...]
>
>>> +static int emulate_sys_reg(struct kvm_vcpu *vcpu,
>>> + const struct sys_reg_params *params)
>>> +{
>>> + size_t num;
>>> + const struct sys_reg_desc *table, *r;
>>> +
>>> + table = get_target_table(vcpu->arch.target, &num);
>>> +
>>> + /* Search target-specific then generic table. */
>>> + r = find_reg(params, table, num);
>>> + if (!r)
>>> + r = find_reg(params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
>>
>> Searching through the whole list sounds quite slow. Especially since the
>> TLS register is at the very bottom of it.
>>
>> Can't you make this a simple switch() statement through a bit of #define
>> and maybe #include magic? After all, the sysreg target encoding is all
> part
>> of the opcode. And from my experience in the PPC instruction emulator,
>> switch()es are _a lot_ faster than any other way of lookup I've tried.
>
> So I've had a go at implementing this, and decided it is not worth the
> effort if we want to preserve the same level of functionality (ONE_REG
> discovery, sanity checking at VM startup...).
>
> Granted, we would gain a faster trap handling. But look at what we're
> actually trapping, and how often this happens. Almost nothing, almost
> never. So, until shown that we spend too much time iterating over the
> sys_reg_desc array, I'll keep it simple.
>
> This is not to say that there's no optimization to be made. Quite the
> opposite! Just that this particular one seems a bit overkill.
>
> Anyway, thanks for pushing me into pondering this! :-)
>
Totally late to the game here, but I saw you discussed this briefly on
IRC as well, and I completely agree with Marc: we have much bigger fish
to fry. In all the measurements I did on the 32-bit side, this doesn't
even begin to show up on the radar.
vgic man, vgic!
-Christoffer
end of thread, other threads:[~2013-04-23 23:07 UTC | newest]
Thread overview: 128+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-03-05 3:47 [PATCH 00/29] Port of KVM to arm64 Marc Zyngier
2013-03-05 3:47 ` [PATCH 01/29] arm64: KVM: define HYP and Stage-2 translation page flags Marc Zyngier
2013-03-05 3:47 ` [PATCH 02/29] arm64: KVM: HYP mode idmap support Marc Zyngier
2013-03-05 3:47 ` [PATCH 03/29] arm64: KVM: EL2 register definitions Marc Zyngier
2013-03-05 3:47 ` [PATCH 04/29] arm64: KVM: system register definitions for 64bit guests Marc Zyngier
2013-03-07 10:33 ` [kvmarm] " Alexander Graf
2013-03-08 3:23 ` Marc Zyngier
2013-03-12 13:20 ` Christopher Covington
2013-03-12 13:41 ` Christopher Covington
2013-03-12 13:50 ` Marc Zyngier
2013-03-05 3:47 ` [PATCH 05/29] arm64: KVM: Basic ESR_EL2 helpers and vcpu register access Marc Zyngier
2013-03-16 0:55 ` Geoff Levand
2013-03-05 3:47 ` [PATCH 06/29] arm64: KVM: fault injection into a guest Marc Zyngier
2013-03-12 13:20 ` Christopher Covington
2013-03-12 14:25 ` Marc Zyngier
2013-03-16 1:03 ` Geoff Levand
2013-03-05 3:47 ` [PATCH 07/29] arm64: KVM: architecture specific MMU backend Marc Zyngier
2013-03-05 3:47 ` [PATCH 08/29] arm64: KVM: user space interface Marc Zyngier
2013-03-07 8:09 ` Michael S. Tsirkin
2013-03-08 3:46 ` [kvmarm] " Marc Zyngier
2013-03-10 9:23 ` Michael S. Tsirkin
2013-03-05 3:47 ` [PATCH 09/29] arm64: KVM: system register handling Marc Zyngier
2013-03-07 10:30 ` [kvmarm] " Alexander Graf
2013-03-08 3:29 ` Marc Zyngier
2013-03-25 8:19 ` Marc Zyngier
2013-04-23 23:07 ` Christoffer Dall
2013-03-05 3:47 ` [PATCH 10/29] arm64: KVM: Cortex-A57 specific system registers handling Marc Zyngier
2013-03-13 18:30 ` Christopher Covington
2013-03-14 10:26 ` Marc Zyngier
2013-03-05 3:47 ` [PATCH 11/29] arm64: KVM: virtual CPU reset Marc Zyngier
2013-03-05 3:47 ` [PATCH 12/29] arm64: KVM: kvm_arch and kvm_vcpu_arch definitions Marc Zyngier
2013-03-12 17:30 ` Christopher Covington
2013-03-05 3:47 ` [PATCH 13/29] arm64: KVM: MMIO access backend Marc Zyngier
2013-03-05 3:47 ` [PATCH 14/29] arm64: KVM: guest one-reg interface Marc Zyngier
2013-03-12 17:31 ` Christopher Covington
2013-03-12 18:05 ` Marc Zyngier
2013-03-12 22:07 ` Christopher Covington
2013-03-13 7:48 ` Marc Zyngier
2013-03-13 20:34 ` Christopher Covington
2013-03-14 8:57 ` [kvmarm] " Peter Maydell
2013-03-20 20:06 ` Christopher Covington
2013-03-05 3:47 ` [PATCH 15/29] arm64: KVM: hypervisor initialization code Marc Zyngier
2013-03-05 3:47 ` [PATCH 16/29] arm64: KVM: HYP mode world switch implementation Marc Zyngier
2013-03-13 19:59 ` Christopher Covington
2013-03-20 20:04 ` Christopher Covington
2013-03-21 11:54 ` Marc Zyngier
2013-03-05 3:47 ` [PATCH 17/29] arm64: KVM: Exit handling Marc Zyngier
2013-03-05 3:47 ` [PATCH 18/29] arm64: KVM: Plug the VGIC Marc Zyngier
2013-03-05 3:47 ` [PATCH 19/29] arm64: KVM: Plug the arch timer Marc Zyngier
2013-03-05 3:47 ` [PATCH 20/29] arm64: KVM: PSCI implementation Marc Zyngier
2013-03-05 3:47 ` [PATCH 21/29] arm64: KVM: Build system integration Marc Zyngier
2013-03-05 3:47 ` [PATCH 22/29] arm64: KVM: define 32bit specific registers Marc Zyngier
2013-03-18 17:03 ` Christopher Covington
2013-03-05 3:47 ` [PATCH 23/29] arm64: KVM: 32bit GP register access Marc Zyngier
2013-03-16 0:24 ` Geoff Levand
2013-03-05 3:47 ` [PATCH 24/29] arm64: KVM: 32bit conditional execution emulation Marc Zyngier
2013-03-18 17:04 ` Christopher Covington
2013-03-05 3:47 ` [PATCH 25/29] arm64: KVM: 32bit handling of coprocessor traps Marc Zyngier
2013-03-05 3:47 ` [PATCH 26/29] arm64: KVM: 32bit coprocessor access for Cortex-A57 Marc Zyngier
2013-03-05 3:47 ` [PATCH 27/29] arm64: KVM: 32bit specific register world switch Marc Zyngier
2013-03-05 3:47 ` [PATCH 28/29] arm64: KVM: 32bit guest fault injection Marc Zyngier
2013-03-18 18:45 ` Christopher Covington
2013-03-05 3:47 ` [PATCH 29/29] arm64: KVM: enable initialization of a 32bit vcpu Marc Zyngier
2013-03-18 18:56 ` Christopher Covington