* [PATCH v9 00/11] hvf: Implement Apple Silicon Support
@ 2021-09-12 23:07 Alexander Graf
  2021-09-12 23:07 ` [PATCH v9 01/11] arm: Move PMC register definitions to cpu.h Alexander Graf
                   ` (10 more replies)
  0 siblings, 11 replies; 25+ messages in thread
From: Alexander Graf @ 2021-09-12 23:07 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Sergio Lopez,
	Peter Collingbourne, Richard Henderson, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

Now that Apple Silicon is widely available, people are understandably
excited to run virtualized workloads on it, such as Linux and Windows.

This patch set implements a fully functional Hypervisor.framework
backend to get the ball rolling. With it applied, I can successfully
run both Linux and Windows as guests. I am not aware of any limitations
specific to Hypervisor.framework apart from:

  - gdbstub debugging (breakpoints)
  - missing GICv3 support

To use hvf support, please make sure to run with -M virt,highmem=off so
that the guest fits into the M1's physical address space limits, and
use -cpu host.
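
For illustration, a full invocation might look like this (the disk
image path and memory size are only examples; add your usual firmware
or -kernel options):

  qemu-system-aarch64 -accel hvf -M virt,highmem=off -cpu host \
      -m 4096 -drive file=guest.img,if=virtio \
      -serial mon:stdio -nographic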


Enjoy!

Alex

v1 -> v2:

  - New patch: hvf: Actually set SIG_IPI mask
  - New patch: hvf: Introduce hvf vcpu struct
  - New patch: hvf: arm: Mark CPU as dirty on reset
  - Removed patch: hw/arm/virt: Disable highmem when on hypervisor.framework
  - Removed patch: arm: Synchronize CPU on PSCI on
  - Fix build on 32bit arm
  - Merge vcpu kick function patch into ARM enablement
  - Implement WFI handling (allows vCPUs to sleep)
  - Synchronize system registers (fixes OVMF crashes and reboot)
  - Don't always call cpu_synchronize_state()
  - Use more fine grained iothread locking
  - Populate aa64mmfr0 from hardware
  - Make entitlement application safe to ctrl-C

v2 -> v3:

  - Removed patch: hvf: Actually set SIG_IPI mask
  - New patch: hvf: arm: Add support for GICv3
  - New patch: hvf: arm: Implement -cpu host
  - Advance PC on SMC
  - Use cp list interface for sysreg syncs
  - Do not set current_cpu
  - Fix sysreg isread mask
  - Move sysreg handling to functions
  - Remove WFI logic again
  - Revert to global iothread locking

v3 -> v4:

  - Removed patch: hvf: arm: Mark CPU as dirty on reset
  - New patch: hvf: Simplify post reset/init/loadvm hooks
  - Remove i386-softmmu target (meson.build for hvf target)
  - Combine both if statements (PSCI)
  - Use hv.h instead of Hypervisor.h for 10.15 compat
  - Remove manual inclusion of Hypervisor.h in common .c files
  - No longer include Hypervisor.h in arm hvf .c files
  - Remove unused exe_full variable
  - Reuse exe_name variable

v4 -> v5:

  - Use g_free() on destroy

v5 -> v6:

  - Switch SYSREG() macro order to the same as asm intrinsics

v6 -> v7:

  - Already merged: hvf: Add hypervisor entitlement to output binaries
  - Already merged: hvf: x86: Remove unused definitions
  - Patch split: hvf: Move common code out
    -> hvf: Move assert_hvf_ok() into common directory
    -> hvf: Move vcpu thread functions into common directory
    -> hvf: Move cpu functions into common directory
    -> hvf: Move hvf internal definitions into common header
    -> hvf: Make hvf_set_phys_mem() static
    -> hvf: Remove use of hv_uvaddr_t and hv_gpaddr_t
    -> hvf: Split out common code on vcpu init and destroy
    -> hvf: Use cpu_synchronize_state()
    -> hvf: Make synchronize functions static
    -> hvf: Remove hvf-accel-ops.h
  - New patch: hvf: arm: Implement PSCI handling
  - New patch: arm: Enable Windows 10 trusted SMCCC boot call
  - New patch: hvf: arm: Handle Windows 10 SMC call
  - Removed patch: "arm: Set PSCI to 0.2 for HVF" (included above)
  - Removed patch: "hvf: arm: Add support for GICv3" (deferred to later)
  - Remove osdep.h include from hvf_int.h
  - Synchronize SIMD registers as well
  - Prepend 0x for hex values
  - Convert DPRINTF to trace points
  - Use main event loop (fixes gdbstub issues)
  - Remove PSCI support, inject UDEF on HVC/SMC
  - Change vtimer logic to look at ctl.istatus for vtimer mask sync
  - Add kick callback again (fixes remote CPU notification)
  - Move function define to own header
  - Do not propagate SVE features for HVF
  - Remove stray whitespace change
  - Verify that EL0 and EL1 do not allow AArch32 mode
  - Only probe host CPU features once
  - Move WFI into function
  - Improve comment wording
  - Simplify HVF matching logic in meson build file

v7 -> v8:

  - checkpatch fixes
  - Do not advance for HVC, PC is already updated by hvf
    (fixes Linux boot)

v8 -> v9:

  - [Merged] hvf: Move assert_hvf_ok() into common directory
  - [Merged] hvf: Move vcpu thread functions into common directory
  - [Merged] hvf: Move cpu functions into common directory
  - [Merged] hvf: Move hvf internal definitions into common header
  - [Merged] hvf: Make hvf_set_phys_mem() static
  - [Merged] hvf: Remove use of hv_uvaddr_t and hv_gpaddr_t
  - [Merged] hvf: Split out common code on vcpu init and destroy
  - [Merged] hvf: Use cpu_synchronize_state()
  - [Merged] hvf: Make synchronize functions static
  - [Merged] hvf: Remove hvf-accel-ops.h
  - [Merged] hvf: Introduce hvf vcpu struct
  - [Merged] hvf: Simplify post reset/init/loadvm hooks
  - [Dropped] arm: Enable Windows 10 trusted SMCCC boot call
  - [Dropped] hvf: arm: Handle Windows 10 SMC call
  - [New] arm: Move PMC register definitions to cpu.h
  - [New] hvf: Add execute to dirty log permission bitmap
  - [New] hvf: Introduce hvf_arch_init() callback
  - [New] hvf: arm: Implement PSCI handling
  - [New] hvf: arm: Add rudimentary PMC support
  - [New] arm: tcg: Adhere to SMCCC 1.3 section 5.2
  - [New] hvf: arm: Adhere to SMCCC 1.3 section 5.2
  - Make kick function non-weak
  - Use arm_cpu_do_interrupt()
  - Remove CNTPCT_EL0 write case
  - Inject UDEF on invalid sysreg access
  - Add support for OS locking sysregs
  - Remove PMCCNTR_EL0 handling
  - Print PC on unhandled sysreg trace
  - Sync SP (x31) based on SP_EL0/SP_EL1
  - Fix SPSR_EL1 mapping
  - Only sync known sysregs, assert when syncing fails
  - Improve error message on unhandled ec
  - Move vtimer sync to post-exit (fixes disable corner case from
    kvm-unit-tests)
  - Add vtimer offset, migration and pause logic
  - Flush registers only after EXCP checkers (fixes PSCI CPU_ON race)
  - Remove Windows specifics and just comply with SMCCC spec
  - Zero-initialize host_isar
  - Use M1 SCTLR reset value
  - Add support for cntv offsets
  - Improve code readability
  - Use new hvf_raise_exception() prototype
  - Make cpu_off function void
  - Add comment about return value, use -1 for "not found"
  - Remove cpu_synchronize_state() when halted

Alexander Graf (10):
  arm: Move PMC register definitions to cpu.h
  hvf: Add execute to dirty log permission bitmap
  hvf: Introduce hvf_arch_init() callback
  hvf: Add Apple Silicon support
  hvf: arm: Implement -cpu host
  hvf: arm: Implement PSCI handling
  arm: Add Hypervisor.framework build target
  hvf: arm: Add rudimentary PMC support
  arm: tcg: Adhere to SMCCC 1.3 section 5.2
  hvf: arm: Adhere to SMCCC 1.3 section 5.2

Peter Collingbourne (1):
  arm/hvf: Add a WFI handler

 MAINTAINERS                 |    5 +
 accel/hvf/hvf-accel-ops.c   |   21 +-
 include/sysemu/hvf_int.h    |   12 +-
 meson.build                 |    8 +
 target/arm/cpu.c            |   13 +-
 target/arm/cpu.h            |   46 ++
 target/arm/helper.c         |   44 --
 target/arm/hvf/hvf.c        | 1246 +++++++++++++++++++++++++++++++++++
 target/arm/hvf/meson.build  |    3 +
 target/arm/hvf/trace-events |   11 +
 target/arm/hvf_arm.h        |   19 +
 target/arm/kvm_arm.h        |    2 -
 target/arm/meson.build      |    2 +
 target/arm/psci.c           |   26 +-
 target/i386/hvf/hvf.c       |   10 +
 15 files changed, 1387 insertions(+), 81 deletions(-)
 create mode 100644 target/arm/hvf/hvf.c
 create mode 100644 target/arm/hvf/meson.build
 create mode 100644 target/arm/hvf/trace-events
 create mode 100644 target/arm/hvf_arm.h

-- 
2.30.1 (Apple Git-130)




* [PATCH v9 01/11] arm: Move PMC register definitions to cpu.h
  2021-09-12 23:07 [PATCH v9 00/11] hvf: Implement Apple Silicon Support Alexander Graf
@ 2021-09-12 23:07 ` Alexander Graf
  2021-09-13  8:49   ` Peter Maydell
  2021-09-12 23:07 ` [PATCH v9 02/11] hvf: Add execute to dirty log permission bitmap Alexander Graf
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 25+ messages in thread
From: Alexander Graf @ 2021-09-12 23:07 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Sergio Lopez,
	Peter Collingbourne, Richard Henderson, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

We will need the PMC register definitions in accel-specific code later.
Move all constant definitions to the common arm headers so we can reuse
them.
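
For illustration, the sort of use these definitions see from
accel-specific code (hypothetical snippet):

  uint32_t n = pmu_num_counters(env);    /* PMCR.N: counters implemented */
  uint64_t mask = pmu_counter_mask(env); /* bit 31 is the cycle counter */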

Signed-off-by: Alexander Graf <agraf@csgraf.de>
---
 target/arm/cpu.h    | 44 ++++++++++++++++++++++++++++++++++++++++++++
 target/arm/helper.c | 44 --------------------------------------------
 2 files changed, 44 insertions(+), 44 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 6a987f65e4..6d60b64c15 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1550,6 +1550,50 @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)
 #define HSTR_TTEE (1 << 16)
 #define HSTR_TJDBX (1 << 17)
 
+/* Definitions for the PMU registers */
+#define PMCRN_MASK  0xf800
+#define PMCRN_SHIFT 11
+#define PMCRLC  0x40
+#define PMCRDP  0x20
+#define PMCRX   0x10
+#define PMCRD   0x8
+#define PMCRC   0x4
+#define PMCRP   0x2
+#define PMCRE   0x1
+/*
+ * Mask of PMCR bits writeable by guest (not including WO bits like C, P,
+ * which can be written as 1 to trigger behaviour but which stay RAZ).
+ */
+#define PMCR_WRITEABLE_MASK (PMCRLC | PMCRDP | PMCRX | PMCRD | PMCRE)
+
+#define PMXEVTYPER_P          0x80000000
+#define PMXEVTYPER_U          0x40000000
+#define PMXEVTYPER_NSK        0x20000000
+#define PMXEVTYPER_NSU        0x10000000
+#define PMXEVTYPER_NSH        0x08000000
+#define PMXEVTYPER_M          0x04000000
+#define PMXEVTYPER_MT         0x02000000
+#define PMXEVTYPER_EVTCOUNT   0x0000ffff
+#define PMXEVTYPER_MASK       (PMXEVTYPER_P | PMXEVTYPER_U | PMXEVTYPER_NSK | \
+                               PMXEVTYPER_NSU | PMXEVTYPER_NSH | \
+                               PMXEVTYPER_M | PMXEVTYPER_MT | \
+                               PMXEVTYPER_EVTCOUNT)
+
+#define PMCCFILTR             0xf8000000
+#define PMCCFILTR_M           PMXEVTYPER_M
+#define PMCCFILTR_EL0         (PMCCFILTR | PMCCFILTR_M)
+
+static inline uint32_t pmu_num_counters(CPUARMState *env)
+{
+  return (env->cp15.c9_pmcr & PMCRN_MASK) >> PMCRN_SHIFT;
+}
+
+/* Bits allowed to be set/cleared for PMCNTEN* and PMINTEN* */
+static inline uint64_t pmu_counter_mask(CPUARMState *env)
+{
+  return (1 << 31) | ((1 << pmu_num_counters(env)) - 1);
+}
+
 /* Return the current FPSCR value.  */
 uint32_t vfp_get_fpscr(CPUARMState *env);
 void vfp_set_fpscr(CPUARMState *env, uint32_t val);
diff --git a/target/arm/helper.c b/target/arm/helper.c
index a7ae78146d..17f1b05622 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -1114,50 +1114,6 @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
     REGINFO_SENTINEL
 };
 
-/* Definitions for the PMU registers */
-#define PMCRN_MASK  0xf800
-#define PMCRN_SHIFT 11
-#define PMCRLC  0x40
-#define PMCRDP  0x20
-#define PMCRX   0x10
-#define PMCRD   0x8
-#define PMCRC   0x4
-#define PMCRP   0x2
-#define PMCRE   0x1
-/*
- * Mask of PMCR bits writeable by guest (not including WO bits like C, P,
- * which can be written as 1 to trigger behaviour but which stay RAZ).
- */
-#define PMCR_WRITEABLE_MASK (PMCRLC | PMCRDP | PMCRX | PMCRD | PMCRE)
-
-#define PMXEVTYPER_P          0x80000000
-#define PMXEVTYPER_U          0x40000000
-#define PMXEVTYPER_NSK        0x20000000
-#define PMXEVTYPER_NSU        0x10000000
-#define PMXEVTYPER_NSH        0x08000000
-#define PMXEVTYPER_M          0x04000000
-#define PMXEVTYPER_MT         0x02000000
-#define PMXEVTYPER_EVTCOUNT   0x0000ffff
-#define PMXEVTYPER_MASK       (PMXEVTYPER_P | PMXEVTYPER_U | PMXEVTYPER_NSK | \
-                               PMXEVTYPER_NSU | PMXEVTYPER_NSH | \
-                               PMXEVTYPER_M | PMXEVTYPER_MT | \
-                               PMXEVTYPER_EVTCOUNT)
-
-#define PMCCFILTR             0xf8000000
-#define PMCCFILTR_M           PMXEVTYPER_M
-#define PMCCFILTR_EL0         (PMCCFILTR | PMCCFILTR_M)
-
-static inline uint32_t pmu_num_counters(CPUARMState *env)
-{
-  return (env->cp15.c9_pmcr & PMCRN_MASK) >> PMCRN_SHIFT;
-}
-
-/* Bits allowed to be set/cleared for PMCNTEN* and PMINTEN* */
-static inline uint64_t pmu_counter_mask(CPUARMState *env)
-{
-  return (1 << 31) | ((1 << pmu_num_counters(env)) - 1);
-}
-
 typedef struct pm_event {
     uint16_t number; /* PMEVTYPER.evtCount is 16 bits wide */
     /* If the event is supported on this CPU (used to generate PMCEID[01]) */
-- 
2.30.1 (Apple Git-130)




* [PATCH v9 02/11] hvf: Add execute to dirty log permission bitmap
  2021-09-12 23:07 [PATCH v9 00/11] hvf: Implement Apple Silicon Support Alexander Graf
  2021-09-12 23:07 ` [PATCH v9 01/11] arm: Move PMC register definitions to cpu.h Alexander Graf
@ 2021-09-12 23:07 ` Alexander Graf
  2021-09-12 23:07 ` [PATCH v9 03/11] hvf: Introduce hvf_arch_init() callback Alexander Graf
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 25+ messages in thread
From: Alexander Graf @ 2021-09-12 23:07 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Sergio Lopez,
	Peter Collingbourne, Richard Henderson, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

hvf's permission bitmap during and after dirty logging does not include
the HV_MEMORY_EXEC permission. At least on Apple Silicon, this leads to
instruction faults once dirty logging is enabled.

Add the bit to make it work properly.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
---
 accel/hvf/hvf-accel-ops.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index d1691be989..71cc2fa70f 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -239,12 +239,12 @@ static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on)
     if (on) {
         slot->flags |= HVF_SLOT_LOG;
         hv_vm_protect((uintptr_t)slot->start, (size_t)slot->size,
-                      HV_MEMORY_READ);
+                      HV_MEMORY_READ | HV_MEMORY_EXEC);
     /* stop tracking region*/
     } else {
         slot->flags &= ~HVF_SLOT_LOG;
         hv_vm_protect((uintptr_t)slot->start, (size_t)slot->size,
-                      HV_MEMORY_READ | HV_MEMORY_WRITE);
+                      HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC);
     }
 }
 
-- 
2.30.1 (Apple Git-130)




* [PATCH v9 03/11] hvf: Introduce hvf_arch_init() callback
  2021-09-12 23:07 [PATCH v9 00/11] hvf: Implement Apple Silicon Support Alexander Graf
  2021-09-12 23:07 ` [PATCH v9 01/11] arm: Move PMC register definitions to cpu.h Alexander Graf
  2021-09-12 23:07 ` [PATCH v9 02/11] hvf: Add execute to dirty log permission bitmap Alexander Graf
@ 2021-09-12 23:07 ` Alexander Graf
  2021-09-12 23:07 ` [PATCH v9 04/11] hvf: Add Apple Silicon support Alexander Graf
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 25+ messages in thread
From: Alexander Graf @ 2021-09-12 23:07 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Sergio Lopez,
	Peter Collingbourne, Richard Henderson, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

We will need to install a migration helper for the ARM hvf backend.
Let's introduce an arch callback for the overall hvf init chain to
do so.
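
For context, the ARM implementation later in this series (patch 04)
fills the hook in roughly like this:

  int hvf_arch_init(void)
  {
      hvf_state->vtimer_offset = mach_absolute_time();
      vmstate_register(NULL, 0, &vmstate_hvf_vtimer, &vtimer);
      qemu_add_vm_change_state_handler(hvf_vm_state_change, &vtimer);
      return 0;
  }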

Signed-off-by: Alexander Graf <agraf@csgraf.de>
---
 accel/hvf/hvf-accel-ops.c | 3 ++-
 include/sysemu/hvf_int.h  | 1 +
 target/i386/hvf/hvf.c     | 5 +++++
 3 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index 71cc2fa70f..65d431868f 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -324,7 +324,8 @@ static int hvf_accel_init(MachineState *ms)
 
     hvf_state = s;
     memory_listener_register(&hvf_memory_listener, &address_space_memory);
-    return 0;
+
+    return hvf_arch_init();
 }
 
 static void hvf_accel_class_init(ObjectClass *oc, void *data)
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
index 8b66a4e7d0..0466106d16 100644
--- a/include/sysemu/hvf_int.h
+++ b/include/sysemu/hvf_int.h
@@ -48,6 +48,7 @@ struct hvf_vcpu_state {
 };
 
 void assert_hvf_ok(hv_return_t ret);
+int hvf_arch_init(void);
 int hvf_arch_init_vcpu(CPUState *cpu);
 void hvf_arch_vcpu_destroy(CPUState *cpu);
 int hvf_vcpu_exec(CPUState *);
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index 79ba4ed93a..abef24a9c8 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -206,6 +206,11 @@ static inline bool apic_bus_freq_is_known(CPUX86State *env)
     return env->apic_bus_freq != 0;
 }
 
+int hvf_arch_init(void)
+{
+    return 0;
+}
+
 int hvf_arch_init_vcpu(CPUState *cpu)
 {
     X86CPU *x86cpu = X86_CPU(cpu);
-- 
2.30.1 (Apple Git-130)




* [PATCH v9 04/11] hvf: Add Apple Silicon support
  2021-09-12 23:07 [PATCH v9 00/11] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (2 preceding siblings ...)
  2021-09-12 23:07 ` [PATCH v9 03/11] hvf: Introduce hvf_arch_init() callback Alexander Graf
@ 2021-09-12 23:07 ` Alexander Graf
  2021-09-12 23:07 ` [PATCH v9 05/11] arm/hvf: Add a WFI handler Alexander Graf
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 25+ messages in thread
From: Alexander Graf @ 2021-09-12 23:07 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Sergio Lopez,
	Peter Collingbourne, Richard Henderson, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

With Apple Silicon available to the masses, it's a good time to add support
for driving its virtualization extensions from QEMU.

This patch adds all necessary architecture specific code to get basic VMs
working. It's still pretty raw, but definitely functional.

Known limitations:

  - Vtimer acknowledgement is hacky
  - More sysregs should be implemented; invalid ones should fault
  - WFI handling is missing and needs to be married with the vtimer

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>

---

v1 -> v2:

  - Merge vcpu kick function patch
  - Implement WFI handling (allows vCPUs to sleep)
  - Synchronize system registers (fixes OVMF crashes and reboot)
  - Don't always call cpu_synchronize_state()
  - Use more fine grained iothread locking
  - Populate aa64mmfr0 from hardware

v2 -> v3:

  - Advance PC on SMC
  - Use cp list interface for sysreg syncs
  - Do not set current_cpu
  - Fix sysreg isread mask
  - Move sysreg handling to functions
  - Remove WFI logic again
  - Revert to global iothread locking
  - Use Hypervisor.h on arm, hv.h does not contain aarch64 definitions

v3 -> v4:

  - No longer include Hypervisor.h

v5 -> v6:

  - Swap sysreg definition order. This way we're in line with asm outputs.

v6 -> v7:

  - Remove osdep.h include from hvf_int.h
  - Synchronize SIMD registers as well
  - Prepend 0x for hex values
  - Convert DPRINTF to trace points
  - Use main event loop (fixes gdbstub issues)
  - Remove PSCI support, inject UDEF on HVC/SMC
  - Change vtimer logic to look at ctl.istatus for vtimer mask sync
  - Add kick callback again (fixes remote CPU notification)

v7 -> v8:

  - Fix checkpatch errors

v8 -> v9:

  - Make kick function non-weak
  - Use arm_cpu_do_interrupt()
  - Remove CNTPCT_EL0 write case
  - Inject UDEF on invalid sysreg access
  - Add support for OS locking sysregs
  - Remove PMCCNTR_EL0 handling
  - Print PC on unhandled sysreg trace
  - Sync SP (x31) based on SP_EL0/SP_EL1
  - Fix SPSR_EL1 mapping
  - Only sync known sysregs, assert when syncing fails
  - Improve error message on unhandled ec
  - Move vtimer sync to post-exit (fixes disable corner case from
    kvm-unit-tests)
  - Add vtimer offset, migration and pause logic
  - Flush registers only after EXCP checkers (fixes PSCI CPU_ON race)
---
 MAINTAINERS                 |   5 +
 accel/hvf/hvf-accel-ops.c   |   9 +
 include/sysemu/hvf_int.h    |  10 +-
 meson.build                 |   1 +
 target/arm/hvf/hvf.c        | 793 ++++++++++++++++++++++++++++++++++++
 target/arm/hvf/trace-events |  10 +
 target/i386/hvf/hvf.c       |   5 +
 7 files changed, 832 insertions(+), 1 deletion(-)
 create mode 100644 target/arm/hvf/hvf.c
 create mode 100644 target/arm/hvf/trace-events

diff --git a/MAINTAINERS b/MAINTAINERS
index 6c20634d63..d7915ec128 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -433,6 +433,11 @@ F: accel/accel-*.c
 F: accel/Makefile.objs
 F: accel/stubs/Makefile.objs
 
+Apple Silicon HVF CPUs
+M: Alexander Graf <agraf@csgraf.de>
+S: Maintained
+F: target/arm/hvf/
+
 X86 HVF CPUs
 M: Cameron Esfahani <dirty@apple.com>
 M: Roman Bolshakov <r.bolshakov@yadro.com>
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index 65d431868f..4f75927a8e 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -60,6 +60,10 @@
 
 HVFState *hvf_state;
 
+#ifdef __aarch64__
+#define HV_VM_DEFAULT NULL
+#endif
+
 /* Memory slots */
 
 hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t size)
@@ -376,7 +380,11 @@ static int hvf_init_vcpu(CPUState *cpu)
     pthread_sigmask(SIG_BLOCK, NULL, &set);
     sigdelset(&set, SIG_IPI);
 
+#ifdef __aarch64__
+    r = hv_vcpu_create(&cpu->hvf->fd, (hv_vcpu_exit_t **)&cpu->hvf->exit, NULL);
+#else
     r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf->fd, HV_VCPU_DEFAULT);
+#endif
     cpu->vcpu_dirty = 1;
     assert_hvf_ok(r);
 
@@ -452,6 +460,7 @@ static void hvf_accel_ops_class_init(ObjectClass *oc, void *data)
     AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
 
     ops->create_vcpu_thread = hvf_start_vcpu_thread;
+    ops->kick_vcpu_thread = hvf_kick_vcpu_thread;
 
     ops->synchronize_post_reset = hvf_cpu_synchronize_post_reset;
     ops->synchronize_post_init = hvf_cpu_synchronize_post_init;
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
index 0466106d16..7c245c7b11 100644
--- a/include/sysemu/hvf_int.h
+++ b/include/sysemu/hvf_int.h
@@ -11,7 +11,11 @@
 #ifndef HVF_INT_H
 #define HVF_INT_H
 
+#ifdef __aarch64__
+#include <Hypervisor/Hypervisor.h>
+#else
 #include <Hypervisor/hv.h>
+#endif
 
 /* hvf_slot flags */
 #define HVF_SLOT_LOG (1 << 0)
@@ -40,11 +44,14 @@ struct HVFState {
     int num_slots;
 
     hvf_vcpu_caps *hvf_caps;
+    uint64_t vtimer_offset;
 };
 extern HVFState *hvf_state;
 
 struct hvf_vcpu_state {
-    int fd;
+    uint64_t fd;
+    void *exit;
+    bool vtimer_masked;
 };
 
 void assert_hvf_ok(hv_return_t ret);
@@ -55,5 +62,6 @@ int hvf_vcpu_exec(CPUState *);
 hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
 int hvf_put_registers(CPUState *);
 int hvf_get_registers(CPUState *);
+void hvf_kick_vcpu_thread(CPUState *cpu);
 
 #endif
diff --git a/meson.build b/meson.build
index 9a64d16943..a3e9b95846 100644
--- a/meson.build
+++ b/meson.build
@@ -2169,6 +2169,7 @@ if have_system or have_user
     'accel/tcg',
     'hw/core',
     'target/arm',
+    'target/arm/hvf',
     'target/hppa',
     'target/i386',
     'target/i386/kvm',
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
new file mode 100644
index 0000000000..f04324b598
--- /dev/null
+++ b/target/arm/hvf/hvf.c
@@ -0,0 +1,793 @@
+/*
+ * QEMU Hypervisor.framework support for Apple Silicon
+ *
+ * Copyright 2020 Alexander Graf <agraf@csgraf.de>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "qemu-common.h"
+#include "qemu/error-report.h"
+
+#include "sysemu/runstate.h"
+#include "sysemu/hvf.h"
+#include "sysemu/hvf_int.h"
+#include "sysemu/hw_accel.h"
+
+#include <mach/mach_time.h>
+
+#include "exec/address-spaces.h"
+#include "hw/irq.h"
+#include "qemu/main-loop.h"
+#include "sysemu/cpus.h"
+#include "target/arm/cpu.h"
+#include "target/arm/internals.h"
+#include "trace/trace-target_arm_hvf.h"
+#include "migration/vmstate.h"
+
+#define HVF_SYSREG(crn, crm, op0, op1, op2) \
+        ENCODE_AA64_CP_REG(CP_REG_ARM64_SYSREG_CP, crn, crm, op0, op1, op2)
+#define PL1_WRITE_MASK 0x4
+
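+/*
+ * SYSREG() mirrors the ESR_EL2 ISS encoding of a trapped sysreg access
+ * (Op0/Op2/Op1/CRn/CRm fields), so a trap's syndrome can be masked with
+ * SYSREG_MASK and compared directly against the SYSREG_* values below.
+ */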
+#define SYSREG(op0, op1, crn, crm, op2) \
+    ((op0 << 20) | (op2 << 17) | (op1 << 14) | (crn << 10) | (crm << 1))
+#define SYSREG_MASK           SYSREG(0x3, 0x7, 0xf, 0xf, 0x7)
+#define SYSREG_OSLAR_EL1      SYSREG(2, 0, 1, 0, 4)
+#define SYSREG_OSLSR_EL1      SYSREG(2, 0, 1, 1, 4)
+#define SYSREG_OSDLR_EL1      SYSREG(2, 0, 1, 3, 4)
+#define SYSREG_CNTPCT_EL0     SYSREG(3, 3, 14, 0, 1)
+
+#define WFX_IS_WFE (1 << 0)
+
+#define TMR_CTL_ENABLE  (1 << 0)
+#define TMR_CTL_IMASK   (1 << 1)
+#define TMR_CTL_ISTATUS (1 << 2)
+
+typedef struct HVFVTimer {
+    /* Vtimer value during migration and paused state */
+    uint64_t vtimer_val;
+} HVFVTimer;
+
+static HVFVTimer vtimer;
+
+struct hvf_reg_match {
+    int reg;
+    uint64_t offset;
+};
+
+static const struct hvf_reg_match hvf_reg_match[] = {
+    { HV_REG_X0,   offsetof(CPUARMState, xregs[0]) },
+    { HV_REG_X1,   offsetof(CPUARMState, xregs[1]) },
+    { HV_REG_X2,   offsetof(CPUARMState, xregs[2]) },
+    { HV_REG_X3,   offsetof(CPUARMState, xregs[3]) },
+    { HV_REG_X4,   offsetof(CPUARMState, xregs[4]) },
+    { HV_REG_X5,   offsetof(CPUARMState, xregs[5]) },
+    { HV_REG_X6,   offsetof(CPUARMState, xregs[6]) },
+    { HV_REG_X7,   offsetof(CPUARMState, xregs[7]) },
+    { HV_REG_X8,   offsetof(CPUARMState, xregs[8]) },
+    { HV_REG_X9,   offsetof(CPUARMState, xregs[9]) },
+    { HV_REG_X10,  offsetof(CPUARMState, xregs[10]) },
+    { HV_REG_X11,  offsetof(CPUARMState, xregs[11]) },
+    { HV_REG_X12,  offsetof(CPUARMState, xregs[12]) },
+    { HV_REG_X13,  offsetof(CPUARMState, xregs[13]) },
+    { HV_REG_X14,  offsetof(CPUARMState, xregs[14]) },
+    { HV_REG_X15,  offsetof(CPUARMState, xregs[15]) },
+    { HV_REG_X16,  offsetof(CPUARMState, xregs[16]) },
+    { HV_REG_X17,  offsetof(CPUARMState, xregs[17]) },
+    { HV_REG_X18,  offsetof(CPUARMState, xregs[18]) },
+    { HV_REG_X19,  offsetof(CPUARMState, xregs[19]) },
+    { HV_REG_X20,  offsetof(CPUARMState, xregs[20]) },
+    { HV_REG_X21,  offsetof(CPUARMState, xregs[21]) },
+    { HV_REG_X22,  offsetof(CPUARMState, xregs[22]) },
+    { HV_REG_X23,  offsetof(CPUARMState, xregs[23]) },
+    { HV_REG_X24,  offsetof(CPUARMState, xregs[24]) },
+    { HV_REG_X25,  offsetof(CPUARMState, xregs[25]) },
+    { HV_REG_X26,  offsetof(CPUARMState, xregs[26]) },
+    { HV_REG_X27,  offsetof(CPUARMState, xregs[27]) },
+    { HV_REG_X28,  offsetof(CPUARMState, xregs[28]) },
+    { HV_REG_X29,  offsetof(CPUARMState, xregs[29]) },
+    { HV_REG_X30,  offsetof(CPUARMState, xregs[30]) },
+    { HV_REG_PC,   offsetof(CPUARMState, pc) },
+};
+
+static const struct hvf_reg_match hvf_fpreg_match[] = {
+    { HV_SIMD_FP_REG_Q0,  offsetof(CPUARMState, vfp.zregs[0]) },
+    { HV_SIMD_FP_REG_Q1,  offsetof(CPUARMState, vfp.zregs[1]) },
+    { HV_SIMD_FP_REG_Q2,  offsetof(CPUARMState, vfp.zregs[2]) },
+    { HV_SIMD_FP_REG_Q3,  offsetof(CPUARMState, vfp.zregs[3]) },
+    { HV_SIMD_FP_REG_Q4,  offsetof(CPUARMState, vfp.zregs[4]) },
+    { HV_SIMD_FP_REG_Q5,  offsetof(CPUARMState, vfp.zregs[5]) },
+    { HV_SIMD_FP_REG_Q6,  offsetof(CPUARMState, vfp.zregs[6]) },
+    { HV_SIMD_FP_REG_Q7,  offsetof(CPUARMState, vfp.zregs[7]) },
+    { HV_SIMD_FP_REG_Q8,  offsetof(CPUARMState, vfp.zregs[8]) },
+    { HV_SIMD_FP_REG_Q9,  offsetof(CPUARMState, vfp.zregs[9]) },
+    { HV_SIMD_FP_REG_Q10, offsetof(CPUARMState, vfp.zregs[10]) },
+    { HV_SIMD_FP_REG_Q11, offsetof(CPUARMState, vfp.zregs[11]) },
+    { HV_SIMD_FP_REG_Q12, offsetof(CPUARMState, vfp.zregs[12]) },
+    { HV_SIMD_FP_REG_Q13, offsetof(CPUARMState, vfp.zregs[13]) },
+    { HV_SIMD_FP_REG_Q14, offsetof(CPUARMState, vfp.zregs[14]) },
+    { HV_SIMD_FP_REG_Q15, offsetof(CPUARMState, vfp.zregs[15]) },
+    { HV_SIMD_FP_REG_Q16, offsetof(CPUARMState, vfp.zregs[16]) },
+    { HV_SIMD_FP_REG_Q17, offsetof(CPUARMState, vfp.zregs[17]) },
+    { HV_SIMD_FP_REG_Q18, offsetof(CPUARMState, vfp.zregs[18]) },
+    { HV_SIMD_FP_REG_Q19, offsetof(CPUARMState, vfp.zregs[19]) },
+    { HV_SIMD_FP_REG_Q20, offsetof(CPUARMState, vfp.zregs[20]) },
+    { HV_SIMD_FP_REG_Q21, offsetof(CPUARMState, vfp.zregs[21]) },
+    { HV_SIMD_FP_REG_Q22, offsetof(CPUARMState, vfp.zregs[22]) },
+    { HV_SIMD_FP_REG_Q23, offsetof(CPUARMState, vfp.zregs[23]) },
+    { HV_SIMD_FP_REG_Q24, offsetof(CPUARMState, vfp.zregs[24]) },
+    { HV_SIMD_FP_REG_Q25, offsetof(CPUARMState, vfp.zregs[25]) },
+    { HV_SIMD_FP_REG_Q26, offsetof(CPUARMState, vfp.zregs[26]) },
+    { HV_SIMD_FP_REG_Q27, offsetof(CPUARMState, vfp.zregs[27]) },
+    { HV_SIMD_FP_REG_Q28, offsetof(CPUARMState, vfp.zregs[28]) },
+    { HV_SIMD_FP_REG_Q29, offsetof(CPUARMState, vfp.zregs[29]) },
+    { HV_SIMD_FP_REG_Q30, offsetof(CPUARMState, vfp.zregs[30]) },
+    { HV_SIMD_FP_REG_Q31, offsetof(CPUARMState, vfp.zregs[31]) },
+};
+
+struct hvf_sreg_match {
+    int reg;
+    uint32_t key;
+    uint32_t cp_idx;
+};
+
+static struct hvf_sreg_match hvf_sreg_match[] = {
+    { HV_SYS_REG_DBGBVR0_EL1, HVF_SYSREG(0, 0, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR0_EL1, HVF_SYSREG(0, 0, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR0_EL1, HVF_SYSREG(0, 0, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR0_EL1, HVF_SYSREG(0, 0, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR1_EL1, HVF_SYSREG(0, 1, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR1_EL1, HVF_SYSREG(0, 1, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR1_EL1, HVF_SYSREG(0, 1, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR1_EL1, HVF_SYSREG(0, 1, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR2_EL1, HVF_SYSREG(0, 2, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR2_EL1, HVF_SYSREG(0, 2, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR2_EL1, HVF_SYSREG(0, 2, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR2_EL1, HVF_SYSREG(0, 2, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR3_EL1, HVF_SYSREG(0, 3, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR3_EL1, HVF_SYSREG(0, 3, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR3_EL1, HVF_SYSREG(0, 3, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR3_EL1, HVF_SYSREG(0, 3, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR4_EL1, HVF_SYSREG(0, 4, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR4_EL1, HVF_SYSREG(0, 4, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR4_EL1, HVF_SYSREG(0, 4, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR4_EL1, HVF_SYSREG(0, 4, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR5_EL1, HVF_SYSREG(0, 5, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR5_EL1, HVF_SYSREG(0, 5, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR5_EL1, HVF_SYSREG(0, 5, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR5_EL1, HVF_SYSREG(0, 5, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR6_EL1, HVF_SYSREG(0, 6, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR6_EL1, HVF_SYSREG(0, 6, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR6_EL1, HVF_SYSREG(0, 6, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR6_EL1, HVF_SYSREG(0, 6, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR7_EL1, HVF_SYSREG(0, 7, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR7_EL1, HVF_SYSREG(0, 7, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR7_EL1, HVF_SYSREG(0, 7, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR7_EL1, HVF_SYSREG(0, 7, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR8_EL1, HVF_SYSREG(0, 8, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR8_EL1, HVF_SYSREG(0, 8, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR8_EL1, HVF_SYSREG(0, 8, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR8_EL1, HVF_SYSREG(0, 8, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR9_EL1, HVF_SYSREG(0, 9, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR9_EL1, HVF_SYSREG(0, 9, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR9_EL1, HVF_SYSREG(0, 9, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR9_EL1, HVF_SYSREG(0, 9, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR10_EL1, HVF_SYSREG(0, 10, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR10_EL1, HVF_SYSREG(0, 10, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR10_EL1, HVF_SYSREG(0, 10, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR10_EL1, HVF_SYSREG(0, 10, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR11_EL1, HVF_SYSREG(0, 11, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR11_EL1, HVF_SYSREG(0, 11, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR11_EL1, HVF_SYSREG(0, 11, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR11_EL1, HVF_SYSREG(0, 11, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR12_EL1, HVF_SYSREG(0, 12, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR12_EL1, HVF_SYSREG(0, 12, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR12_EL1, HVF_SYSREG(0, 12, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR12_EL1, HVF_SYSREG(0, 12, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR13_EL1, HVF_SYSREG(0, 13, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR13_EL1, HVF_SYSREG(0, 13, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR13_EL1, HVF_SYSREG(0, 13, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR13_EL1, HVF_SYSREG(0, 13, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR14_EL1, HVF_SYSREG(0, 14, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR14_EL1, HVF_SYSREG(0, 14, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR14_EL1, HVF_SYSREG(0, 14, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR14_EL1, HVF_SYSREG(0, 14, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR15_EL1, HVF_SYSREG(0, 15, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR15_EL1, HVF_SYSREG(0, 15, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR15_EL1, HVF_SYSREG(0, 15, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR15_EL1, HVF_SYSREG(0, 15, 14, 0, 7) },
+
+#ifdef SYNC_NO_RAW_REGS
+    /*
+     * The registers below are manually synced on init because they are
+     * marked as NO_RAW. We still list them to make number space sync easier.
+     */
+    { HV_SYS_REG_MDCCINT_EL1, HVF_SYSREG(0, 2, 2, 0, 0) },
+    { HV_SYS_REG_MIDR_EL1, HVF_SYSREG(0, 0, 3, 0, 0) },
+    { HV_SYS_REG_MPIDR_EL1, HVF_SYSREG(0, 0, 3, 0, 5) },
+    { HV_SYS_REG_ID_AA64PFR0_EL1, HVF_SYSREG(0, 4, 3, 0, 0) },
+#endif
+    { HV_SYS_REG_ID_AA64PFR1_EL1, HVF_SYSREG(0, 4, 3, 0, 2) },
+    { HV_SYS_REG_ID_AA64DFR0_EL1, HVF_SYSREG(0, 5, 3, 0, 0) },
+    { HV_SYS_REG_ID_AA64DFR1_EL1, HVF_SYSREG(0, 5, 3, 0, 1) },
+    { HV_SYS_REG_ID_AA64ISAR0_EL1, HVF_SYSREG(0, 6, 3, 0, 0) },
+    { HV_SYS_REG_ID_AA64ISAR1_EL1, HVF_SYSREG(0, 6, 3, 0, 1) },
+#ifdef SYNC_NO_MMFR0
+    /* We keep the hardware MMFR0 around. HW limits are there anyway */
+    { HV_SYS_REG_ID_AA64MMFR0_EL1, HVF_SYSREG(0, 7, 3, 0, 0) },
+#endif
+    { HV_SYS_REG_ID_AA64MMFR1_EL1, HVF_SYSREG(0, 7, 3, 0, 1) },
+    { HV_SYS_REG_ID_AA64MMFR2_EL1, HVF_SYSREG(0, 7, 3, 0, 2) },
+
+    { HV_SYS_REG_MDSCR_EL1, HVF_SYSREG(0, 2, 2, 0, 2) },
+    { HV_SYS_REG_SCTLR_EL1, HVF_SYSREG(1, 0, 3, 0, 0) },
+    { HV_SYS_REG_CPACR_EL1, HVF_SYSREG(1, 0, 3, 0, 2) },
+    { HV_SYS_REG_TTBR0_EL1, HVF_SYSREG(2, 0, 3, 0, 0) },
+    { HV_SYS_REG_TTBR1_EL1, HVF_SYSREG(2, 0, 3, 0, 1) },
+    { HV_SYS_REG_TCR_EL1, HVF_SYSREG(2, 0, 3, 0, 2) },
+
+    { HV_SYS_REG_APIAKEYLO_EL1, HVF_SYSREG(2, 1, 3, 0, 0) },
+    { HV_SYS_REG_APIAKEYHI_EL1, HVF_SYSREG(2, 1, 3, 0, 1) },
+    { HV_SYS_REG_APIBKEYLO_EL1, HVF_SYSREG(2, 1, 3, 0, 2) },
+    { HV_SYS_REG_APIBKEYHI_EL1, HVF_SYSREG(2, 1, 3, 0, 3) },
+    { HV_SYS_REG_APDAKEYLO_EL1, HVF_SYSREG(2, 2, 3, 0, 0) },
+    { HV_SYS_REG_APDAKEYHI_EL1, HVF_SYSREG(2, 2, 3, 0, 1) },
+    { HV_SYS_REG_APDBKEYLO_EL1, HVF_SYSREG(2, 2, 3, 0, 2) },
+    { HV_SYS_REG_APDBKEYHI_EL1, HVF_SYSREG(2, 2, 3, 0, 3) },
+    { HV_SYS_REG_APGAKEYLO_EL1, HVF_SYSREG(2, 3, 3, 0, 0) },
+    { HV_SYS_REG_APGAKEYHI_EL1, HVF_SYSREG(2, 3, 3, 0, 1) },
+
+    { HV_SYS_REG_SPSR_EL1, HVF_SYSREG(4, 0, 3, 0, 0) },
+    { HV_SYS_REG_ELR_EL1, HVF_SYSREG(4, 0, 3, 0, 1) },
+    { HV_SYS_REG_SP_EL0, HVF_SYSREG(4, 1, 3, 0, 0) },
+    { HV_SYS_REG_AFSR0_EL1, HVF_SYSREG(5, 1, 3, 0, 0) },
+    { HV_SYS_REG_AFSR1_EL1, HVF_SYSREG(5, 1, 3, 0, 1) },
+    { HV_SYS_REG_ESR_EL1, HVF_SYSREG(5, 2, 3, 0, 0) },
+    { HV_SYS_REG_FAR_EL1, HVF_SYSREG(6, 0, 3, 0, 0) },
+    { HV_SYS_REG_PAR_EL1, HVF_SYSREG(7, 4, 3, 0, 0) },
+    { HV_SYS_REG_MAIR_EL1, HVF_SYSREG(10, 2, 3, 0, 0) },
+    { HV_SYS_REG_AMAIR_EL1, HVF_SYSREG(10, 3, 3, 0, 0) },
+    { HV_SYS_REG_VBAR_EL1, HVF_SYSREG(12, 0, 3, 0, 0) },
+    { HV_SYS_REG_CONTEXTIDR_EL1, HVF_SYSREG(13, 0, 3, 0, 1) },
+    { HV_SYS_REG_TPIDR_EL1, HVF_SYSREG(13, 0, 3, 0, 4) },
+    { HV_SYS_REG_CNTKCTL_EL1, HVF_SYSREG(14, 1, 3, 0, 0) },
+    { HV_SYS_REG_CSSELR_EL1, HVF_SYSREG(0, 0, 3, 2, 0) },
+    { HV_SYS_REG_TPIDR_EL0, HVF_SYSREG(13, 0, 3, 3, 2) },
+    { HV_SYS_REG_TPIDRRO_EL0, HVF_SYSREG(13, 0, 3, 3, 3) },
+    { HV_SYS_REG_CNTV_CTL_EL0, HVF_SYSREG(14, 3, 3, 3, 1) },
+    { HV_SYS_REG_CNTV_CVAL_EL0, HVF_SYSREG(14, 3, 3, 3, 2) },
+    { HV_SYS_REG_SP_EL1, HVF_SYSREG(4, 1, 3, 4, 0) },
+};
+
+int hvf_get_registers(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+    hv_return_t ret;
+    uint64_t val;
+    hv_simd_fp_uchar16_t fpval;
+    int i;
+
+    for (i = 0; i < ARRAY_SIZE(hvf_reg_match); i++) {
+        ret = hv_vcpu_get_reg(cpu->hvf->fd, hvf_reg_match[i].reg, &val);
+        *(uint64_t *)((void *)env + hvf_reg_match[i].offset) = val;
+        assert_hvf_ok(ret);
+    }
+
+    for (i = 0; i < ARRAY_SIZE(hvf_fpreg_match); i++) {
+        ret = hv_vcpu_get_simd_fp_reg(cpu->hvf->fd, hvf_fpreg_match[i].reg,
+                                      &fpval);
+        memcpy((void *)env + hvf_fpreg_match[i].offset, &fpval, sizeof(fpval));
+        assert_hvf_ok(ret);
+    }
+
+    val = 0;
+    ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_FPCR, &val);
+    assert_hvf_ok(ret);
+    vfp_set_fpcr(env, val);
+
+    val = 0;
+    ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_FPSR, &val);
+    assert_hvf_ok(ret);
+    vfp_set_fpsr(env, val);
+
+    ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_CPSR, &val);
+    assert_hvf_ok(ret);
+    pstate_write(env, val);
+
+    for (i = 0; i < ARRAY_SIZE(hvf_sreg_match); i++) {
+        if (hvf_sreg_match[i].cp_idx == -1) {
+            continue;
+        }
+
+        ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, hvf_sreg_match[i].reg, &val);
+        assert_hvf_ok(ret);
+
+        arm_cpu->cpreg_values[hvf_sreg_match[i].cp_idx] = val;
+    }
+    assert(write_list_to_cpustate(arm_cpu));
+
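+    /* Recover the architectural SP (xregs[31]) from SP_EL0/SP_EL1 */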
+    aarch64_restore_sp(env, arm_current_el(env));
+
+    return 0;
+}
+
+int hvf_put_registers(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+    hv_return_t ret;
+    uint64_t val;
+    hv_simd_fp_uchar16_t fpval;
+    int i;
+
+    for (i = 0; i < ARRAY_SIZE(hvf_reg_match); i++) {
+        val = *(uint64_t *)((void *)env + hvf_reg_match[i].offset);
+        ret = hv_vcpu_set_reg(cpu->hvf->fd, hvf_reg_match[i].reg, val);
+        assert_hvf_ok(ret);
+    }
+
+    for (i = 0; i < ARRAY_SIZE(hvf_fpreg_match); i++) {
+        memcpy(&fpval, (void *)env + hvf_fpreg_match[i].offset, sizeof(fpval));
+        ret = hv_vcpu_set_simd_fp_reg(cpu->hvf->fd, hvf_fpreg_match[i].reg,
+                                      fpval);
+        assert_hvf_ok(ret);
+    }
+
+    ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_FPCR, vfp_get_fpcr(env));
+    assert_hvf_ok(ret);
+
+    ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_FPSR, vfp_get_fpsr(env));
+    assert_hvf_ok(ret);
+
+    ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_CPSR, pstate_read(env));
+    assert_hvf_ok(ret);
+
+    aarch64_save_sp(env, arm_current_el(env));
+
+    assert(write_cpustate_to_list(arm_cpu, false));
+    for (i = 0; i < ARRAY_SIZE(hvf_sreg_match); i++) {
+        if (hvf_sreg_match[i].cp_idx == -1) {
+            continue;
+        }
+
+        val = arm_cpu->cpreg_values[hvf_sreg_match[i].cp_idx];
+        ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, hvf_sreg_match[i].reg, val);
+        assert_hvf_ok(ret);
+    }
+
+    ret = hv_vcpu_set_vtimer_offset(cpu->hvf->fd, hvf_state->vtimer_offset);
+    assert_hvf_ok(ret);
+
+    return 0;
+}
+
+static void flush_cpu_state(CPUState *cpu)
+{
+    if (cpu->vcpu_dirty) {
+        hvf_put_registers(cpu);
+        cpu->vcpu_dirty = false;
+    }
+}
+
+static void hvf_set_reg(CPUState *cpu, int rt, uint64_t val)
+{
+    hv_return_t r;
+
+    flush_cpu_state(cpu);
+
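+    /* rt == 31 encodes XZR in the syndrome; writes to it are discarded */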
+    if (rt < 31) {
+        r = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_X0 + rt, val);
+        assert_hvf_ok(r);
+    }
+}
+
+static uint64_t hvf_get_reg(CPUState *cpu, int rt)
+{
+    uint64_t val = 0;
+    hv_return_t r;
+
+    flush_cpu_state(cpu);
+
+    if (rt < 31) {
+        r = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_X0 + rt, &val);
+        assert_hvf_ok(r);
+    }
+
+    return val;
+}
+
+void hvf_arch_vcpu_destroy(CPUState *cpu)
+{
+}
+
+int hvf_arch_init_vcpu(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+    uint32_t sregs_match_len = ARRAY_SIZE(hvf_sreg_match);
+    uint32_t sregs_cnt = 0;
+    uint64_t pfr;
+    hv_return_t ret;
+    int i;
+
+    env->aarch64 = 1;
+    asm volatile("mrs %0, cntfrq_el0" : "=r"(arm_cpu->gt_cntfrq_hz));
+
+    /* Allocate enough space for our sysreg sync */
+    arm_cpu->cpreg_indexes = g_renew(uint64_t, arm_cpu->cpreg_indexes,
+                                     sregs_match_len);
+    arm_cpu->cpreg_values = g_renew(uint64_t, arm_cpu->cpreg_values,
+                                    sregs_match_len);
+    arm_cpu->cpreg_vmstate_indexes = g_renew(uint64_t,
+                                             arm_cpu->cpreg_vmstate_indexes,
+                                             sregs_match_len);
+    arm_cpu->cpreg_vmstate_values = g_renew(uint64_t,
+                                            arm_cpu->cpreg_vmstate_values,
+                                            sregs_match_len);
+
+    memset(arm_cpu->cpreg_values, 0, sregs_match_len * sizeof(uint64_t));
+
+    /* Populate cp list for all known sysregs */
+    for (i = 0; i < sregs_match_len; i++) {
+        const ARMCPRegInfo *ri;
+        uint32_t key = hvf_sreg_match[i].key;
+
+        ri = get_arm_cp_reginfo(arm_cpu->cp_regs, key);
+        if (ri) {
+            assert(!(ri->type & ARM_CP_NO_RAW));
+            hvf_sreg_match[i].cp_idx = sregs_cnt;
+            arm_cpu->cpreg_indexes[sregs_cnt++] = cpreg_to_kvm_id(key);
+        } else {
+            hvf_sreg_match[i].cp_idx = -1;
+        }
+    }
+    arm_cpu->cpreg_array_len = sregs_cnt;
+    arm_cpu->cpreg_vmstate_array_len = sregs_cnt;
+
+    assert(write_cpustate_to_list(arm_cpu, false));
+
+    /* Set CP_NO_RAW system registers on init */
+    ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_MIDR_EL1,
+                              arm_cpu->midr);
+    assert_hvf_ok(ret);
+
+    ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_MPIDR_EL1,
+                              arm_cpu->mp_affinity);
+    assert_hvf_ok(ret);
+
+    ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64PFR0_EL1, &pfr);
+    assert_hvf_ok(ret);
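+    /* Advertise GICv3 sysreg CPU interface support (ID_AA64PFR0_EL1.GIC) */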
+    pfr |= env->gicv3state ? (1 << 24) : 0;
+    ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64PFR0_EL1, pfr);
+    assert_hvf_ok(ret);
+
+    /* We're limited to underlying hardware caps, override internal versions */
+    ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64MMFR0_EL1,
+                              &arm_cpu->isar.id_aa64mmfr0);
+    assert_hvf_ok(ret);
+
+    return 0;
+}
+
+void hvf_kick_vcpu_thread(CPUState *cpu)
+{
+    hv_vcpus_exit(&cpu->hvf->fd, 1);
+}
+
+static void hvf_raise_exception(CPUState *cpu, uint32_t excp,
+                                uint32_t syndrome)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+
+    cpu->exception_index = excp;
+    env->exception.target_el = 1;
+    env->exception.syndrome = syndrome;
+
+    arm_cpu_do_interrupt(cpu);
+}
+
+static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+    uint64_t val = 0;
+
+    switch (reg) {
+    case SYSREG_CNTPCT_EL0:
+        val = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) /
+              gt_cntfrq_period_ns(arm_cpu);
+        break;
+    case SYSREG_OSLSR_EL1:
+        val = env->cp15.oslsr_el1;
+        break;
+    case SYSREG_OSDLR_EL1:
+        /* Dummy register */
+        break;
+    default:
+        cpu_synchronize_state(cpu);
+        trace_hvf_unhandled_sysreg_read(env->pc, reg,
+                                        (reg >> 20) & 0x3,
+                                        (reg >> 14) & 0x7,
+                                        (reg >> 10) & 0xf,
+                                        (reg >> 1) & 0xf,
+                                        (reg >> 17) & 0x7);
+        hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
+        return 1;
+    }
+
+    trace_hvf_sysreg_read(reg,
+                          (reg >> 20) & 0x3,
+                          (reg >> 14) & 0x7,
+                          (reg >> 10) & 0xf,
+                          (reg >> 1) & 0xf,
+                          (reg >> 17) & 0x7,
+                          val);
+    hvf_set_reg(cpu, rt, val);
+
+    return 0;
+}
+
+static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+
+    trace_hvf_sysreg_write(reg,
+                           (reg >> 20) & 0x3,
+                           (reg >> 14) & 0x7,
+                           (reg >> 10) & 0xf,
+                           (reg >> 1) & 0xf,
+                           (reg >> 17) & 0x7,
+                           val);
+
+    switch (reg) {
+    case SYSREG_OSLAR_EL1:
+        env->cp15.oslsr_el1 = val & 1;
+        break;
+    case SYSREG_OSDLR_EL1:
+        /* Dummy register */
+        break;
+    default:
+        cpu_synchronize_state(cpu);
+        trace_hvf_unhandled_sysreg_write(env->pc, reg,
+                                         (reg >> 20) & 0x3,
+                                         (reg >> 14) & 0x7,
+                                         (reg >> 10) & 0xf,
+                                         (reg >> 1) & 0xf,
+                                         (reg >> 17) & 0x7);
+        hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
+        return 1;
+    }
+
+    return 0;
+}
+
+static int hvf_inject_interrupts(CPUState *cpu)
+{
+    if (cpu->interrupt_request & CPU_INTERRUPT_FIQ) {
+        trace_hvf_inject_fiq();
+        hv_vcpu_set_pending_interrupt(cpu->hvf->fd, HV_INTERRUPT_TYPE_FIQ,
+                                      true);
+    }
+
+    if (cpu->interrupt_request & CPU_INTERRUPT_HARD) {
+        trace_hvf_inject_irq();
+        hv_vcpu_set_pending_interrupt(cpu->hvf->fd, HV_INTERRUPT_TYPE_IRQ,
+                                      true);
+    }
+
+    return 0;
+}
+
+static uint64_t hvf_vtimer_val_raw(void)
+{
+    /*
+     * mach_absolute_time() returns the vtimer value without the VM
+     * offset that we define. Add our own offset on top.
+     */
+    return mach_absolute_time() - hvf_state->vtimer_offset;
+}
+
+static void hvf_sync_vtimer(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    hv_return_t r;
+    uint64_t ctl;
+    bool irq_state;
+
+    if (!cpu->hvf->vtimer_masked) {
+        /* We will get notified on vtimer changes by hvf, nothing to do */
+        return;
+    }
+
+    r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CTL_EL0, &ctl);
+    assert_hvf_ok(r);
+
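+    /* The line is asserted iff the timer is enabled, firing, and unmasked */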
+    irq_state = (ctl & (TMR_CTL_ENABLE | TMR_CTL_IMASK | TMR_CTL_ISTATUS)) ==
+                (TMR_CTL_ENABLE | TMR_CTL_ISTATUS);
+    qemu_set_irq(arm_cpu->gt_timer_outputs[GTIMER_VIRT], irq_state);
+
+    if (!irq_state) {
+        /* Timer no longer asserting, we can unmask it */
+        hv_vcpu_set_vtimer_mask(cpu->hvf->fd, false);
+        cpu->hvf->vtimer_masked = false;
+    }
+}
+
+int hvf_vcpu_exec(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+    hv_vcpu_exit_t *hvf_exit = cpu->hvf->exit;
+    hv_return_t r;
+    bool advance_pc = false;
+
+    if (hvf_inject_interrupts(cpu)) {
+        return EXCP_INTERRUPT;
+    }
+
+    if (cpu->halted) {
+        return EXCP_HLT;
+    }
+
+    flush_cpu_state(cpu);
+
+    qemu_mutex_unlock_iothread();
+    assert_hvf_ok(hv_vcpu_run(cpu->hvf->fd));
+
+    /* handle VMEXIT */
+    uint64_t exit_reason = hvf_exit->reason;
+    uint64_t syndrome = hvf_exit->exception.syndrome;
+    uint32_t ec = syn_get_ec(syndrome);
+
+    qemu_mutex_lock_iothread();
+    switch (exit_reason) {
+    case HV_EXIT_REASON_EXCEPTION:
+        /* This is the main one, handle below. */
+        break;
+    case HV_EXIT_REASON_VTIMER_ACTIVATED:
+        qemu_set_irq(arm_cpu->gt_timer_outputs[GTIMER_VIRT], 1);
+        cpu->hvf->vtimer_masked = true;
+        return 0;
+    case HV_EXIT_REASON_CANCELED:
+        /* we got kicked, no exit to process */
+        return 0;
+    default:
+        assert(0);
+    }
+
+    hvf_sync_vtimer(cpu);
+
+    switch (ec) {
+    case EC_DATAABORT: {
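+        /* Decode the data abort's ISS fields from the ESR syndrome */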
+        bool isv = syndrome & ARM_EL_ISV;
+        bool iswrite = (syndrome >> 6) & 1;
+        bool s1ptw = (syndrome >> 7) & 1;
+        uint32_t sas = (syndrome >> 22) & 3;
+        uint32_t len = 1 << sas;
+        uint32_t srt = (syndrome >> 16) & 0x1f;
+        uint64_t val = 0;
+
+        trace_hvf_data_abort(env->pc, hvf_exit->exception.virtual_address,
+                             hvf_exit->exception.physical_address, isv,
+                             iswrite, s1ptw, len, srt);
+
+        assert(isv);
+
+        if (iswrite) {
+            val = hvf_get_reg(cpu, srt);
+            address_space_write(&address_space_memory,
+                                hvf_exit->exception.physical_address,
+                                MEMTXATTRS_UNSPECIFIED, &val, len);
+        } else {
+            address_space_read(&address_space_memory,
+                               hvf_exit->exception.physical_address,
+                               MEMTXATTRS_UNSPECIFIED, &val, len);
+            hvf_set_reg(cpu, srt, val);
+        }
+
+        advance_pc = true;
+        break;
+    }
+    case EC_SYSTEMREGISTERTRAP: {
+        bool isread = (syndrome >> 0) & 1;
+        uint32_t rt = (syndrome >> 5) & 0x1f;
+        uint32_t reg = syndrome & SYSREG_MASK;
+        uint64_t val;
+        int ret = 0;
+
+        if (isread) {
+            ret = hvf_sysreg_read(cpu, reg, rt);
+        } else {
+            val = hvf_get_reg(cpu, rt);
+            ret = hvf_sysreg_write(cpu, reg, val);
+        }
+
+        advance_pc = !ret;
+        break;
+    }
+    case EC_WFX_TRAP:
+        advance_pc = true;
+        break;
+    case EC_AA64_HVC:
+        cpu_synchronize_state(cpu);
+        trace_hvf_unknown_hvf(env->xregs[0]);
+        hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
+        break;
+    case EC_AA64_SMC:
+        cpu_synchronize_state(cpu);
+        trace_hvf_unknown_smc(env->xregs[0]);
+        hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
+        break;
+    default:
+        cpu_synchronize_state(cpu);
+        trace_hvf_exit(syndrome, ec, env->pc);
+        error_report("0x%llx: unhandled exception ec=0x%x", env->pc, ec);
+    }
+
+    if (advance_pc) {
+        uint64_t pc;
+
+        flush_cpu_state(cpu);
+
+        r = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_PC, &pc);
+        assert_hvf_ok(r);
+        pc += 4;
+        r = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_PC, pc);
+        assert_hvf_ok(r);
+    }
+
+    return 0;
+}
+
+static const VMStateDescription vmstate_hvf_vtimer = {
+    .name = "hvf-vtimer",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT64(vtimer_val, HVFVTimer),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+static void hvf_vm_state_change(void *opaque, bool running, RunState state)
+{
+    HVFVTimer *s = opaque;
+
+    if (running) {
+        /* Update vtimer offset on all CPUs */
+        hvf_state->vtimer_offset = mach_absolute_time() - s->vtimer_val;
+        cpu_synchronize_all_states();
+    } else {
+        /* Remember vtimer value on every pause */
+        s->vtimer_val = hvf_vtimer_val_raw();
+    }
+}
+
+int hvf_arch_init(void)
+{
+    hvf_state->vtimer_offset = mach_absolute_time();
+    vmstate_register(NULL, 0, &vmstate_hvf_vtimer, &vtimer);
+    qemu_add_vm_change_state_handler(hvf_vm_state_change, &vtimer);
+    return 0;
+}
diff --git a/target/arm/hvf/trace-events b/target/arm/hvf/trace-events
new file mode 100644
index 0000000000..e972bdd9ce
--- /dev/null
+++ b/target/arm/hvf/trace-events
@@ -0,0 +1,10 @@
+hvf_unhandled_sysreg_read(uint64_t pc, uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2) "unhandled sysreg read at pc=0x%"PRIx64": 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d)"
+hvf_unhandled_sysreg_write(uint64_t pc, uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2) "unhandled sysreg write at pc=0x%"PRIx64": 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d)"
+hvf_inject_fiq(void) "injecting FIQ"
+hvf_inject_irq(void) "injecting IRQ"
+hvf_data_abort(uint64_t pc, uint64_t va, uint64_t pa, bool isv, bool iswrite, bool s1ptw, uint32_t len, uint32_t srt) "data abort: [pc=0x%"PRIx64" va=0x%016"PRIx64" pa=0x%016"PRIx64" isv=%d iswrite=%d s1ptw=%d len=%d srt=%d]"
+hvf_sysreg_read(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2, uint64_t val) "sysreg read 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d) = 0x%016"PRIx64
+hvf_sysreg_write(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2, uint64_t val) "sysreg write 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d, val=0x%016"PRIx64")"
+hvf_unknown_hvf(uint64_t x0) "unknown HVC! 0x%016"PRIx64
+hvf_unknown_smc(uint64_t x0) "unknown SMC! 0x%016"PRIx64
+hvf_exit(uint64_t syndrome, uint32_t ec, uint64_t pc) "exit: 0x%"PRIx64" [ec=0x%x pc=0x%"PRIx64"]"
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index abef24a9c8..33a4e74980 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -206,6 +206,11 @@ static inline bool apic_bus_freq_is_known(CPUX86State *env)
     return env->apic_bus_freq != 0;
 }
 
+void hvf_kick_vcpu_thread(CPUState *cpu)
+{
+    cpus_kick_thread(cpu);
+}
+
 int hvf_arch_init(void)
 {
     return 0;
-- 
2.30.1 (Apple Git-130)




* [PATCH v9 05/11] arm/hvf: Add a WFI handler
  2021-09-12 23:07 [PATCH v9 00/11] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (3 preceding siblings ...)
  2021-09-12 23:07 ` [PATCH v9 04/11] hvf: Add Apple Silicon support Alexander Graf
@ 2021-09-12 23:07 ` Alexander Graf
  2021-09-12 23:07 ` [PATCH v9 06/11] hvf: arm: Implement -cpu host Alexander Graf
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 25+ messages in thread
From: Alexander Graf @ 2021-09-12 23:07 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Sergio Lopez,
	Peter Collingbourne, Richard Henderson, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

From: Peter Collingbourne <pcc@google.com>

Sleep on WFI until the VTIMER is due but allow ourselves to be woken
up on IPI.

In this implementation IPI is blocked on the CPU thread at startup and
pselect() is used to atomically unblock the signal and begin sleeping.
The signal is sent unconditionally so there's no need to worry about
races between actually sleeping and the "we think we're sleeping"
state. It may lead to an extra wakeup but that's better than missing
it entirely.
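
For readers unfamiliar with the pattern, a minimal sketch of the
race-free sleep (simplified; SIG_IPI is QEMU's kick signal and the
timeout stands for the vtimer deadline computed by the WFI handler):

  /* At thread startup, SIG_IPI is already blocked. Snapshot the mask
     and drop SIG_IPI from the copy used only while sleeping. */
  sigset_t sleep_mask;
  pthread_sigmask(SIG_BLOCK, NULL, &sleep_mask);
  sigdelset(&sleep_mask, SIG_IPI);

  /* pselect() installs sleep_mask and sleeps atomically, so a kick
     sent after the caller last checked for work is delivered here
     instead of being lost; at worst we eat one spurious wakeup. */
  struct timespec timeout = { 0, 2 * 1000 * 1000 }; /* e.g. 2ms */
  pselect(0, NULL, NULL, NULL, &timeout, &sleep_mask);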

Signed-off-by: Peter Collingbourne <pcc@google.com>
[agraf: Remove unused 'set' variable, always advance PC on WFX trap,
        support vm stop / continue operations and cntv offsets]
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>

---

v6 -> v7:

  - Move WFI into function
  - Improve comment wording

v8 -> v9:

  - Add support for cntv offsets
  - Improve code readability
---
 accel/hvf/hvf-accel-ops.c |  5 ++-
 include/sysemu/hvf_int.h  |  1 +
 target/arm/hvf/hvf.c      | 76 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 79 insertions(+), 3 deletions(-)

diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index 4f75927a8e..93976f4ece 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -370,15 +370,14 @@ static int hvf_init_vcpu(CPUState *cpu)
     cpu->hvf = g_malloc0(sizeof(*cpu->hvf));
 
     /* init cpu signals */
-    sigset_t set;
     struct sigaction sigact;
 
     memset(&sigact, 0, sizeof(sigact));
     sigact.sa_handler = dummy_signal;
     sigaction(SIG_IPI, &sigact, NULL);
 
-    pthread_sigmask(SIG_BLOCK, NULL, &set);
-    sigdelset(&set, SIG_IPI);
+    pthread_sigmask(SIG_BLOCK, NULL, &cpu->hvf->unblock_ipi_mask);
+    sigdelset(&cpu->hvf->unblock_ipi_mask, SIG_IPI);
 
 #ifdef __aarch64__
     r = hv_vcpu_create(&cpu->hvf->fd, (hv_vcpu_exit_t **)&cpu->hvf->exit, NULL);
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
index 7c245c7b11..6545f7cd61 100644
--- a/include/sysemu/hvf_int.h
+++ b/include/sysemu/hvf_int.h
@@ -52,6 +52,7 @@ struct hvf_vcpu_state {
     uint64_t fd;
     void *exit;
     bool vtimer_masked;
+    sigset_t unblock_ipi_mask;
 };
 
 void assert_hvf_ok(hv_return_t ret);
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index f04324b598..e9291f4b9c 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -2,6 +2,7 @@
  * QEMU Hypervisor.framework support for Apple Silicon
 
  * Copyright 2020 Alexander Graf <agraf@csgraf.de>
+ * Copyright 2020 Google LLC
  *
  * This work is licensed under the terms of the GNU GPL, version 2 or later.
  * See the COPYING file in the top-level directory.
@@ -490,6 +491,7 @@ int hvf_arch_init_vcpu(CPUState *cpu)
 
 void hvf_kick_vcpu_thread(CPUState *cpu)
 {
+    cpus_kick_thread(cpu);
     hv_vcpus_exit(&cpu->hvf->fd, 1);
 }
 
@@ -608,6 +610,77 @@ static uint64_t hvf_vtimer_val_raw(void)
     return mach_absolute_time() - hvf_state->vtimer_offset;
 }
 
+static uint64_t hvf_vtimer_val(void)
+{
+    if (!runstate_is_running()) {
+        /* VM is paused, the vtimer value is in vtimer.vtimer_val */
+        return vtimer.vtimer_val;
+    }
+
+    return hvf_vtimer_val_raw();
+}
+
+static void hvf_wait_for_ipi(CPUState *cpu, struct timespec *ts)
+{
+    /*
+     * Use pselect to sleep so that other threads can IPI us while we're
+     * sleeping.
+     */
+    qatomic_mb_set(&cpu->thread_kicked, false);
+    qemu_mutex_unlock_iothread();
+    pselect(0, 0, 0, 0, ts, &cpu->hvf->unblock_ipi_mask);
+    qemu_mutex_lock_iothread();
+}
+
+static void hvf_wfi(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    hv_return_t r;
+    uint64_t ctl;
+    uint64_t cval;
+    int64_t ticks_to_sleep;
+    uint64_t seconds;
+    uint64_t nanos;
+
+    if (cpu->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_FIQ)) {
+        /* Interrupt pending, no need to wait */
+        return;
+    }
+
+    r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CTL_EL0, &ctl);
+    assert_hvf_ok(r);
+
+    if (!(ctl & 1) || (ctl & 2)) {
+        /* Timer disabled or masked, just wait for an IPI. */
+        hvf_wait_for_ipi(cpu, NULL);
+        return;
+    }
+
+    r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CVAL_EL0, &cval);
+    assert_hvf_ok(r);
+
+    ticks_to_sleep = cval - hvf_vtimer_val();
+    if (ticks_to_sleep < 0) {
+        return;
+    }
+
+    nanos = ticks_to_sleep * gt_cntfrq_period_ns(arm_cpu);
+    seconds = nanos / NANOSECONDS_PER_SECOND;
+    nanos -= (seconds * NANOSECONDS_PER_SECOND);
+
+    /*
+     * Don't sleep for less than the time a context switch would take,
+     * so that we can satisfy fast timer requests on the same CPU.
+     * Measurements on M1 show the sweet spot to be ~2ms.
+     */
+    if (!seconds && nanos < (2 * SCALE_MS)) {
+        return;
+    }
+
+    struct timespec ts = { seconds, nanos };
+    hvf_wait_for_ipi(cpu, &ts);
+}
+
 static void hvf_sync_vtimer(CPUState *cpu)
 {
     ARMCPU *arm_cpu = ARM_CPU(cpu);
@@ -728,6 +801,9 @@ int hvf_vcpu_exec(CPUState *cpu)
     }
     case EC_WFX_TRAP:
         advance_pc = true;
+        if (!(syndrome & WFX_IS_WFE)) {
+            hvf_wfi(cpu);
+        }
         break;
     case EC_AA64_HVC:
         cpu_synchronize_state(cpu);
-- 
2.30.1 (Apple Git-130)



^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v9 06/11] hvf: arm: Implement -cpu host
  2021-09-12 23:07 [PATCH v9 00/11] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (4 preceding siblings ...)
  2021-09-12 23:07 ` [PATCH v9 05/11] arm/hvf: Add a WFI handler Alexander Graf
@ 2021-09-12 23:07 ` Alexander Graf
  2021-09-13  8:54   ` Philippe Mathieu-Daudé
  2021-09-12 23:07 ` [PATCH v9 07/11] hvf: arm: Implement PSCI handling Alexander Graf
                   ` (4 subsequent siblings)
  10 siblings, 1 reply; 25+ messages in thread
From: Alexander Graf @ 2021-09-12 23:07 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Sergio Lopez,
	Peter Collingbourne, Richard Henderson, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

Now that we have working system register sync, we push more target CPU
properties into the virtual machine. That might be useful in some
situations, but it is not what users typically want.

So let's add a -cpu host option that allows them to explicitly pass all
CPU capabilities of their host CPU into the guest.
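
As a usage sketch, a guest on an M1 host could then be started with
something along these lines (disk/kernel arguments elided):

    qemu-system-aarch64 -M virt,highmem=off -accel hvf -cpu host ...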

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>

---

v6 -> v7:

  - Move function define to own header
  - Do not propagate SVE features for HVF
  - Remove stray whitespace change
  - Verify that EL0 and EL1 do not allow AArch32 mode
  - Only probe host CPU features once

v8 -> v9:

  - Zero-initialize host_isar
  - Use M1 SCTLR reset value
---
 target/arm/cpu.c     |  9 ++++--
 target/arm/cpu.h     |  2 ++
 target/arm/hvf/hvf.c | 76 ++++++++++++++++++++++++++++++++++++++++++++
 target/arm/hvf_arm.h | 19 +++++++++++
 target/arm/kvm_arm.h |  2 --
 5 files changed, 104 insertions(+), 4 deletions(-)
 create mode 100644 target/arm/hvf_arm.h

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index d631c4683c..551b15243d 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -39,6 +39,7 @@
 #include "sysemu/tcg.h"
 #include "sysemu/hw_accel.h"
 #include "kvm_arm.h"
+#include "hvf_arm.h"
 #include "disas/capstone.h"
 #include "fpu/softfloat.h"
 
@@ -2058,15 +2059,19 @@ static void arm_cpu_class_init(ObjectClass *oc, void *data)
 #endif /* CONFIG_TCG */
 }
 
-#ifdef CONFIG_KVM
+#if defined(CONFIG_KVM) || defined(CONFIG_HVF)
 static void arm_host_initfn(Object *obj)
 {
     ARMCPU *cpu = ARM_CPU(obj);
 
+#ifdef CONFIG_KVM
     kvm_arm_set_cpu_features_from_host(cpu);
     if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
         aarch64_add_sve_properties(obj);
     }
+#else
+    hvf_arm_set_cpu_features_from_host(cpu);
+#endif
     arm_cpu_post_init(obj);
 }
 
@@ -2126,7 +2131,7 @@ static void arm_cpu_register_types(void)
 {
     type_register_static(&arm_cpu_type_info);
 
-#ifdef CONFIG_KVM
+#if defined(CONFIG_KVM) || defined(CONFIG_HVF)
     type_register_static(&host_arm_cpu_type_info);
 #endif
 }
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 6d60b64c15..fa9ccafdff 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -3060,6 +3060,8 @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
 #define ARM_CPU_TYPE_NAME(name) (name ARM_CPU_TYPE_SUFFIX)
 #define CPU_RESOLVING_TYPE TYPE_ARM_CPU
 
+#define TYPE_ARM_HOST_CPU "host-" TYPE_ARM_CPU
+
 #define cpu_signal_handler cpu_arm_signal_handler
 #define cpu_list arm_cpu_list
 
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index e9291f4b9c..04da0dd4db 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -17,6 +17,7 @@
 #include "sysemu/hvf.h"
 #include "sysemu/hvf_int.h"
 #include "sysemu/hw_accel.h"
+#include "hvf_arm.h"
 
 #include <mach/mach_time.h>
 
@@ -54,6 +55,16 @@ typedef struct HVFVTimer {
 
 static HVFVTimer vtimer;
 
+typedef struct ARMHostCPUFeatures {
+    ARMISARegisters isar;
+    uint64_t features;
+    uint64_t midr;
+    uint32_t reset_sctlr;
+    const char *dtb_compatible;
+} ARMHostCPUFeatures;
+
+static ARMHostCPUFeatures arm_host_cpu_features;
+
 struct hvf_reg_match {
     int reg;
     uint64_t offset;
@@ -416,6 +427,71 @@ static uint64_t hvf_get_reg(CPUState *cpu, int rt)
     return val;
 }
 
+static void hvf_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
+{
+    ARMISARegisters host_isar = {};
+    const struct isar_regs {
+        int reg;
+        uint64_t *val;
+    } regs[] = {
+        { HV_SYS_REG_ID_AA64PFR0_EL1, &host_isar.id_aa64pfr0 },
+        { HV_SYS_REG_ID_AA64PFR1_EL1, &host_isar.id_aa64pfr1 },
+        { HV_SYS_REG_ID_AA64DFR0_EL1, &host_isar.id_aa64dfr0 },
+        { HV_SYS_REG_ID_AA64DFR1_EL1, &host_isar.id_aa64dfr1 },
+        { HV_SYS_REG_ID_AA64ISAR0_EL1, &host_isar.id_aa64isar0 },
+        { HV_SYS_REG_ID_AA64ISAR1_EL1, &host_isar.id_aa64isar1 },
+        { HV_SYS_REG_ID_AA64MMFR0_EL1, &host_isar.id_aa64mmfr0 },
+        { HV_SYS_REG_ID_AA64MMFR1_EL1, &host_isar.id_aa64mmfr1 },
+        { HV_SYS_REG_ID_AA64MMFR2_EL1, &host_isar.id_aa64mmfr2 },
+    };
+    hv_vcpu_t fd;
+    hv_vcpu_exit_t *exit;
+    int i;
+
+    ahcf->dtb_compatible = "arm,arm-v8";
+    ahcf->features = (1ULL << ARM_FEATURE_V8) |
+                     (1ULL << ARM_FEATURE_NEON) |
+                     (1ULL << ARM_FEATURE_AARCH64) |
+                     (1ULL << ARM_FEATURE_PMU) |
+                     (1ULL << ARM_FEATURE_GENERIC_TIMER);
+
+    /* We set up a small vcpu to extract host registers */
+
+    assert_hvf_ok(hv_vcpu_create(&fd, &exit, NULL));
+    for (i = 0; i < ARRAY_SIZE(regs); i++) {
+        assert_hvf_ok(hv_vcpu_get_sys_reg(fd, regs[i].reg, regs[i].val));
+    }
+    assert_hvf_ok(hv_vcpu_get_sys_reg(fd, HV_SYS_REG_MIDR_EL1, &ahcf->midr));
+    assert_hvf_ok(hv_vcpu_destroy(fd));
+
+    ahcf->isar = host_isar;
+
+    /* M1 boot SCTLR from https://github.com/AsahiLinux/m1n1/issues/97 */
+    ahcf->reset_sctlr = 0x30100180;
+    /* OVMF chokes on boot if SPAN is not set, so default it to on */
+    ahcf->reset_sctlr |= 0x00800000;
+
+    /* Make sure we don't advertise AArch32 support for EL0/EL1 */
+    g_assert((host_isar.id_aa64pfr0 & 0xff) == 0x11);
+}
+
+void hvf_arm_set_cpu_features_from_host(ARMCPU *cpu)
+{
+    if (!arm_host_cpu_features.dtb_compatible) {
+        if (!hvf_enabled()) {
+            cpu->host_cpu_probe_failed = true;
+            return;
+        }
+        hvf_arm_get_host_cpu_features(&arm_host_cpu_features);
+    }
+
+    cpu->dtb_compatible = arm_host_cpu_features.dtb_compatible;
+    cpu->isar = arm_host_cpu_features.isar;
+    cpu->env.features = arm_host_cpu_features.features;
+    cpu->midr = arm_host_cpu_features.midr;
+    cpu->reset_sctlr = arm_host_cpu_features.reset_sctlr;
+}
+
 void hvf_arch_vcpu_destroy(CPUState *cpu)
 {
 }
diff --git a/target/arm/hvf_arm.h b/target/arm/hvf_arm.h
new file mode 100644
index 0000000000..603074a331
--- /dev/null
+++ b/target/arm/hvf_arm.h
@@ -0,0 +1,19 @@
+/*
+ * QEMU Hypervisor.framework (HVF) support -- ARM specifics
+ *
+ * Copyright (c) 2021 Alexander Graf
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef QEMU_HVF_ARM_H
+#define QEMU_HVF_ARM_H
+
+#include "qemu/accel.h"
+#include "cpu.h"
+
+void hvf_arm_set_cpu_features_from_host(struct ARMCPU *cpu);
+
+#endif
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
index 34f8daa377..828dca4a4a 100644
--- a/target/arm/kvm_arm.h
+++ b/target/arm/kvm_arm.h
@@ -214,8 +214,6 @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
  */
 void kvm_arm_destroy_scratch_host_vcpu(int *fdarray);
 
-#define TYPE_ARM_HOST_CPU "host-" TYPE_ARM_CPU
-
 /**
  * ARMHostCPUFeatures: information about the host CPU (identified
  * by asking the host kernel)
-- 
2.30.1 (Apple Git-130)



^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v9 07/11] hvf: arm: Implement PSCI handling
  2021-09-12 23:07 [PATCH v9 00/11] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (5 preceding siblings ...)
  2021-09-12 23:07 ` [PATCH v9 06/11] hvf: arm: Implement -cpu host Alexander Graf
@ 2021-09-12 23:07 ` Alexander Graf
  2021-09-13  8:54   ` Peter Maydell
  2021-09-12 23:07 ` [PATCH v9 08/11] arm: Add Hypervisor.framework build target Alexander Graf
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 25+ messages in thread
From: Alexander Graf @ 2021-09-12 23:07 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Sergio Lopez,
	Peter Collingbourne, Richard Henderson, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

We need to handle PSCI calls. Most of the TCG code works for us,
but we can simplify it to handle only AArch64 mode, and we need to
handle SUSPEND differently.

This patch takes the TCG code as a template and duplicates it in HVF.

To tell the guest that we support PSCI 0.2 now, update the check in
arm_cpu_initfn() as well.
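
For illustration, this is the kind of call that now lands in the new
handler; a hypothetical guest-side PSCI_VERSION query via the HVC
conduit could look like:

    #include <stdint.h>

    /* PSCI 0.2 PSCI_VERSION function ID (SMCCC fast call, SMC32 range) */
    #define PSCI_0_2_FN_PSCI_VERSION 0x84000000u

    static uint64_t psci_version(void)
    {
        register uint64_t x0 asm("x0") = PSCI_0_2_FN_PSCI_VERSION;

        /* hvc traps to the hypervisor; x0 carries the function ID in
         * and the return value out */
        asm volatile("hvc #0" : "+r"(x0) : : "memory");
        return x0; /* 0x2, i.e. PSCI 0.2, with this implementation */
    }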

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>

---

v6 -> v7:

  - This patch integrates "arm: Set PSCI to 0.2 for HVF"

v7 -> v8:

  - Do not advance for HVC, PC is already updated by hvf
  - Fix checkpatch error

v8 -> v9:

  - Use new hvf_raise_exception() prototype
  - Make cpu_off function void
  - Add comment about return value, use -1 for "not found"
  - Remove cpu_synchronize_state() when halted
---
 target/arm/cpu.c            |   4 +-
 target/arm/hvf/hvf.c        | 127 ++++++++++++++++++++++++++++++++++--
 target/arm/hvf/trace-events |   1 +
 3 files changed, 126 insertions(+), 6 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 551b15243d..c111b2ee32 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -1093,8 +1093,8 @@ static void arm_cpu_initfn(Object *obj)
     cpu->psci_version = 1; /* By default assume PSCI v0.1 */
     cpu->kvm_target = QEMU_KVM_ARM_TARGET_NONE;
 
-    if (tcg_enabled()) {
-        cpu->psci_version = 2; /* TCG implements PSCI 0.2 */
+    if (tcg_enabled() || hvf_enabled()) {
+        cpu->psci_version = 2; /* TCG and HVF implement PSCI 0.2 */
     }
 }
 
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index 04da0dd4db..20d795366a 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -25,6 +25,7 @@
 #include "hw/irq.h"
 #include "qemu/main-loop.h"
 #include "sysemu/cpus.h"
+#include "arm-powerctl.h"
 #include "target/arm/cpu.h"
 #include "target/arm/internals.h"
 #include "trace/trace-target_arm_hvf.h"
@@ -48,6 +49,8 @@
 #define TMR_CTL_IMASK   (1 << 1)
 #define TMR_CTL_ISTATUS (1 << 2)
 
+static void hvf_wfi(CPUState *cpu);
+
 typedef struct HVFVTimer {
     /* Vtimer value during migration and paused state */
     uint64_t vtimer_val;
@@ -584,6 +587,116 @@ static void hvf_raise_exception(CPUState *cpu, uint32_t excp,
     arm_cpu_do_interrupt(cpu);
 }
 
+static void hvf_psci_cpu_off(ARMCPU *arm_cpu)
+{
+    int32_t ret = arm_set_cpu_off(arm_cpu->mp_affinity);
+    assert(ret == QEMU_ARM_POWERCTL_RET_SUCCESS);
+}
+
+/*
+ * Handle a PSCI call.
+ *
+ * Returns 0 on success,
+ *         -1 when the PSCI call is unknown.
+ */
+static int hvf_handle_psci_call(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+    uint64_t param[4] = {
+        env->xregs[0],
+        env->xregs[1],
+        env->xregs[2],
+        env->xregs[3]
+    };
+    uint64_t context_id, mpidr;
+    bool target_aarch64 = true;
+    CPUState *target_cpu_state;
+    ARMCPU *target_cpu;
+    target_ulong entry;
+    int target_el = 1;
+    int32_t ret = 0;
+
+    trace_hvf_psci_call(param[0], param[1], param[2], param[3],
+                        arm_cpu->mp_affinity);
+
+    switch (param[0]) {
+    case QEMU_PSCI_0_2_FN_PSCI_VERSION:
+        ret = QEMU_PSCI_0_2_RET_VERSION_0_2;
+        break;
+    case QEMU_PSCI_0_2_FN_MIGRATE_INFO_TYPE:
+        ret = QEMU_PSCI_0_2_RET_TOS_MIGRATION_NOT_REQUIRED; /* No trusted OS */
+        break;
+    case QEMU_PSCI_0_2_FN_AFFINITY_INFO:
+    case QEMU_PSCI_0_2_FN64_AFFINITY_INFO:
+        mpidr = param[1];
+
+        switch (param[2]) {
+        case 0:
+            target_cpu_state = arm_get_cpu_by_id(mpidr);
+            if (!target_cpu_state) {
+                ret = QEMU_PSCI_RET_INVALID_PARAMS;
+                break;
+            }
+            target_cpu = ARM_CPU(target_cpu_state);
+
+            ret = target_cpu->power_state;
+            break;
+        default:
+            /* Everything above affinity level 0 is always on. */
+            ret = 0;
+        }
+        break;
+    case QEMU_PSCI_0_2_FN_SYSTEM_RESET:
+        qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET);
+        /* QEMU reset and shutdown are async requests, but PSCI
+         * mandates that we never return from the reset/shutdown
+         * call, so power the CPU off now so it doesn't execute
+         * anything further.
+         */
+        hvf_psci_cpu_off(arm_cpu);
+        break;
+    case QEMU_PSCI_0_2_FN_SYSTEM_OFF:
+        qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
+        hvf_psci_cpu_off(arm_cpu);
+        break;
+    case QEMU_PSCI_0_1_FN_CPU_ON:
+    case QEMU_PSCI_0_2_FN_CPU_ON:
+    case QEMU_PSCI_0_2_FN64_CPU_ON:
+        mpidr = param[1];
+        entry = param[2];
+        context_id = param[3];
+        ret = arm_set_cpu_on(mpidr, entry, context_id,
+                             target_el, target_aarch64);
+        break;
+    case QEMU_PSCI_0_1_FN_CPU_OFF:
+    case QEMU_PSCI_0_2_FN_CPU_OFF:
+        hvf_psci_cpu_off(arm_cpu);
+        break;
+    case QEMU_PSCI_0_1_FN_CPU_SUSPEND:
+    case QEMU_PSCI_0_2_FN_CPU_SUSPEND:
+    case QEMU_PSCI_0_2_FN64_CPU_SUSPEND:
+        /* Affinity levels are not supported in QEMU */
+        if (param[1] & 0xfffe0000) {
+            ret = QEMU_PSCI_RET_INVALID_PARAMS;
+            break;
+        }
+        /* Powerdown is not supported, we always go into WFI */
+        env->xregs[0] = 0;
+        hvf_wfi(cpu);
+        break;
+    case QEMU_PSCI_0_1_FN_MIGRATE:
+    case QEMU_PSCI_0_2_FN_MIGRATE:
+        ret = QEMU_PSCI_RET_NOT_SUPPORTED;
+        break;
+    default:
+        return -1;
+    }
+
+    env->xregs[0] = ret;
+    return 0;
+}
+
 static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
 {
     ARMCPU *arm_cpu = ARM_CPU(cpu);
@@ -883,13 +996,19 @@ int hvf_vcpu_exec(CPUState *cpu)
         break;
     case EC_AA64_HVC:
         cpu_synchronize_state(cpu);
-        trace_hvf_unknown_hvf(env->xregs[0]);
-        hvf_raise_exception(env, EXCP_UDEF, syn_uncategorized());
+        if (hvf_handle_psci_call(cpu)) {
+            trace_hvf_unknown_hvf(env->xregs[0]);
+            hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
+        }
         break;
     case EC_AA64_SMC:
         cpu_synchronize_state(cpu);
-        trace_hvf_unknown_smc(env->xregs[0]);
-        hvf_raise_exception(env, EXCP_UDEF, syn_uncategorized());
+        if (!hvf_handle_psci_call(cpu)) {
+            advance_pc = true;
+        } else {
+            trace_hvf_unknown_smc(env->xregs[0]);
+            hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
+        }
         break;
     default:
         cpu_synchronize_state(cpu);
diff --git a/target/arm/hvf/trace-events b/target/arm/hvf/trace-events
index e972bdd9ce..cf4fb68f79 100644
--- a/target/arm/hvf/trace-events
+++ b/target/arm/hvf/trace-events
@@ -8,3 +8,4 @@ hvf_sysreg_write(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_
 hvf_unknown_hvf(uint64_t x0) "unknown HVC! 0x%016"PRIx64
 hvf_unknown_smc(uint64_t x0) "unknown SMC! 0x%016"PRIx64
 hvf_exit(uint64_t syndrome, uint32_t ec, uint64_t pc) "exit: 0x%"PRIx64" [ec=0x%x pc=0x%"PRIx64"]"
+hvf_psci_call(uint64_t x0, uint64_t x1, uint64_t x2, uint64_t x3, uint32_t cpuid) "PSCI Call x0=0x%016"PRIx64" x1=0x%016"PRIx64" x2=0x%016"PRIx64" x3=0x%016"PRIx64" cpu=0x%x"
-- 
2.30.1 (Apple Git-130)



^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v9 08/11] arm: Add Hypervisor.framework build target
  2021-09-12 23:07 [PATCH v9 00/11] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (6 preceding siblings ...)
  2021-09-12 23:07 ` [PATCH v9 07/11] hvf: arm: Implement PSCI handling Alexander Graf
@ 2021-09-12 23:07 ` Alexander Graf
  2021-09-12 23:07 ` [PATCH v9 09/11] hvf: arm: Add rudimentary PMC support Alexander Graf
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 25+ messages in thread
From: Alexander Graf @ 2021-09-12 23:07 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Sergio Lopez,
	Peter Collingbourne, Richard Henderson, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

Now that all the logic we need to handle Hypervisor.framework on Apple
Silicon systems is in place, let's add CONFIG_HVF for aarch64 as well
so that we can build it.
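
A rough build sketch (configure options as of this series; adjust to
your setup):

    $ mkdir build && cd build
    $ ../configure --target-list=aarch64-softmmu --enable-hvf
    $ make -j8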

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
Tested-by: Roman Bolshakov <r.bolshakov@yadro.com> (x86 only)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sergio Lopez <slp@redhat.com>

---

v1 -> v2:

  - Fix build on 32bit arm

v3 -> v4:

  - Remove i386-softmmu target

v6 -> v7:

  - Simplify HVF matching logic in meson build file
---
 meson.build                | 7 +++++++
 target/arm/hvf/meson.build | 3 +++
 target/arm/meson.build     | 2 ++
 3 files changed, 12 insertions(+)
 create mode 100644 target/arm/hvf/meson.build

diff --git a/meson.build b/meson.build
index a3e9b95846..cf91256c9a 100644
--- a/meson.build
+++ b/meson.build
@@ -77,6 +77,13 @@ else
 endif
 
 accelerator_targets = { 'CONFIG_KVM': kvm_targets }
+
+if cpu in ['aarch64']
+  accelerator_targets += {
+    'CONFIG_HVF': ['aarch64-softmmu']
+  }
+endif
+
 if cpu in ['x86', 'x86_64', 'arm', 'aarch64']
  # i386 emulator provides xenpv machine type for multiple architectures
   accelerator_targets += {
diff --git a/target/arm/hvf/meson.build b/target/arm/hvf/meson.build
new file mode 100644
index 0000000000..855e6cce5a
--- /dev/null
+++ b/target/arm/hvf/meson.build
@@ -0,0 +1,3 @@
+arm_softmmu_ss.add(when: [hvf, 'CONFIG_HVF'], if_true: files(
+  'hvf.c',
+))
diff --git a/target/arm/meson.build b/target/arm/meson.build
index 25a02bf276..50f152214a 100644
--- a/target/arm/meson.build
+++ b/target/arm/meson.build
@@ -60,5 +60,7 @@ arm_softmmu_ss.add(files(
   'psci.c',
 ))
 
+subdir('hvf')
+
 target_arch += {'arm': arm_ss}
 target_softmmu_arch += {'arm': arm_softmmu_ss}
-- 
2.30.1 (Apple Git-130)



^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v9 09/11] hvf: arm: Add rudimentary PMC support
  2021-09-12 23:07 [PATCH v9 00/11] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (7 preceding siblings ...)
  2021-09-12 23:07 ` [PATCH v9 08/11] arm: Add Hypervisor.framework build target Alexander Graf
@ 2021-09-12 23:07 ` Alexander Graf
  2021-09-12 23:07 ` [PATCH v9 10/11] arm: tcg: Adhere to SMCCC 1.3 section 5.2 Alexander Graf
  2021-09-12 23:07 ` [PATCH v9 11/11] hvf: arm: " Alexander Graf
  10 siblings, 0 replies; 25+ messages in thread
From: Alexander Graf @ 2021-09-12 23:07 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Sergio Lopez,
	Peter Collingbourne, Richard Henderson, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

We can expose cycle counters on the PMU easily. To be as compatible as
possible, let's do so, but make sure we don't expose any other
architectural counters that we cannot model yet.

This allows OSes that require PMU support to work.
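
As a hypothetical guest-side sketch of what this enables (assumes EL1;
error handling omitted):

    #include <stdint.h>

    /* PMCR_EL0.E (bit 0) enables counting, PMCNTENSET_EL0 bit 31
     * enables the cycle counter PMCCNTR_EL0 itself. */
    static uint64_t read_cycle_counter(void)
    {
        uint64_t cycles;

        asm volatile("msr pmcr_el0, %0" : : "r"(1ull));
        asm volatile("msr pmcntenset_el0, %0" : : "r"(1ull << 31));
        asm volatile("isb; mrs %0, pmccntr_el0" : "=r"(cycles));
        return cycles;
    }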

Signed-off-by: Alexander Graf <agraf@csgraf.de>
---
 target/arm/hvf/hvf.c | 179 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 179 insertions(+)

diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index 20d795366a..b62cfa3976 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -42,6 +42,18 @@
 #define SYSREG_OSLSR_EL1      SYSREG(2, 0, 1, 1, 4)
 #define SYSREG_OSDLR_EL1      SYSREG(2, 0, 1, 3, 4)
 #define SYSREG_CNTPCT_EL0     SYSREG(3, 3, 14, 0, 1)
+#define SYSREG_PMCR_EL0       SYSREG(3, 3, 9, 12, 0)
+#define SYSREG_PMUSERENR_EL0  SYSREG(3, 3, 9, 14, 0)
+#define SYSREG_PMCNTENSET_EL0 SYSREG(3, 3, 9, 12, 1)
+#define SYSREG_PMCNTENCLR_EL0 SYSREG(3, 3, 9, 12, 2)
+#define SYSREG_PMINTENCLR_EL1 SYSREG(3, 0, 9, 14, 2)
+#define SYSREG_PMOVSCLR_EL0   SYSREG(3, 3, 9, 12, 3)
+#define SYSREG_PMSWINC_EL0    SYSREG(3, 3, 9, 12, 4)
+#define SYSREG_PMSELR_EL0     SYSREG(3, 3, 9, 12, 5)
+#define SYSREG_PMCEID0_EL0    SYSREG(3, 3, 9, 12, 6)
+#define SYSREG_PMCEID1_EL0    SYSREG(3, 3, 9, 12, 7)
+#define SYSREG_PMCCNTR_EL0    SYSREG(3, 3, 9, 13, 0)
+#define SYSREG_PMCCFILTR_EL0  SYSREG(3, 3, 14, 15, 7)
 
 #define WFX_IS_WFE (1 << 0)
 
@@ -708,6 +720,40 @@ static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
         val = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) /
               gt_cntfrq_period_ns(arm_cpu);
         break;
+    case SYSREG_PMCR_EL0:
+        val = env->cp15.c9_pmcr;
+        break;
+    case SYSREG_PMCCNTR_EL0:
+        pmu_op_start(env);
+        val = env->cp15.c15_ccnt;
+        pmu_op_finish(env);
+        break;
+    case SYSREG_PMCNTENCLR_EL0:
+        val = env->cp15.c9_pmcnten;
+        break;
+    case SYSREG_PMOVSCLR_EL0:
+        val = env->cp15.c9_pmovsr;
+        break;
+    case SYSREG_PMSELR_EL0:
+        val = env->cp15.c9_pmselr;
+        break;
+    case SYSREG_PMINTENCLR_EL1:
+        val = env->cp15.c9_pminten;
+        break;
+    case SYSREG_PMCCFILTR_EL0:
+        val = env->cp15.pmccfiltr_el0;
+        break;
+    case SYSREG_PMCNTENSET_EL0:
+        val = env->cp15.c9_pmcnten;
+        break;
+    case SYSREG_PMUSERENR_EL0:
+        val = env->cp15.c9_pmuserenr;
+        break;
+    case SYSREG_PMCEID0_EL0:
+    case SYSREG_PMCEID1_EL0:
+        /* We can't really count anything yet, declare all events invalid */
+        val = 0;
+        break;
     case SYSREG_OSLSR_EL1:
         val = env->cp15.oslsr_el1;
         break;
@@ -738,6 +784,82 @@ static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
     return 0;
 }
 
+static void pmu_update_irq(CPUARMState *env)
+{
+    ARMCPU *cpu = env_archcpu(env);
+    qemu_set_irq(cpu->pmu_interrupt, (env->cp15.c9_pmcr & PMCRE) &&
+            (env->cp15.c9_pminten & env->cp15.c9_pmovsr));
+}
+
+static bool pmu_event_supported(uint16_t number)
+{
+    return false;
+}
+
+/* Returns true if the counter (pass 31 for PMCCNTR) should count events using
+ * the current EL, security state, and register configuration.
+ */
+static bool pmu_counter_enabled(CPUARMState *env, uint8_t counter)
+{
+    uint64_t filter;
+    bool enabled, filtered = true;
+    int el = arm_current_el(env);
+
+    enabled = (env->cp15.c9_pmcr & PMCRE) &&
+              (env->cp15.c9_pmcnten & (1 << counter));
+
+    if (counter == 31) {
+        filter = env->cp15.pmccfiltr_el0;
+    } else {
+        filter = env->cp15.c14_pmevtyper[counter];
+    }
+
+    if (el == 0) {
+        filtered = filter & PMXEVTYPER_U;
+    } else if (el == 1) {
+        filtered = filter & PMXEVTYPER_P;
+    }
+
+    if (counter != 31) {
+        /*
+         * If not checking PMCCNTR, ensure the counter is set up for an
+         * event we support
+         */
+        uint16_t event = filter & PMXEVTYPER_EVTCOUNT;
+        if (!pmu_event_supported(event)) {
+            return false;
+        }
+    }
+
+    return enabled && !filtered;
+}
+
+static void pmswinc_write(CPUARMState *env, uint64_t value)
+{
+    unsigned int i;
+    for (i = 0; i < pmu_num_counters(env); i++) {
+        /* Increment a counter's count iff: */
+        if ((value & (1 << i)) && /* counter's bit is set */
+                /* counter is enabled and not filtered */
+                pmu_counter_enabled(env, i) &&
+                /* counter is SW_INCR */
+                (env->cp15.c14_pmevtyper[i] & PMXEVTYPER_EVTCOUNT) == 0x0) {
+            /*
+             * Detect if this write causes an overflow since we can't predict
+             * PMSWINC overflows like we can for other events
+             */
+            uint32_t new_pmswinc = env->cp15.c14_pmevcntr[i] + 1;
+
+            if (env->cp15.c14_pmevcntr[i] & ~new_pmswinc & INT32_MIN) {
+                env->cp15.c9_pmovsr |= (1 << i);
+                pmu_update_irq(env);
+            }
+
+            env->cp15.c14_pmevcntr[i] = new_pmswinc;
+        }
+    }
+}
+
 static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
 {
     ARMCPU *arm_cpu = ARM_CPU(cpu);
@@ -752,6 +874,63 @@ static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
                            val);
 
     switch (reg) {
+    case SYSREG_PMCCNTR_EL0:
+        pmu_op_start(env);
+        env->cp15.c15_ccnt = val;
+        pmu_op_finish(env);
+        break;
+    case SYSREG_PMCR_EL0:
+        pmu_op_start(env);
+
+        if (val & PMCRC) {
+            /* The counter has been reset */
+            env->cp15.c15_ccnt = 0;
+        }
+
+        if (val & PMCRP) {
+            unsigned int i;
+            for (i = 0; i < pmu_num_counters(env); i++) {
+                env->cp15.c14_pmevcntr[i] = 0;
+            }
+        }
+
+        env->cp15.c9_pmcr &= ~PMCR_WRITEABLE_MASK;
+        env->cp15.c9_pmcr |= (val & PMCR_WRITEABLE_MASK);
+
+        pmu_op_finish(env);
+        break;
+    case SYSREG_PMUSERENR_EL0:
+        env->cp15.c9_pmuserenr = val & 0xf;
+        break;
+    case SYSREG_PMCNTENSET_EL0:
+        env->cp15.c9_pmcnten |= (val & pmu_counter_mask(env));
+        break;
+    case SYSREG_PMCNTENCLR_EL0:
+        env->cp15.c9_pmcnten &= ~(val & pmu_counter_mask(env));
+        break;
+    case SYSREG_PMINTENCLR_EL1:
+        pmu_op_start(env);
+        env->cp15.c9_pminten |= val;
+        pmu_op_finish(env);
+        break;
+    case SYSREG_PMOVSCLR_EL0:
+        pmu_op_start(env);
+        env->cp15.c9_pmovsr &= ~val;
+        pmu_op_finish(env);
+        break;
+    case SYSREG_PMSWINC_EL0:
+        pmu_op_start(env);
+        pmswinc_write(env, val);
+        pmu_op_finish(env);
+        break;
+    case SYSREG_PMSELR_EL0:
+        env->cp15.c9_pmselr = val & 0x1f;
+        break;
+    case SYSREG_PMCCFILTR_EL0:
+        pmu_op_start(env);
+        env->cp15.pmccfiltr_el0 = val & PMCCFILTR_EL0;
+        pmu_op_finish(env);
+        break;
     case SYSREG_OSLAR_EL1:
         env->cp15.oslsr_el1 = val & 1;
         break;
-- 
2.30.1 (Apple Git-130)



^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v9 10/11] arm: tcg: Adhere to SMCCC 1.3 section 5.2
  2021-09-12 23:07 [PATCH v9 00/11] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (8 preceding siblings ...)
  2021-09-12 23:07 ` [PATCH v9 09/11] hvf: arm: Add rudimentary PMC support Alexander Graf
@ 2021-09-12 23:07 ` Alexander Graf
  2021-09-13  8:46   ` Peter Maydell
  2021-09-12 23:07 ` [PATCH v9 11/11] hvf: arm: " Alexander Graf
  10 siblings, 1 reply; 25+ messages in thread
From: Alexander Graf @ 2021-09-12 23:07 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Sergio Lopez,
	Peter Collingbourne, Richard Henderson, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

The SMCCC 1.3 spec section 5.2 says

  The Unknown SMC Function Identifier is a sign-extended value of (-1)
  that is returned in the R0, W0 or X0 registers. An implementation must
  return this error code when it receives:

    * An SMC or HVC call with an unknown Function Identifier
    * An SMC or HVC call for a removed Function Identifier
    * An SMC64/HVC64 call from AArch32 state

To comply with these statements, let's always return -1 when we encounter
an unknown HVC or SMC call.
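
In guest-visible terms, the requirement boils down to this (hypothetical
snippet; the function ID is an arbitrary unknown one):

    #include <stdint.h>

    static uint64_t call_unknown_function(void)
    {
        register uint64_t x0 asm("x0") = 0x86001234u; /* unknown ID */

        asm volatile("hvc #0" : "+r"(x0) : : "memory");
        return x0; /* must read back as UINT64_MAX, i.e. -1 */
    }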

Signed-off-by: Alexander Graf <agraf@csgraf.de>

---

v8 -> v9:

  - Remove Windows specifics and just comply with SMCCC spec
---
 target/arm/psci.c | 26 ++------------------------
 1 file changed, 2 insertions(+), 24 deletions(-)

diff --git a/target/arm/psci.c b/target/arm/psci.c
index 6709e28013..bee4aa8825 100644
--- a/target/arm/psci.c
+++ b/target/arm/psci.c
@@ -35,7 +35,6 @@ bool arm_is_psci_call(ARMCPU *cpu, int excp_type)
      * to EL2 or to EL3).
      */
     CPUARMState *env = &cpu->env;
-    uint64_t param = is_a64(env) ? env->xregs[0] : env->regs[0];
 
     switch (excp_type) {
     case EXCP_HVC:
@@ -52,27 +51,7 @@ bool arm_is_psci_call(ARMCPU *cpu, int excp_type)
         return false;
     }
 
-    switch (param) {
-    case QEMU_PSCI_0_2_FN_PSCI_VERSION:
-    case QEMU_PSCI_0_2_FN_MIGRATE_INFO_TYPE:
-    case QEMU_PSCI_0_2_FN_AFFINITY_INFO:
-    case QEMU_PSCI_0_2_FN64_AFFINITY_INFO:
-    case QEMU_PSCI_0_2_FN_SYSTEM_RESET:
-    case QEMU_PSCI_0_2_FN_SYSTEM_OFF:
-    case QEMU_PSCI_0_1_FN_CPU_ON:
-    case QEMU_PSCI_0_2_FN_CPU_ON:
-    case QEMU_PSCI_0_2_FN64_CPU_ON:
-    case QEMU_PSCI_0_1_FN_CPU_OFF:
-    case QEMU_PSCI_0_2_FN_CPU_OFF:
-    case QEMU_PSCI_0_1_FN_CPU_SUSPEND:
-    case QEMU_PSCI_0_2_FN_CPU_SUSPEND:
-    case QEMU_PSCI_0_2_FN64_CPU_SUSPEND:
-    case QEMU_PSCI_0_1_FN_MIGRATE:
-    case QEMU_PSCI_0_2_FN_MIGRATE:
-        return true;
-    default:
-        return false;
-    }
+    return true;
 }
 
 void arm_handle_psci_call(ARMCPU *cpu)
@@ -194,10 +173,9 @@ void arm_handle_psci_call(ARMCPU *cpu)
         break;
     case QEMU_PSCI_0_1_FN_MIGRATE:
     case QEMU_PSCI_0_2_FN_MIGRATE:
+    default:
         ret = QEMU_PSCI_RET_NOT_SUPPORTED;
         break;
-    default:
-        g_assert_not_reached();
     }
 
 err:
-- 
2.30.1 (Apple Git-130)



^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v9 11/11] hvf: arm: Adhere to SMCCC 1.3 section 5.2
  2021-09-12 23:07 [PATCH v9 00/11] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (9 preceding siblings ...)
  2021-09-12 23:07 ` [PATCH v9 10/11] arm: tcg: Adhere to SMCCC 1.3 section 5.2 Alexander Graf
@ 2021-09-12 23:07 ` Alexander Graf
  2021-09-13  8:52   ` Peter Maydell
  10 siblings, 1 reply; 25+ messages in thread
From: Alexander Graf @ 2021-09-12 23:07 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Sergio Lopez,
	Peter Collingbourne, Richard Henderson, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

The SMCCC 1.3 spec section 5.2 says

  The Unknown SMC Function Identifier is a sign-extended value of (-1)
  that is returned in the R0, W0 or X0 registers. An implementation must
  return this error code when it receives:

    * An SMC or HVC call with an unknown Function Identifier
    * An SMC or HVC call for a removed Function Identifier
    * An SMC64/HVC64 call from AArch32 state

To comply with these statements, let's always return -1 when we encounter
an unknown HVC or SMC call.

Signed-off-by: Alexander Graf <agraf@csgraf.de>

---

v7 -> v8:

  - fix checkpatch

v8 -> v9:

  - Remove Windows specifics and just comply with SMCCC spec
---
 target/arm/hvf/hvf.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index b62cfa3976..6a7ccfa91e 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -1177,7 +1177,8 @@ int hvf_vcpu_exec(CPUState *cpu)
         cpu_synchronize_state(cpu);
         if (hvf_handle_psci_call(cpu)) {
             trace_hvf_unknown_hvf(env->xregs[0]);
-            hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
+            /* SMCCC 1.3 section 5.2 says every unknown HVC call returns -1 */
+            env->xregs[0] = -1;
         }
         break;
     case EC_AA64_SMC:
@@ -1186,7 +1187,9 @@ int hvf_vcpu_exec(CPUState *cpu)
             advance_pc = true;
         } else {
             trace_hvf_unknown_smc(env->xregs[0]);
-            hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
+            /* SMCCC 1.3 section 5.2 says every unknown SMC call returns -1 */
+            env->xregs[0] = -1;
+            advance_pc = true;
         }
         break;
     default:
-- 
2.30.1 (Apple Git-130)



^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [PATCH v9 10/11] arm: tcg: Adhere to SMCCC 1.3 section 5.2
  2021-09-12 23:07 ` [PATCH v9 10/11] arm: tcg: Adhere to SMCCC 1.3 section 5.2 Alexander Graf
@ 2021-09-13  8:46   ` Peter Maydell
  0 siblings, 0 replies; 25+ messages in thread
From: Peter Maydell @ 2021-09-13  8:46 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Eduardo Habkost, Sergio Lopez, Peter Collingbourne,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

On Mon, 13 Sept 2021 at 00:08, Alexander Graf <agraf@csgraf.de> wrote:
>
> The SMCCC 1.3 spec section 5.2 says
>
>   The Unknown SMC Function Identifier is a sign-extended value of (-1)
>   that is returned in the R0, W0 or X0 registers. An implementation must
>   return this error code when it receives:
>
>     * An SMC or HVC call with an unknown Function Identifier
>     * An SMC or HVC call for a removed Function Identifier
>     * An SMC64/HVC64 call from AArch32 state
>
> To comply with these statements, let's always return -1 when we encounter
> an unknown HVC or SMC call.
>
> Signed-off-by: Alexander Graf <agraf@csgraf.de>

Thanks for tracking down the spec requirements.

I agree with the code changes, but the comment at the top of
arm_is_psci_call() also needs to be updated, as it currently
says that we check r0/x0.

-- PMM


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v9 01/11] arm: Move PMC register definitions to cpu.h
  2021-09-12 23:07 ` [PATCH v9 01/11] arm: Move PMC register definitions to cpu.h Alexander Graf
@ 2021-09-13  8:49   ` Peter Maydell
  0 siblings, 0 replies; 25+ messages in thread
From: Peter Maydell @ 2021-09-13  8:49 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Eduardo Habkost, Sergio Lopez, Peter Collingbourne,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

On Mon, 13 Sept 2021 at 00:07, Alexander Graf <agraf@csgraf.de> wrote:
>
> We will need PMC register definitions in accel specific code later.
> Move all constant definitions to common arm headers so we can reuse
> them.
>
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> ---
>  target/arm/cpu.h    | 44 ++++++++++++++++++++++++++++++++++++++++++++
>  target/arm/helper.c | 44 --------------------------------------------
>  2 files changed, 44 insertions(+), 44 deletions(-)

Do these need to be in cpu.h, or would target/arm/internals.h
be good enough? (Lots of files all over the codebase include
cpu.h, so if the only users of these defines and functions are
in target/arm, internals.h is better.)

-- PMM


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v9 11/11] hvf: arm: Adhere to SMCCC 1.3 section 5.2
  2021-09-12 23:07 ` [PATCH v9 11/11] hvf: arm: " Alexander Graf
@ 2021-09-13  8:52   ` Peter Maydell
  0 siblings, 0 replies; 25+ messages in thread
From: Peter Maydell @ 2021-09-13  8:52 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Eduardo Habkost, Sergio Lopez, Peter Collingbourne,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

On Mon, 13 Sept 2021 at 00:08, Alexander Graf <agraf@csgraf.de> wrote:
>
> The SMCCC 1.3 spec section 5.2 says
>
>   The Unknown SMC Function Identifier is a sign-extended value of (-1)
>   that is returned in the R0, W0 or X0 registers. An implementation must
>   return this error code when it receives:
>
>     * An SMC or HVC call with an unknown Function Identifier
>     * An SMC or HVC call for a removed Function Identifier
>     * An SMC64/HVC64 call from AArch32 state
>
> To comply with these statements, let's always return -1 when we encounter
> an unknown HVC or SMC call.
>
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
>
> ---
>
> v7 -> v8:
>
>   - fix checkpatch
>
> v8 -> v9:
>
>   - Remove Windows specifics and just comply with SMCCC spec
> ---
>  target/arm/hvf/hvf.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
> index b62cfa3976..6a7ccfa91e 100644
> --- a/target/arm/hvf/hvf.c
> +++ b/target/arm/hvf/hvf.c
> @@ -1177,7 +1177,8 @@ int hvf_vcpu_exec(CPUState *cpu)
>          cpu_synchronize_state(cpu);
>          if (hvf_handle_psci_call(cpu)) {
>              trace_hvf_unknown_hvf(env->xregs[0]);
> -            hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
> +            /* SMCCC 1.3 section 5.2 says every unknown HVC call returns -1 */
> +            env->xregs[0] = -1;
>          }
>          break;
>      case EC_AA64_SMC:
> @@ -1186,7 +1187,9 @@ int hvf_vcpu_exec(CPUState *cpu)
>              advance_pc = true;
>          } else {
>              trace_hvf_unknown_smc(env->xregs[0]);
> -            hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
> +            /* SMCCC 1.3 section 5.2 says every unknown SMC call returns -1 */
> +            env->xregs[0] = -1;
> +            advance_pc = true;
>          }
>          break;
>      default:

This should be squashed into whatever earlier patch added this code.

thanks
-- PMM


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v9 07/11] hvf: arm: Implement PSCI handling
  2021-09-12 23:07 ` [PATCH v9 07/11] hvf: arm: Implement PSCI handling Alexander Graf
@ 2021-09-13  8:54   ` Peter Maydell
  2021-09-13 11:07     ` Alexander Graf
  0 siblings, 1 reply; 25+ messages in thread
From: Peter Maydell @ 2021-09-13  8:54 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Eduardo Habkost, Sergio Lopez, Peter Collingbourne,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

On Mon, 13 Sept 2021 at 00:08, Alexander Graf <agraf@csgraf.de> wrote:
>
> We need to handle PSCI calls. Most of the TCG code works for us,
> but we can simplify it to handle only AArch64 mode, and we need to
> handle SUSPEND differently.
>
> This patch takes the TCG code as a template and duplicates it in HVF.
>
> To tell the guest that we support PSCI 0.2 now, update the check in
> arm_cpu_initfn() as well.
>
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> Reviewed-by: Sergio Lopez <slp@redhat.com>
>
> ---
>
> v6 -> v7:
>
>   - This patch integrates "arm: Set PSCI to 0.2 for HVF"
>
> v7 -> v8:
>
>   - Do not advance for HVC, PC is already updated by hvf
>   - Fix checkpatch error
>
> v8 -> v9:
>
>   - Use new hvf_raise_exception() prototype
>   - Make cpu_off function void
>   - Add comment about return value, use -1 for "not found"
>   - Remove cpu_synchronize_state() when halted
> ---
>  target/arm/cpu.c            |   4 +-
>  target/arm/hvf/hvf.c        | 127 ++++++++++++++++++++++++++++++++++--
>  target/arm/hvf/trace-events |   1 +
>  3 files changed, 126 insertions(+), 6 deletions(-)

Something in here should be checking whether the insn the guest
used matches the PSCI conduit configured for the VM, ie
what arm_is_psci_call() does after your patch 10.

-- PMM


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v9 06/11] hvf: arm: Implement -cpu host
  2021-09-12 23:07 ` [PATCH v9 06/11] hvf: arm: Implement -cpu host Alexander Graf
@ 2021-09-13  8:54   ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 25+ messages in thread
From: Philippe Mathieu-Daudé @ 2021-09-13  8:54 UTC (permalink / raw)
  To: Alexander Graf, QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Sergio Lopez, Richard Henderson,
	Cameron Esfahani, Roman Bolshakov, qemu-arm, Frank Yang,
	Paolo Bonzini, Peter Collingbourne

On 9/13/21 1:07 AM, Alexander Graf wrote:
> Now that we have working system register sync, we push more target CPU
> properties into the virtual machine. That might be useful in some
> situations, but it is not what users typically want.
> 
> So let's add a -cpu host option that allows them to explicitly pass all
> CPU capabilities of their host CPU into the guest.
> 
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
> Reviewed-by: Sergio Lopez <slp@redhat.com>

> ---
>  target/arm/cpu.c     |  9 ++++--
>  target/arm/cpu.h     |  2 ++
>  target/arm/hvf/hvf.c | 76 ++++++++++++++++++++++++++++++++++++++++++++
>  target/arm/hvf_arm.h | 19 +++++++++++
>  target/arm/kvm_arm.h |  2 --
>  5 files changed, 104 insertions(+), 4 deletions(-)
>  create mode 100644 target/arm/hvf_arm.h
> 
> diff --git a/target/arm/cpu.c b/target/arm/cpu.c

> @@ -2058,15 +2059,19 @@ static void arm_cpu_class_init(ObjectClass *oc, void *data)
>  #endif /* CONFIG_TCG */
>  }
>  
> -#ifdef CONFIG_KVM
> +#if defined(CONFIG_KVM) || defined(CONFIG_HVF)
>  static void arm_host_initfn(Object *obj)
>  {
>      ARMCPU *cpu = ARM_CPU(obj);
>  
> +#ifdef CONFIG_KVM
>      kvm_arm_set_cpu_features_from_host(cpu);
>      if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
>          aarch64_add_sve_properties(obj);
>      }
> +#else
> +    hvf_arm_set_cpu_features_from_host(cpu);
> +#endif

Could be cleaner as ARMCPUClass::set_cpu_features_from_host()?
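
A hypothetical sketch of that shape (field name and wiring invented
here):

    /* in cpu-qom.h */
    struct ARMCPUClass {
        CPUClass parent_class;
        /* ... */
        void (*set_cpu_features_from_host)(ARMCPU *cpu); /* per accel */
    };

    /* arm_host_initfn() would then reduce to: */
    static void arm_host_initfn(Object *obj)
    {
        ARMCPU *cpu = ARM_CPU(obj);

        ARM_CPU_GET_CLASS(cpu)->set_cpu_features_from_host(cpu);
        arm_cpu_post_init(obj);
    }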



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v9 07/11] hvf: arm: Implement PSCI handling
  2021-09-13  8:54   ` Peter Maydell
@ 2021-09-13 11:07     ` Alexander Graf
  2021-09-13 11:44       ` Peter Maydell
  0 siblings, 1 reply; 25+ messages in thread
From: Alexander Graf @ 2021-09-13 11:07 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Marc Zyngier, Eduardo Habkost, Sergio Lopez,
	Philippe Mathieu-Daudé,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Peter Collingbourne


On 13.09.21 10:54, Peter Maydell wrote:
> On Mon, 13 Sept 2021 at 00:08, Alexander Graf <agraf@csgraf.de> wrote:
>> We need to handle PSCI calls. Most of the TCG code works for us,
>> but we can simplify it to handle only AArch64 mode, and we need to
>> handle SUSPEND differently.
>>
>> This patch takes the TCG code as a template and duplicates it in HVF.
>>
>> To tell the guest that we support PSCI 0.2 now, update the check in
>> arm_cpu_initfn() as well.
>>
>> Signed-off-by: Alexander Graf <agraf@csgraf.de>
>> Reviewed-by: Sergio Lopez <slp@redhat.com>
>>
>> ---
>>
>> v6 -> v7:
>>
>>   - This patch integrates "arm: Set PSCI to 0.2 for HVF"
>>
>> v7 -> v8:
>>
>>   - Do not advance for HVC, PC is already updated by hvf
>>   - Fix checkpatch error
>>
>> v8 -> v9:
>>
>>   - Use new hvf_raise_exception() prototype
>>   - Make cpu_off function void
>>   - Add comment about return value, use -1 for "not found"
>>   - Remove cpu_synchronize_state() when halted
>> ---
>>  target/arm/cpu.c            |   4 +-
>>  target/arm/hvf/hvf.c        | 127 ++++++++++++++++++++++++++++++++++--
>>  target/arm/hvf/trace-events |   1 +
>>  3 files changed, 126 insertions(+), 6 deletions(-)
> Something in here should be checking whether the insn the guest
> used matches the PSCI conduit configured for the VM, ie
> what arm_is_psci_call() does after your patch 10.


It's yet another case where I believe we are both reading the spec
differently :)

  https://documentation-service.arm.com/static/6013e5faeee5236980d08619

Section 2.5.3 speaks about the conduits. It says

    Service calls are expected to be invoked through SMC instructions,
    except for Standard Hypervisor Calls and Vendor Specific Hypervisor
    Calls. On
    some platforms, however, SMC instructions are not available, and the
    services can be accessed through HVC instructions. The method that
    is used to invoke the service is referred to as the conduit.

To me, that reads like "Use SMC whenever you can. If your hardware does
not give you a way to handle SMC calls, falling back to HVC is ok. In
that case, indicate that mandate to the OS".

In hvf, we can very easily trap for SMC calls and handle them. Why are
we making OSs implement HVC call paths when SMC would work just as well
for everyone?

To keep your train of thought though, what would you do if we encounter
a conduit that is different from the chosen one? Today, I am aware of 2
different implementations: TCG injects #UD [1] while KVM sets x0 to -1 [2].

IMHO the best way to resolve all of this mess is to consolidate to SMC
as default PSCI handler and for now treat HVC as if it was an SMC call
as well for virtual environments. Once we get nested virtualization, we
will need to move to SMC as default anyway.


Alex

[1]
https://git.qemu.org/?p=qemu.git;a=blob;f=target/arm/op_helper.c;hb=HEAD#l813
[2]
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/arm64/kvm/handle_exit.c#n52



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v9 07/11] hvf: arm: Implement PSCI handling
  2021-09-13 11:07     ` Alexander Graf
@ 2021-09-13 11:44       ` Peter Maydell
  2021-09-13 12:02         ` Alexander Graf
  0 siblings, 1 reply; 25+ messages in thread
From: Peter Maydell @ 2021-09-13 11:44 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Marc Zyngier, Eduardo Habkost, Sergio Lopez,
	Philippe Mathieu-Daudé,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Peter Collingbourne

On Mon, 13 Sept 2021 at 12:07, Alexander Graf <agraf@csgraf.de> wrote:
>
>
> On 13.09.21 10:54, Peter Maydell wrote:
> > Something in here should be checking whether the insn the guest
> > used matches the PSCI conduit configured for the VM, ie
> > what arm_is_psci_call() does after your patch 10.
>
>
> It's yet another case where I believe we are both reading the spec
> differently :)
>
>   https://documentation-service.arm.com/static/6013e5faeee5236980d08619
>
> Section 2.5.3 speaks about the conduits. It says
>
>     Service calls are expected to be invoked through SMC instructions,
>     except for Standard Hypervisor Calls and Vendor Specific Hypervisor
>     Calls. On
>     some platforms, however, SMC instructions are not available, and the
>     services can be accessed through HVC instructions. The method that
>     is used to invoke the service is referred to as the conduit.
>
> To me, that reads like "Use SMC whenever you can. If your hardware does
> not give you a way to handle SMC calls, falling back to HVC is ok. In
> that case, indicate that mandate to the OS".

QEMU here is being the platform, so we define what the conduit is
(or if one even exists). For the virt board this is "if the
guest has EL3 firmware, then the guest firmware is providing PSCI,
and QEMU should not; otherwise if the guest has EL2 then QEMU's
emulated firmware should be at EL3 using SMC, otherwise use HVC".

(So in practice for hvf at the moment this will mean the conduit
is always HVC, since hvf doesn't allow EL3 or EL2 in the guest.)

> In hvf, we can very easily trap for SMC calls and handle them. Why are
> we making OSs implement HVC call paths when SMC would work just as well
> for everyone?

OSes have to handle both anyway, because on real hardware if
there is no EL3 then it is IMPDEF whether SMC is trappable
to the hypervisor or whether it just UNDEFs to EL1.

> To keep your train of thought though, what would you do if we encounter
> a conduit that is different from the chosen one? Today, I am aware of 2
> different implementations: TCG injects #UD [1] while KVM sets x0 to -1 [2].

If the SMC or HVC insn isn't being used for PSCI then it should
have its standard architectural behaviour.

-- PMM


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v9 07/11] hvf: arm: Implement PSCI handling
  2021-09-13 11:44       ` Peter Maydell
@ 2021-09-13 12:02         ` Alexander Graf
  2021-09-13 12:30           ` Peter Maydell
  0 siblings, 1 reply; 25+ messages in thread
From: Alexander Graf @ 2021-09-13 12:02 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Eduardo Habkost, Sergio Lopez, Peter Collingbourne, Marc Zyngier,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé


On 13.09.21 13:44, Peter Maydell wrote:
> On Mon, 13 Sept 2021 at 12:07, Alexander Graf <agraf@csgraf.de> wrote:
>>
>> On 13.09.21 10:54, Peter Maydell wrote:
>>> Something in here should be checking whether the insn the guest
>>> used matches the PSCI conduit configured for the VM, ie
>>> what arm_is_psci_call() does after your patch 10.
>>
>> It's yet another case where I believe we are both reading the spec
>> differently :)
>>
>>   https://documentation-service.arm.com/static/6013e5faeee5236980d08619
>>
>> Section 2.5.3 speaks about the conduits. It says
>>
>>     Service calls are expected to be invoked through SMC instructions,
>>     except for Standard Hypervisor Calls and Vendor Specific Hypervisor
>>     Calls. On
>>     some platforms, however, SMC instructions are not available, and the
>>     services can be accessed through HVC instructions. The method that
>>     is used to invoke the service is referred to as the conduit.
>>
>> To me, that reads like "Use SMC whenever you can. If your hardware does
>> not give you a way to handle SMC calls, falling back to HVC is ok. In
>> that case, indicate that mandate to the OS".
> QEMU here is being the platform, so we define what the conduit is
> (or if one even exists). For the virt board this is "if the
> guest has EL3 firmware, then the guest firmware is providing PSCI,
> and QEMU should not; otherwise if the guest has EL2 then QEMU's
> emulated firmware should be at EL3 using SMC, otherwise use HVC".
>
> (So in practice for hvf at the moment this will mean the conduit
> is always HVC, since hvf doesn't allow EL3 or EL2 in the guest.)
>
>> In hvf, we can very easily trap for SMC calls and handle them. Why are
>> we making OSs implement HVC call paths when SMC would work just as well
>> for everyone?
> OSes have to handle both anyway, because on real hardware if
> there is no EL3 then it is IMPDEF whether SMC is trappable
> to the hypervisor or whether it just UNDEFs to EL1.
>
>> To keep your train of thought though, what would you do if we encounter
>> a conduit that is different from the chosen one? Today, I am aware of 2
>> different implementations: TCG injects #UD [1] while KVM sets x0 to -1 [2].
> If the SMC or HVC insn isn't being used for PSCI then it should
> have its standard architectural behaviour.


Why? Also, why does KVM behave differently? And why does Windows rely on
SMC availability on boot?

If you really insist that you don't care about users running Windows
with TCG and EL2=0, so be it. At least you can enable EL2 and it works
then. But I can't on hvf. It's one of the most useful use cases for hvf
on QEMU and I won't break it just because you insist that "SMC behavior
is IMPDEF, so it must be UNDEF". If it's IMPDEF, it may as well be "set
x0 to -1 and add 4 to pc".

And yes, this is a hill I will die on :)


Alex



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v9 07/11] hvf: arm: Implement PSCI handling
  2021-09-13 12:02         ` Alexander Graf
@ 2021-09-13 12:30           ` Peter Maydell
  2021-09-13 21:29             ` Alexander Graf
  2021-09-15  9:46             ` Marc Zyngier
  0 siblings, 2 replies; 25+ messages in thread
From: Peter Maydell @ 2021-09-13 12:30 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Eduardo Habkost, Sergio Lopez, Peter Collingbourne, Marc Zyngier,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

On Mon, 13 Sept 2021 at 13:02, Alexander Graf <agraf@csgraf.de> wrote:
>
>
> On 13.09.21 13:44, Peter Maydell wrote:
> > On Mon, 13 Sept 2021 at 12:07, Alexander Graf <agraf@csgraf.de> wrote:
> >> To keep your train of thought though, what would you do if we encounter
> >> a conduit that is different from the chosen one? Today, I am aware of 2
> >> different implementations: TCG injects #UD [1] while KVM sets x0 to -1 [2].
> > If the SMC or HVC insn isn't being used for PSCI then it should
> > have its standard architectural behaviour.
>
> Why?

QEMU's assumption here is that there are basically two scenarios
for these instructions:
 (1) we're providing an emulation of firmware that uses this
     instruction (and only this insn, not the other one) to
     provide PSCI services
 (2) we're not emulating any firmware at all, we're running it
     in the guest, and that guest firmware is providing PSCI

In case (1) we provide a PSCI ABI on the end of the insn.
In case (2) we provide the architectural behaviour for the insn
so that the guest firmware can use it.

We don't currently have
 (3) we're providing an emulation of firmware that does something
     other than providing PSCI services on this instruction

which is what I think you're asking for. (Alternatively, you might
be after "provide PSCI via SMC, not HVC", ie use a different conduit.
If hvf documents that SMC is guaranteed to trap that would be
possible, I guess.)
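
For reference, this is roughly how the virt board encodes that rule
today -- paraphrased from hw/arm/virt.c's machvirt_init(), so treat
the exact field names as approximate:

    if (vms->secure && firmware_loaded) {
        /* guest firmware at EL3 provides PSCI; QEMU stays out of it */
        vms->psci_conduit = QEMU_PSCI_CONDUIT_DISABLED;
    } else if (vms->virt) {
        /* guest has EL2: QEMU's emulated firmware "at EL3" uses SMC */
        vms->psci_conduit = QEMU_PSCI_CONDUIT_SMC;
    } else {
        /* EL1-only guest: HVC is the conduit */
        vms->psci_conduit = QEMU_PSCI_CONDUIT_HVC;
    }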

> Also, why does KVM behave differently?

Looks like Marc made KVM set x0 to -1 for SMC calls in kernel commit
c0938c72f8070aa; conveniently he's on the cc list here so we can
ask him :-)
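
For reference, the handler in question is tiny. Quoting today's
arch/arm64/kvm/handle_exit.c from memory, so double-check the tree:

    static int handle_smc(struct kvm_vcpu *vcpu)
    {
        /*
         * An SMC trapped via HCR_EL2.TSC leaves the PC pointing at
         * the SMC itself, so return NOT_SUPPORTED in x0 and step
         * over the instruction.
         */
        vcpu_set_reg(vcpu, 0, ~0UL);
        kvm_incr_pc(vcpu);
        return 1;
    }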

> And why does Windows rely on
> SMC availability on boot?

Ask Microsoft, but probably either they don't realize that SMC might
not exist or might not be trappable, or they only have a limited set
of hosts they care about. CPUs with no EL3 are not that common.

> If you really insist that you don't care about users running Windows
> with TCG and EL2=0, so be it. At least you can enable EL2 and it works
> then. But I can't on hvf. It's one of the most useful use cases for hvf
> on QEMU and I won't break it just because you insist that "SMC behavior
> is IMPDEF, so it must be UNDEF". If it's IMPDEF, it may as well be "set
> x0 to -1 and add 4 to pc".

I am not putting in random hacks for the benefit of specific guest OSes.
If there's a good reason why QEMU's behaviour is wrong then we can change
it, but "I want Windows to boot" doesn't count.

thanks
-- PMM



* Re: [PATCH v9 07/11] hvf: arm: Implement PSCI handling
  2021-09-13 12:30           ` Peter Maydell
@ 2021-09-13 21:29             ` Alexander Graf
  2021-09-15  9:46             ` Marc Zyngier
  1 sibling, 0 replies; 25+ messages in thread
From: Alexander Graf @ 2021-09-13 21:29 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Eduardo Habkost, Sergio Lopez, Marc Zyngier, Richard Henderson,
	QEMU Developers, Cameron Esfahani, Philippe Mathieu-Daudé,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Peter Collingbourne


On 13.09.21 14:30, Peter Maydell wrote:
> On Mon, 13 Sept 2021 at 13:02, Alexander Graf <agraf@csgraf.de> wrote:
>>
>> On 13.09.21 13:44, Peter Maydell wrote:
>>> On Mon, 13 Sept 2021 at 12:07, Alexander Graf <agraf@csgraf.de> wrote:
>>>> To keep your train of thought though, what would you do if we encounter
>>>> a conduit that is different from the chosen one? Today, I am aware of 2
>>>> different implementations: TCG injects #UD [1] while KVM sets x0 to -1 [2].
>>> If the SMC or HVC insn isn't being used for PSCI then it should
>>> have its standard architectural behaviour.
>> Why?
> QEMU's assumption here is that there are basically two scenarios
> for these instructions:
>  (1) we're providing an emulation of firmware that uses this
>      instruction (and only this insn, not the other one) to
>      provide PSCI services
>  (2) we're not emulating any firmware at all, we're running it
>      in the guest, and that guest firmware is providing PSCI
>
> In case (1) we provide a PSCI ABI on the end of the insn.
> In case (2) we provide the architectural behaviour for the insn
> so that the guest firmware can use it.
>
> We don't currently have
>  (3) we're providing an emulation of firmware that does something
>      other than providing PSCI services on this instruction
>
> which is what I think you're asking for. (Alternatively, you might
> be after "provide PSCI via SMC, not HVC", ie use a different conduit.
> If hvf documents that SMC is guaranteed to trap that would be
> possible, I guess.)


Hvf doesn't document anything. The only documentation it has is its C
headers.

However, the M1 does not implement EL3, but it does trap SMC calls. It's
the only chip Apple has out for hvf on ARM today. I would be very
surprised if they started to regress on that functionality.

So, would you be open to changing the default conduit to SMC for
hvf_enabled()? Is that really a better experience than just modeling
behavior after KVM?
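
Concretely, I'm thinking of something like this in machvirt_init()
(an untested sketch on top of the existing selection logic; the only
new bit is the hvf_enabled() arm):

    if (vms->secure && firmware_loaded) {
        vms->psci_conduit = QEMU_PSCI_CONDUIT_DISABLED;
    } else if (hvf_enabled()) {
        /* hvf guests have neither EL3 nor EL2, but the M1 traps SMC.
         * This assumes hvf guarantees the trap, which is exactly the
         * open question above. */
        vms->psci_conduit = QEMU_PSCI_CONDUIT_SMC;
    } else if (vms->virt) {
        vms->psci_conduit = QEMU_PSCI_CONDUIT_SMC;
    } else {
        vms->psci_conduit = QEMU_PSCI_CONDUIT_HVC;
    }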


>
>> Also, why does KVM behave differently?
> Looks like Marc made KVM set x0 to -1 for SMC calls in kernel commit
> c0938c72f8070aa; conveniently he's on the cc list here so we can
> ask him :-)
>
>> And why does Windows rely on
>> SMC availability on boot?
> Ask Microsoft, but probably either they don't realize that SMC might
> not exist or might not be trappable, or they only have a limited set
> of hosts they care about. CPUs with no EL3 are not that common.


I'm pretty sure it's the latter :).


>
>> If you really insist that you don't care about users running Windows
>> with TCG and EL2=0, so be it. At least you can enable EL2 and it works
>> then. But I can't on hvf. It's one of the most useful use cases for hvf
>> on QEMU and I won't break it just because you insist that "SMC behavior
>> is IMPDEF, so it must be UNDEF". If it's IMPDEF, it may as well be "set
>> x0 to -1 and add 4 to pc".
> I am not putting in random hacks for the benefit of specific guest OSes.
> If there's a good reason why QEMU's behaviour is wrong then we can change
> it, but "I want Windows to boot" doesn't count.


OK, so today we have two implementations for SMC traps in an EL0/1-only VM:

  * TCG injects #UD
  * KVM sets x0 = -1 and pc += 4.

With v10 of the HVF patch set, I'm following what KVM is doing. Can we
leave it at that for now and sort out with Marc (and maybe ARM spec
writers) what we want to do consistently across all implementations as a
follow-up?
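
In code terms, the v10 behaviour is roughly the following in the hvf
exit handler (a sketch, not the literal patch; arm_is_psci_call() and
arm_handle_psci_call() are the existing QEMU PSCI helpers):

    case EC_AA64_SMC:
        cpu_synchronize_state(cpu);
        if (arm_is_psci_call(arm_cpu, EXCP_SMC)) {
            arm_handle_psci_call(arm_cpu);
        } else {
            /* not PSCI: mirror KVM and return NOT_SUPPORTED */
            env->xregs[0] = -1;
        }
        /* skip over the trapped SMC instruction, again like KVM */
        advance_pc = true;
        break;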


Thanks,

Alex




* Re: [PATCH v9 07/11] hvf: arm: Implement PSCI handling
  2021-09-13 12:30           ` Peter Maydell
  2021-09-13 21:29             ` Alexander Graf
@ 2021-09-15  9:46             ` Marc Zyngier
  2021-09-15 10:58               ` Alexander Graf
  1 sibling, 1 reply; 25+ messages in thread
From: Marc Zyngier @ 2021-09-15  9:46 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Eduardo Habkost, Sergio Lopez, Peter Collingbourne,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, Alexander Graf, qemu-arm, Frank Yang,
	Paolo Bonzini, Philippe Mathieu-Daudé

On Mon, 13 Sep 2021 13:30:57 +0100,
Peter Maydell <peter.maydell@linaro.org> wrote:
> 
> On Mon, 13 Sept 2021 at 13:02, Alexander Graf <agraf@csgraf.de> wrote:
> >
> >
> > On 13.09.21 13:44, Peter Maydell wrote:
> > > On Mon, 13 Sept 2021 at 12:07, Alexander Graf <agraf@csgraf.de> wrote:
> > >> To keep your train of thought though, what would you do if we encounter
> > >> a conduit that is different from the chosen one? Today, I am aware of 2
> > >> different implementations: TCG injects #UD [1] while KVM sets x0 to -1 [2].
> > > If the SMC or HVC insn isn't being used for PSCI then it should
> > > have its standard architectural behaviour.
> >
> > Why?
> 
> QEMU's assumption here is that there are basically two scenarios
> for these instructions:
>  (1) we're providing an emulation of firmware that uses this
>      instruction (and only this insn, not the other one) to
>      provide PSCI services
>  (2) we're not emulating any firmware at all, we're running it
>      in the guest, and that guest firmware is providing PSCI
> 
> In case (1) we provide a PSCI ABI on the end of the insn.
> In case (2) we provide the architectural behaviour for the insn
> so that the guest firmware can use it.
> 
> We don't currently have
>  (3) we're providing an emulation of firmware that does something
>      other than providing PSCI services on this instruction
> 
> which is what I think you're asking for. (Alternatively, you might
> be after "provide PSCI via SMC, not HVC", ie use a different conduit.
> If hvf documents that SMC is guaranteed to trap that would be
> possible, I guess.)
> 
> > Also, why does KVM behave differently?
> 
> Looks like Marc made KVM set x0 to -1 for SMC calls in kernel commit
> c0938c72f8070aa; conveniently he's on the cc list here so we can
> ask him :-)

If we get an SMC trap into KVM, that's because the HW knows about it,
so injecting an UNDEF is rather counterproductive (we don't hide the
fact that EL3 actually exists).

However, we don't implement anything on the back of this instruction,
so we just return NOT_IMPLEMENTED (-1). With NV, we actually use it,
as a guest hypervisor can use it for PSCI and SMC is guaranteed to
trap even if EL3 doesn't exist in the HW.

For the brain-damaged case where there is no EL3, SMC traps and the
hypervisor doesn't actually advertise EL3, that's likely a guest
bug. Tough luck.

Side note: Not sure what HVF does, but on the M1 running Linux, SMC
appears to trap to EL2 with EC=0x3f, which is a reserved exception
class. This of course results in an UNDEF being injected because as
far as KVM is concerned, this should never happen.

	M.

-- 
Without deviation from the norm, progress is not possible.



* Re: [PATCH v9 07/11] hvf: arm: Implement PSCI handling
  2021-09-15  9:46             ` Marc Zyngier
@ 2021-09-15 10:58               ` Alexander Graf
  2021-09-15 15:07                 ` Marc Zyngier
  0 siblings, 1 reply; 25+ messages in thread
From: Alexander Graf @ 2021-09-15 10:58 UTC (permalink / raw)
  To: Marc Zyngier, Peter Maydell
  Cc: Eduardo Habkost, Sergio Lopez, Peter Collingbourne,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé


On 15.09.21 11:46, Marc Zyngier wrote:
> On Mon, 13 Sep 2021 13:30:57 +0100,
> Peter Maydell <peter.maydell@linaro.org> wrote:
>> On Mon, 13 Sept 2021 at 13:02, Alexander Graf <agraf@csgraf.de> wrote:
>>>
>>> On 13.09.21 13:44, Peter Maydell wrote:
>>>> On Mon, 13 Sept 2021 at 12:07, Alexander Graf <agraf@csgraf.de> wrote:
>>>>> To keep your train of thought though, what would you do if we encounter
>>>>> a conduit that is different from the chosen one? Today, I am aware of 2
>>>>> different implementations: TCG injects #UD [1] while KVM sets x0 to -1 [2].
>>>> If the SMC or HVC insn isn't being used for PSCI then it should
>>>> have its standard architectural behaviour.
>>> Why?
>> QEMU's assumption here is that there are basically two scenarios
>> for these instructions:
>>  (1) we're providing an emulation of firmware that uses this
>>      instruction (and only this insn, not the other one) to
>>      provide PSCI services
>>  (2) we're not emulating any firmware at all, we're running it
>>      in the guest, and that guest firmware is providing PSCI
>>
>> In case (1) we provide a PSCI ABI on the end of the insn.
>> In case (2) we provide the architectural behaviour for the insn
>> so that the guest firmware can use it.
>>
>> We don't currently have
>>  (3) we're providing an emulation of firmware that does something
>>      other than providing PSCI services on this instruction
>>
>> which is what I think you're asking for. (Alternatively, you might
>> be after "provide PSCI via SMC, not HVC", ie use a different conduit.
>> If hvf documents that SMC is guaranteed to trap that would be
>> possible, I guess.)
>>
>>> Also, why does KVM behave differently?
>> Looks like Marc made KVM set x0 to -1 for SMC calls in kernel commit
>> c0938c72f8070aa; conveniently he's on the cc list here so we can
>> ask him :-)
> If we get an SMC trap into KVM, that's because the HW knows about it,
> so injecting an UNDEF is rather counterproductive (we don't hide the
> fact that EL3 actually exists).


This is the part where you and Peter disagree :). What would you suggest
we do to create consistency between KVM- and TCG-based EL0/1-only VMs?


> However, we don't implement anything on the back of this instruction,
> so we just return NOT_IMPLEMENTED (-1). With NV, we actually use it,
> as a guest hypervisor can use it for PSCI and SMC is guaranteed to
> trap even if EL3 doesn't exist in the HW.
>
> For the brain-damaged case where there is no EL3, SMC traps and the
> hypervisor doesn't actually advertise EL3, that's likely a guest
> bug. Tough luck.
>
> Side note: Not sure what HVF does, but on the M1 running Linux, SMC
> appears to trap to EL2 with EC=0x3f, which is a reserved exception
> class. This of course results in an UNDEF being injected because as
> far as KVM is concerned, this should never happen.


Could that be yet another magical implementation-specific MSR bit that
needs to be set? Hvf returns 0x17 (EC_AA64_SMC) for SMC calls.
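
For context, this is the decode I mean -- a sketch against the
syndrome hvf hands to userspace on exit; handle_smc_exit() is a
made-up name:

    uint64_t syndrome = hvf_exit->exception.syndrome;
    uint32_t ec = (syndrome >> 26) & 0x3f;  /* ESR_ELx.EC, bits [31:26] */

    if (ec == 0x17) {
        /* EC_AA64_SMC: what hvf reports for a guest SMC */
        handle_smc_exit(cpu);
    } else if (ec == 0x3f) {
        /* reserved EC: what Marc sees at EL2 on the M1 under Linux */
    }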


Alex




* Re: [PATCH v9 07/11] hvf: arm: Implement PSCI handling
  2021-09-15 10:58               ` Alexander Graf
@ 2021-09-15 15:07                 ` Marc Zyngier
  0 siblings, 0 replies; 25+ messages in thread
From: Marc Zyngier @ 2021-09-15 15:07 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Peter Maydell, Eduardo Habkost, Sergio Lopez,
	Peter Collingbourne, Richard Henderson, QEMU Developers,
	Cameron Esfahani, Roman Bolshakov, qemu-arm, Frank Yang,
	Paolo Bonzini, Philippe Mathieu-Daudé

On Wed, 15 Sep 2021 11:58:29 +0100,
Alexander Graf <agraf@csgraf.de> wrote:
> 
> 
> On 15.09.21 11:46, Marc Zyngier wrote:
> > On Mon, 13 Sep 2021 13:30:57 +0100,
> > Peter Maydell <peter.maydell@linaro.org> wrote:
> >> On Mon, 13 Sept 2021 at 13:02, Alexander Graf <agraf@csgraf.de> wrote:
> >>>
> >>> On 13.09.21 13:44, Peter Maydell wrote:
> >>>> On Mon, 13 Sept 2021 at 12:07, Alexander Graf <agraf@csgraf.de> wrote:
> >>>>> To keep your train of thought though, what would you do if we encounter
> >>>>> a conduit that is different from the chosen one? Today, I am aware of 2
> >>>>> different implementations: TCG injects #UD [1] while KVM sets x0 to -1 [2].
> >>>> If the SMC or HVC insn isn't being used for PSCI then it should
> >>>> have its standard architectural behaviour.
> >>> Why?
> >> QEMU's assumption here is that there are basically two scenarios
> >> for these instructions:
> >>  (1) we're providing an emulation of firmware that uses this
> >>      instruction (and only this insn, not the other one) to
> >>      provide PSCI services
> >>  (2) we're not emulating any firmware at all, we're running it
> >>      in the guest, and that guest firmware is providing PSCI
> >>
> >> In case (1) we provide a PSCI ABI on the end of the insn.
> >> In case (2) we provide the architectural behaviour for the insn
> >> so that the guest firmware can use it.
> >>
> >> We don't currently have
> >>  (3) we're providing an emulation of firmware that does something
> >>      other than providing PSCI services on this instruction
> >>
> >> which is what I think you're asking for. (Alternatively, you might
> >> be after "provide PSCI via SMC, not HVC", ie use a different conduit.
> >> If hvf documents that SMC is guaranteed to trap that would be
> >> possible, I guess.)
> >>
> >>> Also, why does KVM behave differently?
> >> Looks like Marc made KVM set x0 to -1 for SMC calls in kernel commit
> >> c0938c72f8070aa; conveniently he's on the cc list here so we can
> >> ask him :-)
> > If we get an SMC trap into KVM, that's because the HW knows about it,
> > so injecting an UNDEF is rather counterproductive (we don't hide the
> > fact that EL3 actually exists).
> 
> 
> This is the part where you and Peter disagree :). What would you suggest
> we do to create consistency between KVM- and TCG-based EL0/1-only VMs?

I don't think we disagree. We simply have different implementation
choices. The KVM "firmware" can only be used with HVC, and not
SMC. SMC is reserved for cases where the guest talks to the actual
EL3, or an emulation of it in the case of NV.

As for consistency between TCG and KVM, I have no plan for that
whatsoever. Both implementations are valid, and they don't have to be
identical. Even more, diversity is important, as it weeds out silly
assumptions that are baked into non-portable SW.

Windows doesn't boot? I won't lose any sleep over it.

> 
> > However, we don't implement anything on the back of this instruction,
> > so we just return NOT_IMPLEMENTED (-1). With NV, we actually use it,
> > as a guest hypervisor can use it for PSCI and SMC is guaranteed to
> > trap even if EL3 doesn't exist in the HW.
> >
> > For the brain-damaged case where there is no EL3, SMC traps and the
> > hypervisor doesn't actually advertise EL3, that's likely a guest
> > bug. Tough luck.
> >
> > Side note: Not sure what HVF does, but on the M1 running Linux, SMC
> > appears to trap to EL2 with EC=0x3f, which is a reserved exception
> > class. This of course results in an UNDEF being injected because as
> > far as KVM is concerned, this should never happen.
>
> Could that be yet another magical implementation specific MSR bit that
> needs to be set? Hvf returns 0x17 (EC_AA64_SMC) for SMC calls.

That's possible, but that's not something KVM will do. Also, from what
I understand of HVF, this value is what you get in userspace, and it
says nothing about what the kernel side does. It could well be
translating the invalid EC into something else, after having read the
instruction from the guest, for all I know.

It is pretty obvious that this HW is not a valid implementation of the
architecture, and if it decides to screw itself up, I'm happy to
oblige.

	M.

-- 
Without deviation from the norm, progress is not possible.



Thread overview: 25+ messages
2021-09-12 23:07 [PATCH v9 00/11] hvf: Implement Apple Silicon Support Alexander Graf
2021-09-12 23:07 ` [PATCH v9 01/11] arm: Move PMC register definitions to cpu.h Alexander Graf
2021-09-13  8:49   ` Peter Maydell
2021-09-12 23:07 ` [PATCH v9 02/11] hvf: Add execute to dirty log permission bitmap Alexander Graf
2021-09-12 23:07 ` [PATCH v9 03/11] hvf: Introduce hvf_arch_init() callback Alexander Graf
2021-09-12 23:07 ` [PATCH v9 04/11] hvf: Add Apple Silicon support Alexander Graf
2021-09-12 23:07 ` [PATCH v9 05/11] arm/hvf: Add a WFI handler Alexander Graf
2021-09-12 23:07 ` [PATCH v9 06/11] hvf: arm: Implement -cpu host Alexander Graf
2021-09-13  8:54   ` Philippe Mathieu-Daudé
2021-09-12 23:07 ` [PATCH v9 07/11] hvf: arm: Implement PSCI handling Alexander Graf
2021-09-13  8:54   ` Peter Maydell
2021-09-13 11:07     ` Alexander Graf
2021-09-13 11:44       ` Peter Maydell
2021-09-13 12:02         ` Alexander Graf
2021-09-13 12:30           ` Peter Maydell
2021-09-13 21:29             ` Alexander Graf
2021-09-15  9:46             ` Marc Zyngier
2021-09-15 10:58               ` Alexander Graf
2021-09-15 15:07                 ` Marc Zyngier
2021-09-12 23:07 ` [PATCH v9 08/11] arm: Add Hypervisor.framework build target Alexander Graf
2021-09-12 23:07 ` [PATCH v9 09/11] hvf: arm: Add rudimentary PMC support Alexander Graf
2021-09-12 23:07 ` [PATCH v9 10/11] arm: tcg: Adhere to SMCCC 1.3 section 5.2 Alexander Graf
2021-09-13  8:46   ` Peter Maydell
2021-09-12 23:07 ` [PATCH v9 11/11] hvf: arm: " Alexander Graf
2021-09-13  8:52   ` Peter Maydell
