* [PATCH v8 00/19] hvf: Implement Apple Silicon Support
@ 2021-05-19 20:22 Alexander Graf
  2021-05-19 20:22 ` [PATCH v8 01/19] hvf: Move assert_hvf_ok() into common directory Alexander Graf
                   ` (20 more replies)
  0 siblings, 21 replies; 66+ messages in thread
From: Alexander Graf @ 2021-05-19 20:22 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Peter Collingbourne

Now that Apple Silicon machines are widely available, people are
understandably excited to run virtualized workloads on them, such as
Linux and Windows.

This patch set implements a fully functional version to get the ball
rolling. With it applied, I can successfully run both Linux and Windows
as guests (see the example invocation below). I am not aware of any
limitations specific to Hypervisor.framework apart from:

  - Live migration / savevm
  - gdbstub debugging (SP register)
  - missing GICv3 support
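
For reference, here is a minimal sketch of how a guest can be started
once this series is applied. The accelerator and CPU options come from
this series; the machine options, firmware file and disk image name are
illustrative assumptions and will differ per setup:

    qemu-system-aarch64 \
        -machine virt,highmem=off \
        -accel hvf \
        -cpu host \
        -m 4096 \
        -drive file=guest.img,if=virtio \
        -bios edk2-aarch64-code.fd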


Enjoy!

Alex

v1 -> v2:

  - New patch: hvf: Actually set SIG_IPI mask
  - New patch: hvf: Introduce hvf vcpu struct
  - New patch: hvf: arm: Mark CPU as dirty on reset
  - Removed patch: hw/arm/virt: Disable highmem when on hypervisor.framework
  - Removed patch: arm: Synchronize CPU on PSCI on
  - Fix build on 32bit arm
  - Merge vcpu kick function patch into ARM enablement
  - Implement WFI handling (allows vCPUs to sleep)
  - Synchronize system registers (fixes OVMF crashes and reboot)
  - Don't always call cpu_synchronize_state()
  - Use more fine grained iothread locking
  - Populate aa64mmfr0 from hardware
  - Make entitlement application safe against Ctrl-C

v2 -> v3:

  - Removed patch: hvf: Actually set SIG_IPI mask
  - New patch: hvf: arm: Add support for GICv3
  - New patch: hvf: arm: Implement -cpu host
  - Advance PC on SMC
  - Use cp list interface for sysreg syncs
  - Do not set current_cpu
  - Fix sysreg isread mask
  - Move sysreg handling to functions
  - Remove WFI logic again
  - Revert to global iothread locking

v3 -> v4:

  - Removed patch: hvf: arm: Mark CPU as dirty on reset
  - New patch: hvf: Simplify post reset/init/loadvm hooks
  - Remove i386-softmmu target (meson.build for hvf target)
  - Combine both if statements (PSCI)
  - Use hv.h instead of Hypervisor.h for 10.15 compat
  - Remove manual inclusion of Hypervisor.h in common .c files
  - No longer include Hypervisor.h in arm hvf .c files
  - Remove unused exe_full variable
  - Reuse exe_name variable

v4 -> v5:

  - Use g_free() on destroy

v5 -> v6:

  - Switch SYSREG() macro order to the same as asm intrinsics

v6 -> v7:

  - Already merged: hvf: Add hypervisor entitlement to output binaries
  - Already merged: hvf: x86: Remove unused definitions
  - Patch split: hvf: Move common code out
    -> hvf: Move assert_hvf_ok() into common directory
    -> hvf: Move vcpu thread functions into common directory
    -> hvf: Move cpu functions into common directory
    -> hvf: Move hvf internal definitions into common header
    -> hvf: Make hvf_set_phys_mem() static
    -> hvf: Remove use of hv_uvaddr_t and hv_gpaddr_t
    -> hvf: Split out common code on vcpu init and destroy
    -> hvf: Use cpu_synchronize_state()
    -> hvf: Make synchronize functions static
    -> hvf: Remove hvf-accel-ops.h
  - New patch: hvf: arm: Implement PSCI handling
  - New patch: arm: Enable Windows 10 trusted SMCCC boot call
  - New patch: hvf: arm: Handle Windows 10 SMC call
  - Removed patch: "arm: Set PSCI to 0.2 for HVF" (included above)
  - Removed patch: "hvf: arm: Add support for GICv3" (deferred to later)
  - Remove osdep.h include from hvf_int.h
  - Synchronize SIMD registers as well
  - Prepend 0x for hex values
  - Convert DPRINTF to trace points
  - Use main event loop (fixes gdbstub issues)
  - Remove PSCI support, inject UDEF on HVC/SMC
  - Change vtimer logic to look at ctl.istatus for vtimer mask sync
  - Add kick callback again (fixes remote CPU notification)
  - Move function define to own header
  - Do not propagate SVE features for HVF
  - Remove stray whitespace change
  - Verify that EL0 and EL1 do not allow AArch32 mode
  - Only probe host CPU features once
  - Move WFI into function
  - Improve comment wording
  - Simplify HVF matching logic in meson build file

v7 -> v8:

  - checkpatch fixes
  - Do not advance the PC for HVC; it is already updated by hvf
    (fixes Linux boot)

Alexander Graf (18):
  hvf: Move assert_hvf_ok() into common directory
  hvf: Move vcpu thread functions into common directory
  hvf: Move cpu functions into common directory
  hvf: Move hvf internal definitions into common header
  hvf: Make hvf_set_phys_mem() static
  hvf: Remove use of hv_uvaddr_t and hv_gpaddr_t
  hvf: Split out common code on vcpu init and destroy
  hvf: Use cpu_synchronize_state()
  hvf: Make synchronize functions static
  hvf: Remove hvf-accel-ops.h
  hvf: Introduce hvf vcpu struct
  hvf: Simplify post reset/init/loadvm hooks
  hvf: Add Apple Silicon support
  hvf: arm: Implement -cpu host
  hvf: arm: Implement PSCI handling
  arm: Add Hypervisor.framework build target
  arm: Enable Windows 10 trusted SMCCC boot call
  hvf: arm: Handle Windows 10 SMC call

Peter Collingbourne (1):
  arm/hvf: Add a WFI handler

 MAINTAINERS                     |  13 +
 accel/hvf/hvf-accel-ops.c       | 484 ++++++++++++++++
 accel/hvf/hvf-all.c             |  47 ++
 accel/hvf/meson.build           |   7 +
 accel/meson.build               |   1 +
 include/hw/core/cpu.h           |   3 +-
 include/sysemu/hvf_int.h        |  66 +++
 meson.build                     |   8 +
 target/arm/cpu.c                |  13 +-
 target/arm/cpu.h                |   2 +
 target/arm/hvf/hvf.c            | 962 ++++++++++++++++++++++++++++++++
 target/arm/hvf/meson.build      |   3 +
 target/arm/hvf/trace-events     |  11 +
 target/arm/hvf_arm.h            |  19 +
 target/arm/kvm-consts.h         |   2 +
 target/arm/kvm_arm.h            |   2 -
 target/arm/meson.build          |   2 +
 target/arm/psci.c               |   2 +
 target/i386/hvf/hvf-accel-ops.c | 146 -----
 target/i386/hvf/hvf-accel-ops.h |  23 -
 target/i386/hvf/hvf-i386.h      |  33 +-
 target/i386/hvf/hvf.c           | 464 ++-------------
 target/i386/hvf/meson.build     |   1 -
 target/i386/hvf/vmx.h           |  24 +-
 target/i386/hvf/x86.c           |  28 +-
 target/i386/hvf/x86_descr.c     |  26 +-
 target/i386/hvf/x86_emu.c       |  62 +-
 target/i386/hvf/x86_mmu.c       |   4 +-
 target/i386/hvf/x86_task.c      |  12 +-
 target/i386/hvf/x86hvf.c        | 222 ++++----
 target/i386/hvf/x86hvf.h        |   2 -
 31 files changed, 1887 insertions(+), 807 deletions(-)
 create mode 100644 accel/hvf/hvf-accel-ops.c
 create mode 100644 accel/hvf/hvf-all.c
 create mode 100644 accel/hvf/meson.build
 create mode 100644 include/sysemu/hvf_int.h
 create mode 100644 target/arm/hvf/hvf.c
 create mode 100644 target/arm/hvf/meson.build
 create mode 100644 target/arm/hvf/trace-events
 create mode 100644 target/arm/hvf_arm.h
 delete mode 100644 target/i386/hvf/hvf-accel-ops.c
 delete mode 100644 target/i386/hvf/hvf-accel-ops.h

-- 
2.30.1 (Apple Git-130)




* [PATCH v8 01/19] hvf: Move assert_hvf_ok() into common directory
  2021-05-19 20:22 [PATCH v8 00/19] hvf: Implement Apple Silicon Support Alexander Graf
@ 2021-05-19 20:22 ` Alexander Graf
  2021-05-27 10:00   ` Sergio Lopez
  2021-05-19 20:22 ` [PATCH v8 02/19] hvf: Move vcpu thread functions " Alexander Graf
                   ` (19 subsequent siblings)
  20 siblings, 1 reply; 66+ messages in thread
From: Alexander Graf @ 2021-05-19 20:22 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Peter Collingbourne

Until now, Hypervisor.framework has only been available on x86_64 systems.
With Apple Silicon shipping now, it extends its reach to aarch64. To
prepare for support for multiple architectures, let's start moving common
code out into its own accel directory.

This patch moves assert_hvf_ok() and introduces generic build infrastructure.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
---
 MAINTAINERS              |  8 +++++++
 accel/hvf/hvf-all.c      | 47 ++++++++++++++++++++++++++++++++++++++++
 accel/hvf/meson.build    |  6 +++++
 accel/meson.build        |  1 +
 include/sysemu/hvf_int.h | 18 +++++++++++++++
 target/i386/hvf/hvf.c    | 33 +---------------------------
 6 files changed, 81 insertions(+), 32 deletions(-)
 create mode 100644 accel/hvf/hvf-all.c
 create mode 100644 accel/hvf/meson.build
 create mode 100644 include/sysemu/hvf_int.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 89741cfc19..262e96714b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -434,7 +434,15 @@ M: Roman Bolshakov <r.bolshakov@yadro.com>
 W: https://wiki.qemu.org/Features/HVF
 S: Maintained
 F: target/i386/hvf/
+
+HVF
+M: Cameron Esfahani <dirty@apple.com>
+M: Roman Bolshakov <r.bolshakov@yadro.com>
+W: https://wiki.qemu.org/Features/HVF
+S: Maintained
+F: accel/hvf/
 F: include/sysemu/hvf.h
+F: include/sysemu/hvf_int.h
 
 WHPX CPUs
 M: Sunil Muthuswamy <sunilmut@microsoft.com>
diff --git a/accel/hvf/hvf-all.c b/accel/hvf/hvf-all.c
new file mode 100644
index 0000000000..f185b0830a
--- /dev/null
+++ b/accel/hvf/hvf-all.c
@@ -0,0 +1,47 @@
+/*
+ * QEMU Hypervisor.framework support
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.  See
+ * the COPYING file in the top-level directory.
+ *
+ * Contributions after 2012-01-13 are licensed under the terms of the
+ * GNU GPL, version 2 or (at your option) any later version.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu-common.h"
+#include "qemu/error-report.h"
+#include "sysemu/hvf.h"
+#include "sysemu/hvf_int.h"
+
+void assert_hvf_ok(hv_return_t ret)
+{
+    if (ret == HV_SUCCESS) {
+        return;
+    }
+
+    switch (ret) {
+    case HV_ERROR:
+        error_report("Error: HV_ERROR");
+        break;
+    case HV_BUSY:
+        error_report("Error: HV_BUSY");
+        break;
+    case HV_BAD_ARGUMENT:
+        error_report("Error: HV_BAD_ARGUMENT");
+        break;
+    case HV_NO_RESOURCES:
+        error_report("Error: HV_NO_RESOURCES");
+        break;
+    case HV_NO_DEVICE:
+        error_report("Error: HV_NO_DEVICE");
+        break;
+    case HV_UNSUPPORTED:
+        error_report("Error: HV_UNSUPPORTED");
+        break;
+    default:
+        error_report("Unknown Error");
+    }
+
+    abort();
+}
diff --git a/accel/hvf/meson.build b/accel/hvf/meson.build
new file mode 100644
index 0000000000..227b11cd71
--- /dev/null
+++ b/accel/hvf/meson.build
@@ -0,0 +1,6 @@
+hvf_ss = ss.source_set()
+hvf_ss.add(files(
+  'hvf-all.c',
+))
+
+specific_ss.add_all(when: 'CONFIG_HVF', if_true: hvf_ss)
diff --git a/accel/meson.build b/accel/meson.build
index b44ba30c86..dfd808d2c8 100644
--- a/accel/meson.build
+++ b/accel/meson.build
@@ -2,6 +2,7 @@ specific_ss.add(files('accel-common.c'))
 softmmu_ss.add(files('accel-softmmu.c'))
 user_ss.add(files('accel-user.c'))
 
+subdir('hvf')
 subdir('qtest')
 subdir('kvm')
 subdir('tcg')
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
new file mode 100644
index 0000000000..3deb4cfacc
--- /dev/null
+++ b/include/sysemu/hvf_int.h
@@ -0,0 +1,18 @@
+/*
+ * QEMU Hypervisor.framework (HVF) support
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+/* header to be included in HVF-specific code */
+
+#ifndef HVF_INT_H
+#define HVF_INT_H
+
+#include <Hypervisor/hv.h>
+
+void assert_hvf_ok(hv_return_t ret);
+
+#endif
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index f044181d06..32f42f1592 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -51,6 +51,7 @@
 #include "qemu/error-report.h"
 
 #include "sysemu/hvf.h"
+#include "sysemu/hvf_int.h"
 #include "sysemu/runstate.h"
 #include "hvf-i386.h"
 #include "vmcs.h"
@@ -76,38 +77,6 @@
 
 HVFState *hvf_state;
 
-static void assert_hvf_ok(hv_return_t ret)
-{
-    if (ret == HV_SUCCESS) {
-        return;
-    }
-
-    switch (ret) {
-    case HV_ERROR:
-        error_report("Error: HV_ERROR");
-        break;
-    case HV_BUSY:
-        error_report("Error: HV_BUSY");
-        break;
-    case HV_BAD_ARGUMENT:
-        error_report("Error: HV_BAD_ARGUMENT");
-        break;
-    case HV_NO_RESOURCES:
-        error_report("Error: HV_NO_RESOURCES");
-        break;
-    case HV_NO_DEVICE:
-        error_report("Error: HV_NO_DEVICE");
-        break;
-    case HV_UNSUPPORTED:
-        error_report("Error: HV_UNSUPPORTED");
-        break;
-    default:
-        error_report("Unknown Error");
-    }
-
-    abort();
-}
-
 /* Memory slots */
 hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t size)
 {
-- 
2.30.1 (Apple Git-130)
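
As a usage note: assert_hvf_ok() is meant to wrap any
Hypervisor.framework call whose failure is fatal. A minimal sketch,
assuming only the header introduced above (the function name
example_create_vm is hypothetical; hv_vm_create() is the real HVF call
used later in this series):

    #include "qemu/osdep.h"
    #include "sysemu/hvf_int.h"

    /* Hypothetical caller: abort with a readable error report if the
     * Hypervisor.framework call fails. */
    static void example_create_vm(void)
    {
        hv_return_t ret = hv_vm_create(HV_VM_DEFAULT);
        assert_hvf_ok(ret);
    }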




* [PATCH v8 02/19] hvf: Move vcpu thread functions into common directory
  2021-05-19 20:22 [PATCH v8 00/19] hvf: Implement Apple Silicon Support Alexander Graf
  2021-05-19 20:22 ` [PATCH v8 01/19] hvf: Move assert_hvf_ok() into common directory Alexander Graf
@ 2021-05-19 20:22 ` Alexander Graf
  2021-05-27 10:01   ` Sergio Lopez
  2021-05-19 20:22 ` [PATCH v8 03/19] hvf: Move cpu " Alexander Graf
                   ` (18 subsequent siblings)
  20 siblings, 1 reply; 66+ messages in thread
From: Alexander Graf @ 2021-05-19 20:22 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Peter Collingbourne

Until now, Hypervisor.framework has only been available on x86_64 systems.
With Apple Silicon shipping now, it extends its reach to aarch64. To
prepare for support for multiple architectures, let's start moving common
code out into its own accel directory.

This patch moves the vCPU thread loop over.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
---
 {target/i386 => accel}/hvf/hvf-accel-ops.c | 0
 {target/i386 => accel}/hvf/hvf-accel-ops.h | 0
 accel/hvf/meson.build                      | 1 +
 target/i386/hvf/meson.build                | 1 -
 target/i386/hvf/x86hvf.c                   | 2 +-
 5 files changed, 2 insertions(+), 2 deletions(-)
 rename {target/i386 => accel}/hvf/hvf-accel-ops.c (100%)
 rename {target/i386 => accel}/hvf/hvf-accel-ops.h (100%)

diff --git a/target/i386/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
similarity index 100%
rename from target/i386/hvf/hvf-accel-ops.c
rename to accel/hvf/hvf-accel-ops.c
diff --git a/target/i386/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h
similarity index 100%
rename from target/i386/hvf/hvf-accel-ops.h
rename to accel/hvf/hvf-accel-ops.h
diff --git a/accel/hvf/meson.build b/accel/hvf/meson.build
index 227b11cd71..fc52cb7843 100644
--- a/accel/hvf/meson.build
+++ b/accel/hvf/meson.build
@@ -1,6 +1,7 @@
 hvf_ss = ss.source_set()
 hvf_ss.add(files(
   'hvf-all.c',
+  'hvf-accel-ops.c',
 ))
 
 specific_ss.add_all(when: 'CONFIG_HVF', if_true: hvf_ss)
diff --git a/target/i386/hvf/meson.build b/target/i386/hvf/meson.build
index d253d5fd10..f6d4c394d3 100644
--- a/target/i386/hvf/meson.build
+++ b/target/i386/hvf/meson.build
@@ -1,6 +1,5 @@
 i386_softmmu_ss.add(when: [hvf, 'CONFIG_HVF'], if_true: files(
   'hvf.c',
-  'hvf-accel-ops.c',
   'x86.c',
   'x86_cpuid.c',
   'x86_decode.c',
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index 0d7533742e..2b99f3eaa2 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -32,7 +32,7 @@
 #include <Hypervisor/hv.h>
 #include <Hypervisor/hv_vmx.h>
 
-#include "hvf-accel-ops.h"
+#include "accel/hvf/hvf-accel-ops.h"
 
 void hvf_set_segment(struct CPUState *cpu, struct vmx_segment *vmx_seg,
                      SegmentCache *qseg, bool is_tr)
-- 
2.30.1 (Apple Git-130)




* [PATCH v8 03/19] hvf: Move cpu functions into common directory
  2021-05-19 20:22 [PATCH v8 00/19] hvf: Implement Apple Silicon Support Alexander Graf
  2021-05-19 20:22 ` [PATCH v8 01/19] hvf: Move assert_hvf_ok() into common directory Alexander Graf
  2021-05-19 20:22 ` [PATCH v8 02/19] hvf: Move vcpu thread functions " Alexander Graf
@ 2021-05-19 20:22 ` Alexander Graf
  2021-05-27 10:02   ` Sergio Lopez
  2021-05-19 20:22 ` [PATCH v8 04/19] hvf: Move hvf internal definitions into common header Alexander Graf
                   ` (17 subsequent siblings)
  20 siblings, 1 reply; 66+ messages in thread
From: Alexander Graf @ 2021-05-19 20:22 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Peter Collingbourne

Until now, Hypervisor.framework has only been available on x86_64 systems.
With Apple Silicon shipping now, it extends its reach to aarch64. To
prepare for support for multiple architectures, let's start moving common
code out into its own accel directory.

This patch moves CPU and memory operations over. While at it, make sure
the code is consumable on non-i386 systems.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
---
 accel/hvf/hvf-accel-ops.c  | 308 ++++++++++++++++++++++++++++++++++++-
 include/sysemu/hvf_int.h   |   4 +
 target/i386/hvf/hvf-i386.h |   2 -
 target/i386/hvf/hvf.c      | 302 ------------------------------------
 target/i386/hvf/x86hvf.h   |   2 -
 5 files changed, 311 insertions(+), 307 deletions(-)

diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index cbaad238e0..c2136dfbb8 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -50,13 +50,319 @@
 #include "qemu/osdep.h"
 #include "qemu/error-report.h"
 #include "qemu/main-loop.h"
+#include "exec/address-spaces.h"
+#include "exec/exec-all.h"
+#include "sysemu/cpus.h"
 #include "sysemu/hvf.h"
+#include "sysemu/hvf_int.h"
 #include "sysemu/runstate.h"
-#include "target/i386/cpu.h"
 #include "qemu/guest-random.h"
 
 #include "hvf-accel-ops.h"
 
+HVFState *hvf_state;
+
+/* Memory slots */
+
+hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t size)
+{
+    hvf_slot *slot;
+    int x;
+    for (x = 0; x < hvf_state->num_slots; ++x) {
+        slot = &hvf_state->slots[x];
+        if (slot->size && start < (slot->start + slot->size) &&
+            (start + size) > slot->start) {
+            return slot;
+        }
+    }
+    return NULL;
+}
+
+struct mac_slot {
+    int present;
+    uint64_t size;
+    uint64_t gpa_start;
+    uint64_t gva;
+};
+
+struct mac_slot mac_slots[32];
+
+static int do_hvf_set_memory(hvf_slot *slot, hv_memory_flags_t flags)
+{
+    struct mac_slot *macslot;
+    hv_return_t ret;
+
+    macslot = &mac_slots[slot->slot_id];
+
+    if (macslot->present) {
+        if (macslot->size != slot->size) {
+            macslot->present = 0;
+            ret = hv_vm_unmap(macslot->gpa_start, macslot->size);
+            assert_hvf_ok(ret);
+        }
+    }
+
+    if (!slot->size) {
+        return 0;
+    }
+
+    macslot->present = 1;
+    macslot->gpa_start = slot->start;
+    macslot->size = slot->size;
+    ret = hv_vm_map((hv_uvaddr_t)slot->mem, slot->start, slot->size, flags);
+    assert_hvf_ok(ret);
+    return 0;
+}
+
+void hvf_set_phys_mem(MemoryRegionSection *section, bool add)
+{
+    hvf_slot *mem;
+    MemoryRegion *area = section->mr;
+    bool writeable = !area->readonly && !area->rom_device;
+    hv_memory_flags_t flags;
+
+    if (!memory_region_is_ram(area)) {
+        if (writeable) {
+            return;
+        } else if (!memory_region_is_romd(area)) {
+            /*
+             * If the memory device is not in romd_mode, then we actually want
+             * to remove the hvf memory slot so all accesses will trap.
+             */
+             add = false;
+        }
+    }
+
+    mem = hvf_find_overlap_slot(
+            section->offset_within_address_space,
+            int128_get64(section->size));
+
+    if (mem && add) {
+        if (mem->size == int128_get64(section->size) &&
+            mem->start == section->offset_within_address_space &&
+            mem->mem == (memory_region_get_ram_ptr(area) +
+            section->offset_within_region)) {
+            return; /* Same region was attempted to register, go away. */
+        }
+    }
+
+    /* Region needs to be reset. set the size to 0 and remap it. */
+    if (mem) {
+        mem->size = 0;
+        if (do_hvf_set_memory(mem, 0)) {
+            error_report("Failed to reset overlapping slot");
+            abort();
+        }
+    }
+
+    if (!add) {
+        return;
+    }
+
+    if (area->readonly ||
+        (!memory_region_is_ram(area) && memory_region_is_romd(area))) {
+        flags = HV_MEMORY_READ | HV_MEMORY_EXEC;
+    } else {
+        flags = HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC;
+    }
+
+    /* Now make a new slot. */
+    int x;
+
+    for (x = 0; x < hvf_state->num_slots; ++x) {
+        mem = &hvf_state->slots[x];
+        if (!mem->size) {
+            break;
+        }
+    }
+
+    if (x == hvf_state->num_slots) {
+        error_report("No free slots");
+        abort();
+    }
+
+    mem->size = int128_get64(section->size);
+    mem->mem = memory_region_get_ram_ptr(area) + section->offset_within_region;
+    mem->start = section->offset_within_address_space;
+    mem->region = area;
+
+    if (do_hvf_set_memory(mem, flags)) {
+        error_report("Error registering new memory slot");
+        abort();
+    }
+}
+
+static void do_hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
+{
+    if (!cpu->vcpu_dirty) {
+        hvf_get_registers(cpu);
+        cpu->vcpu_dirty = true;
+    }
+}
+
+void hvf_cpu_synchronize_state(CPUState *cpu)
+{
+    if (!cpu->vcpu_dirty) {
+        run_on_cpu(cpu, do_hvf_cpu_synchronize_state, RUN_ON_CPU_NULL);
+    }
+}
+
+static void do_hvf_cpu_synchronize_post_reset(CPUState *cpu,
+                                              run_on_cpu_data arg)
+{
+    hvf_put_registers(cpu);
+    cpu->vcpu_dirty = false;
+}
+
+void hvf_cpu_synchronize_post_reset(CPUState *cpu)
+{
+    run_on_cpu(cpu, do_hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NULL);
+}
+
+static void do_hvf_cpu_synchronize_post_init(CPUState *cpu,
+                                             run_on_cpu_data arg)
+{
+    hvf_put_registers(cpu);
+    cpu->vcpu_dirty = false;
+}
+
+void hvf_cpu_synchronize_post_init(CPUState *cpu)
+{
+    run_on_cpu(cpu, do_hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
+}
+
+static void do_hvf_cpu_synchronize_pre_loadvm(CPUState *cpu,
+                                              run_on_cpu_data arg)
+{
+    cpu->vcpu_dirty = true;
+}
+
+void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu)
+{
+    run_on_cpu(cpu, do_hvf_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL);
+}
+
+static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on)
+{
+    hvf_slot *slot;
+
+    slot = hvf_find_overlap_slot(
+            section->offset_within_address_space,
+            int128_get64(section->size));
+
+    /* protect region against writes; begin tracking it */
+    if (on) {
+        slot->flags |= HVF_SLOT_LOG;
+        hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
+                      HV_MEMORY_READ);
+    /* stop tracking region*/
+    } else {
+        slot->flags &= ~HVF_SLOT_LOG;
+        hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
+                      HV_MEMORY_READ | HV_MEMORY_WRITE);
+    }
+}
+
+static void hvf_log_start(MemoryListener *listener,
+                          MemoryRegionSection *section, int old, int new)
+{
+    if (old != 0) {
+        return;
+    }
+
+    hvf_set_dirty_tracking(section, 1);
+}
+
+static void hvf_log_stop(MemoryListener *listener,
+                         MemoryRegionSection *section, int old, int new)
+{
+    if (new != 0) {
+        return;
+    }
+
+    hvf_set_dirty_tracking(section, 0);
+}
+
+static void hvf_log_sync(MemoryListener *listener,
+                         MemoryRegionSection *section)
+{
+    /*
+     * sync of dirty pages is handled elsewhere; just make sure we keep
+     * tracking the region.
+     */
+    hvf_set_dirty_tracking(section, 1);
+}
+
+static void hvf_region_add(MemoryListener *listener,
+                           MemoryRegionSection *section)
+{
+    hvf_set_phys_mem(section, true);
+}
+
+static void hvf_region_del(MemoryListener *listener,
+                           MemoryRegionSection *section)
+{
+    hvf_set_phys_mem(section, false);
+}
+
+static MemoryListener hvf_memory_listener = {
+    .priority = 10,
+    .region_add = hvf_region_add,
+    .region_del = hvf_region_del,
+    .log_start = hvf_log_start,
+    .log_stop = hvf_log_stop,
+    .log_sync = hvf_log_sync,
+};
+
+static void dummy_signal(int sig)
+{
+}
+
+bool hvf_allowed;
+
+static int hvf_accel_init(MachineState *ms)
+{
+    int x;
+    hv_return_t ret;
+    HVFState *s;
+
+    ret = hv_vm_create(HV_VM_DEFAULT);
+    assert_hvf_ok(ret);
+
+    s = g_new0(HVFState, 1);
+
+    s->num_slots = 32;
+    for (x = 0; x < s->num_slots; ++x) {
+        s->slots[x].size = 0;
+        s->slots[x].slot_id = x;
+    }
+
+    hvf_state = s;
+    memory_listener_register(&hvf_memory_listener, &address_space_memory);
+    return 0;
+}
+
+static void hvf_accel_class_init(ObjectClass *oc, void *data)
+{
+    AccelClass *ac = ACCEL_CLASS(oc);
+    ac->name = "HVF";
+    ac->init_machine = hvf_accel_init;
+    ac->allowed = &hvf_allowed;
+}
+
+static const TypeInfo hvf_accel_type = {
+    .name = TYPE_HVF_ACCEL,
+    .parent = TYPE_ACCEL,
+    .class_init = hvf_accel_class_init,
+};
+
+static void hvf_type_init(void)
+{
+    type_register_static(&hvf_accel_type);
+}
+
+type_init(hvf_type_init);
+
 /*
  * The HVF-specific vCPU thread function. This one should only run when the host
  * CPU supports the VMX "unrestricted guest" feature.
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
index 3deb4cfacc..4c657b054c 100644
--- a/include/sysemu/hvf_int.h
+++ b/include/sysemu/hvf_int.h
@@ -13,6 +13,10 @@
 
 #include <Hypervisor/hv.h>
 
+void hvf_set_phys_mem(MemoryRegionSection *, bool);
 void assert_hvf_ok(hv_return_t ret);
+hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
+int hvf_put_registers(CPUState *);
+int hvf_get_registers(CPUState *);
 
 #endif
diff --git a/target/i386/hvf/hvf-i386.h b/target/i386/hvf/hvf-i386.h
index 59cfca8875..94e5c788c4 100644
--- a/target/i386/hvf/hvf-i386.h
+++ b/target/i386/hvf/hvf-i386.h
@@ -51,9 +51,7 @@ struct HVFState {
 };
 extern HVFState *hvf_state;
 
-void hvf_set_phys_mem(MemoryRegionSection *, bool);
 void hvf_handle_io(CPUArchState *, uint16_t, void *, int, int, int);
-hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
 
 #ifdef NEED_CPU_H
 /* Functions exported to host specific mode */
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index 32f42f1592..100ede2a4d 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -75,137 +75,6 @@
 
 #include "hvf-accel-ops.h"
 
-HVFState *hvf_state;
-
-/* Memory slots */
-hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t size)
-{
-    hvf_slot *slot;
-    int x;
-    for (x = 0; x < hvf_state->num_slots; ++x) {
-        slot = &hvf_state->slots[x];
-        if (slot->size && start < (slot->start + slot->size) &&
-            (start + size) > slot->start) {
-            return slot;
-        }
-    }
-    return NULL;
-}
-
-struct mac_slot {
-    int present;
-    uint64_t size;
-    uint64_t gpa_start;
-    uint64_t gva;
-};
-
-struct mac_slot mac_slots[32];
-
-static int do_hvf_set_memory(hvf_slot *slot, hv_memory_flags_t flags)
-{
-    struct mac_slot *macslot;
-    hv_return_t ret;
-
-    macslot = &mac_slots[slot->slot_id];
-
-    if (macslot->present) {
-        if (macslot->size != slot->size) {
-            macslot->present = 0;
-            ret = hv_vm_unmap(macslot->gpa_start, macslot->size);
-            assert_hvf_ok(ret);
-        }
-    }
-
-    if (!slot->size) {
-        return 0;
-    }
-
-    macslot->present = 1;
-    macslot->gpa_start = slot->start;
-    macslot->size = slot->size;
-    ret = hv_vm_map((hv_uvaddr_t)slot->mem, slot->start, slot->size, flags);
-    assert_hvf_ok(ret);
-    return 0;
-}
-
-void hvf_set_phys_mem(MemoryRegionSection *section, bool add)
-{
-    hvf_slot *mem;
-    MemoryRegion *area = section->mr;
-    bool writeable = !area->readonly && !area->rom_device;
-    hv_memory_flags_t flags;
-
-    if (!memory_region_is_ram(area)) {
-        if (writeable) {
-            return;
-        } else if (!memory_region_is_romd(area)) {
-            /*
-             * If the memory device is not in romd_mode, then we actually want
-             * to remove the hvf memory slot so all accesses will trap.
-             */
-             add = false;
-        }
-    }
-
-    mem = hvf_find_overlap_slot(
-            section->offset_within_address_space,
-            int128_get64(section->size));
-
-    if (mem && add) {
-        if (mem->size == int128_get64(section->size) &&
-            mem->start == section->offset_within_address_space &&
-            mem->mem == (memory_region_get_ram_ptr(area) +
-            section->offset_within_region)) {
-            return; /* Same region was attempted to register, go away. */
-        }
-    }
-
-    /* Region needs to be reset. set the size to 0 and remap it. */
-    if (mem) {
-        mem->size = 0;
-        if (do_hvf_set_memory(mem, 0)) {
-            error_report("Failed to reset overlapping slot");
-            abort();
-        }
-    }
-
-    if (!add) {
-        return;
-    }
-
-    if (area->readonly ||
-        (!memory_region_is_ram(area) && memory_region_is_romd(area))) {
-        flags = HV_MEMORY_READ | HV_MEMORY_EXEC;
-    } else {
-        flags = HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC;
-    }
-
-    /* Now make a new slot. */
-    int x;
-
-    for (x = 0; x < hvf_state->num_slots; ++x) {
-        mem = &hvf_state->slots[x];
-        if (!mem->size) {
-            break;
-        }
-    }
-
-    if (x == hvf_state->num_slots) {
-        error_report("No free slots");
-        abort();
-    }
-
-    mem->size = int128_get64(section->size);
-    mem->mem = memory_region_get_ram_ptr(area) + section->offset_within_region;
-    mem->start = section->offset_within_address_space;
-    mem->region = area;
-
-    if (do_hvf_set_memory(mem, flags)) {
-        error_report("Error registering new memory slot");
-        abort();
-    }
-}
-
 void vmx_update_tpr(CPUState *cpu)
 {
     /* TODO: need integrate APIC handling */
@@ -245,56 +114,6 @@ void hvf_handle_io(CPUArchState *env, uint16_t port, void *buffer,
     }
 }
 
-static void do_hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
-{
-    if (!cpu->vcpu_dirty) {
-        hvf_get_registers(cpu);
-        cpu->vcpu_dirty = true;
-    }
-}
-
-void hvf_cpu_synchronize_state(CPUState *cpu)
-{
-    if (!cpu->vcpu_dirty) {
-        run_on_cpu(cpu, do_hvf_cpu_synchronize_state, RUN_ON_CPU_NULL);
-    }
-}
-
-static void do_hvf_cpu_synchronize_post_reset(CPUState *cpu,
-                                              run_on_cpu_data arg)
-{
-    hvf_put_registers(cpu);
-    cpu->vcpu_dirty = false;
-}
-
-void hvf_cpu_synchronize_post_reset(CPUState *cpu)
-{
-    run_on_cpu(cpu, do_hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NULL);
-}
-
-static void do_hvf_cpu_synchronize_post_init(CPUState *cpu,
-                                             run_on_cpu_data arg)
-{
-    hvf_put_registers(cpu);
-    cpu->vcpu_dirty = false;
-}
-
-void hvf_cpu_synchronize_post_init(CPUState *cpu)
-{
-    run_on_cpu(cpu, do_hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
-}
-
-static void do_hvf_cpu_synchronize_pre_loadvm(CPUState *cpu,
-                                              run_on_cpu_data arg)
-{
-    cpu->vcpu_dirty = true;
-}
-
-void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu)
-{
-    run_on_cpu(cpu, do_hvf_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL);
-}
-
 static bool ept_emulation_fault(hvf_slot *slot, uint64_t gpa, uint64_t ept_qual)
 {
     int read, write;
@@ -339,78 +158,6 @@ static bool ept_emulation_fault(hvf_slot *slot, uint64_t gpa, uint64_t ept_qual)
     return false;
 }
 
-static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on)
-{
-    hvf_slot *slot;
-
-    slot = hvf_find_overlap_slot(
-            section->offset_within_address_space,
-            int128_get64(section->size));
-
-    /* protect region against writes; begin tracking it */
-    if (on) {
-        slot->flags |= HVF_SLOT_LOG;
-        hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
-                      HV_MEMORY_READ);
-    /* stop tracking region*/
-    } else {
-        slot->flags &= ~HVF_SLOT_LOG;
-        hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
-                      HV_MEMORY_READ | HV_MEMORY_WRITE);
-    }
-}
-
-static void hvf_log_start(MemoryListener *listener,
-                          MemoryRegionSection *section, int old, int new)
-{
-    if (old != 0) {
-        return;
-    }
-
-    hvf_set_dirty_tracking(section, 1);
-}
-
-static void hvf_log_stop(MemoryListener *listener,
-                         MemoryRegionSection *section, int old, int new)
-{
-    if (new != 0) {
-        return;
-    }
-
-    hvf_set_dirty_tracking(section, 0);
-}
-
-static void hvf_log_sync(MemoryListener *listener,
-                         MemoryRegionSection *section)
-{
-    /*
-     * sync of dirty pages is handled elsewhere; just make sure we keep
-     * tracking the region.
-     */
-    hvf_set_dirty_tracking(section, 1);
-}
-
-static void hvf_region_add(MemoryListener *listener,
-                           MemoryRegionSection *section)
-{
-    hvf_set_phys_mem(section, true);
-}
-
-static void hvf_region_del(MemoryListener *listener,
-                           MemoryRegionSection *section)
-{
-    hvf_set_phys_mem(section, false);
-}
-
-static MemoryListener hvf_memory_listener = {
-    .priority = 10,
-    .region_add = hvf_region_add,
-    .region_del = hvf_region_del,
-    .log_start = hvf_log_start,
-    .log_stop = hvf_log_stop,
-    .log_sync = hvf_log_sync,
-};
-
 void hvf_vcpu_destroy(CPUState *cpu)
 {
     X86CPU *x86_cpu = X86_CPU(cpu);
@@ -421,10 +168,6 @@ void hvf_vcpu_destroy(CPUState *cpu)
     assert_hvf_ok(ret);
 }
 
-static void dummy_signal(int sig)
-{
-}
-
 static void init_tsc_freq(CPUX86State *env)
 {
     size_t length;
@@ -931,48 +674,3 @@ int hvf_vcpu_exec(CPUState *cpu)
 
     return ret;
 }
-
-bool hvf_allowed;
-
-static int hvf_accel_init(MachineState *ms)
-{
-    int x;
-    hv_return_t ret;
-    HVFState *s;
-
-    ret = hv_vm_create(HV_VM_DEFAULT);
-    assert_hvf_ok(ret);
-
-    s = g_new0(HVFState, 1);
- 
-    s->num_slots = 32;
-    for (x = 0; x < s->num_slots; ++x) {
-        s->slots[x].size = 0;
-        s->slots[x].slot_id = x;
-    }
-  
-    hvf_state = s;
-    memory_listener_register(&hvf_memory_listener, &address_space_memory);
-    return 0;
-}
-
-static void hvf_accel_class_init(ObjectClass *oc, void *data)
-{
-    AccelClass *ac = ACCEL_CLASS(oc);
-    ac->name = "HVF";
-    ac->init_machine = hvf_accel_init;
-    ac->allowed = &hvf_allowed;
-}
-
-static const TypeInfo hvf_accel_type = {
-    .name = TYPE_HVF_ACCEL,
-    .parent = TYPE_ACCEL,
-    .class_init = hvf_accel_class_init,
-};
-
-static void hvf_type_init(void)
-{
-    type_register_static(&hvf_accel_type);
-}
-
-type_init(hvf_type_init);
diff --git a/target/i386/hvf/x86hvf.h b/target/i386/hvf/x86hvf.h
index 635ab0f34e..99ed8d608d 100644
--- a/target/i386/hvf/x86hvf.h
+++ b/target/i386/hvf/x86hvf.h
@@ -21,8 +21,6 @@
 #include "x86_descr.h"
 
 int hvf_process_events(CPUState *);
-int hvf_put_registers(CPUState *);
-int hvf_get_registers(CPUState *);
 bool hvf_inject_interrupts(CPUState *);
 void hvf_set_segment(struct CPUState *cpu, struct vmx_segment *vmx_seg,
                      SegmentCache *qseg, bool is_tr);
-- 
2.30.1 (Apple Git-130)
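
A note on the slot lookup above: hvf_find_overlap_slot() uses the
standard half-open interval overlap test, i.e. a request
[start, start + size) overlaps a slot
[slot->start, slot->start + slot->size) exactly when
start < slot->start + slot->size and start + size > slot->start. A
self-contained restatement of that predicate (names are illustrative,
not taken from the patch):

    #include <stdbool.h>
    #include <stdint.h>

    /* Same overlap test as hvf_find_overlap_slot(), on plain ranges. */
    static bool ranges_overlap(uint64_t a_start, uint64_t a_size,
                               uint64_t b_start, uint64_t b_size)
    {
        return a_start < (b_start + b_size) &&
               (a_start + a_size) > b_start;
    }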




* [PATCH v8 04/19] hvf: Move hvf internal definitions into common header
  2021-05-19 20:22 [PATCH v8 00/19] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (2 preceding siblings ...)
  2021-05-19 20:22 ` [PATCH v8 03/19] hvf: Move cpu " Alexander Graf
@ 2021-05-19 20:22 ` Alexander Graf
  2021-05-27 10:04   ` Sergio Lopez
  2021-05-19 20:22 ` [PATCH v8 05/19] hvf: Make hvf_set_phys_mem() static Alexander Graf
                   ` (16 subsequent siblings)
  20 siblings, 1 reply; 66+ messages in thread
From: Alexander Graf @ 2021-05-19 20:22 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Peter Collingbourne

Until now, Hypervisor.framework has only been available on x86_64 systems.
With Apple Silicon shipping now, it extends its reach to aarch64. To
prepare for support for multiple architectures, let's start moving common
code out into its own accel directory.

This patch moves a few internal struct and constant defines over.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
---
 include/sysemu/hvf_int.h   | 30 ++++++++++++++++++++++++++++++
 target/i386/hvf/hvf-i386.h | 31 +------------------------------
 2 files changed, 31 insertions(+), 30 deletions(-)

diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
index 4c657b054c..ef84a24dd9 100644
--- a/include/sysemu/hvf_int.h
+++ b/include/sysemu/hvf_int.h
@@ -13,6 +13,36 @@
 
 #include <Hypervisor/hv.h>
 
+/* hvf_slot flags */
+#define HVF_SLOT_LOG (1 << 0)
+
+typedef struct hvf_slot {
+    uint64_t start;
+    uint64_t size;
+    uint8_t *mem;
+    int slot_id;
+    uint32_t flags;
+    MemoryRegion *region;
+} hvf_slot;
+
+typedef struct hvf_vcpu_caps {
+    uint64_t vmx_cap_pinbased;
+    uint64_t vmx_cap_procbased;
+    uint64_t vmx_cap_procbased2;
+    uint64_t vmx_cap_entry;
+    uint64_t vmx_cap_exit;
+    uint64_t vmx_cap_preemption_timer;
+} hvf_vcpu_caps;
+
+struct HVFState {
+    AccelState parent;
+    hvf_slot slots[32];
+    int num_slots;
+
+    hvf_vcpu_caps *hvf_caps;
+};
+extern HVFState *hvf_state;
+
 void hvf_set_phys_mem(MemoryRegionSection *, bool);
 void assert_hvf_ok(hv_return_t ret);
 hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
diff --git a/target/i386/hvf/hvf-i386.h b/target/i386/hvf/hvf-i386.h
index 94e5c788c4..76e9235524 100644
--- a/target/i386/hvf/hvf-i386.h
+++ b/target/i386/hvf/hvf-i386.h
@@ -18,39 +18,10 @@
 
 #include "qemu/accel.h"
 #include "sysemu/hvf.h"
+#include "sysemu/hvf_int.h"
 #include "cpu.h"
 #include "x86.h"
 
-/* hvf_slot flags */
-#define HVF_SLOT_LOG (1 << 0)
-
-typedef struct hvf_slot {
-    uint64_t start;
-    uint64_t size;
-    uint8_t *mem;
-    int slot_id;
-    uint32_t flags;
-    MemoryRegion *region;
-} hvf_slot;
-
-typedef struct hvf_vcpu_caps {
-    uint64_t vmx_cap_pinbased;
-    uint64_t vmx_cap_procbased;
-    uint64_t vmx_cap_procbased2;
-    uint64_t vmx_cap_entry;
-    uint64_t vmx_cap_exit;
-    uint64_t vmx_cap_preemption_timer;
-} hvf_vcpu_caps;
-
-struct HVFState {
-    AccelState parent;
-    hvf_slot slots[32];
-    int num_slots;
-
-    hvf_vcpu_caps *hvf_caps;
-};
-extern HVFState *hvf_state;
-
 void hvf_handle_io(CPUArchState *, uint16_t, void *, int, int, int);
 
 #ifdef NEED_CPU_H
-- 
2.30.1 (Apple Git-130)




* [PATCH v8 05/19] hvf: Make hvf_set_phys_mem() static
  2021-05-19 20:22 [PATCH v8 00/19] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (3 preceding siblings ...)
  2021-05-19 20:22 ` [PATCH v8 04/19] hvf: Move hvf internal definitions into common header Alexander Graf
@ 2021-05-19 20:22 ` Alexander Graf
  2021-05-27 10:06   ` Sergio Lopez
  2021-05-19 20:22 ` [PATCH v8 06/19] hvf: Remove use of hv_uvaddr_t and hv_gpaddr_t Alexander Graf
                   ` (15 subsequent siblings)
  20 siblings, 1 reply; 66+ messages in thread
From: Alexander Graf @ 2021-05-19 20:22 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Peter Collingbourne

The hvf_set_phys_mem() function is only called within the same file.
Make it static.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
---
 accel/hvf/hvf-accel-ops.c | 2 +-
 include/sysemu/hvf_int.h  | 1 -
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index c2136dfbb8..5bec7b4d6d 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -114,7 +114,7 @@ static int do_hvf_set_memory(hvf_slot *slot, hv_memory_flags_t flags)
     return 0;
 }
 
-void hvf_set_phys_mem(MemoryRegionSection *section, bool add)
+static void hvf_set_phys_mem(MemoryRegionSection *section, bool add)
 {
     hvf_slot *mem;
     MemoryRegion *area = section->mr;
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
index ef84a24dd9..d15fa3302a 100644
--- a/include/sysemu/hvf_int.h
+++ b/include/sysemu/hvf_int.h
@@ -43,7 +43,6 @@ struct HVFState {
 };
 extern HVFState *hvf_state;
 
-void hvf_set_phys_mem(MemoryRegionSection *, bool);
 void assert_hvf_ok(hv_return_t ret);
 hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
 int hvf_put_registers(CPUState *);
-- 
2.30.1 (Apple Git-130)




* [PATCH v8 06/19] hvf: Remove use of hv_uvaddr_t and hv_gpaddr_t
  2021-05-19 20:22 [PATCH v8 00/19] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (4 preceding siblings ...)
  2021-05-19 20:22 ` [PATCH v8 05/19] hvf: Make hvf_set_phys_mem() static Alexander Graf
@ 2021-05-19 20:22 ` Alexander Graf
  2021-05-27 10:07   ` Sergio Lopez
  2021-05-19 20:22 ` [PATCH v8 07/19] hvf: Split out common code on vcpu init and destroy Alexander Graf
                   ` (14 subsequent siblings)
  20 siblings, 1 reply; 66+ messages in thread
From: Alexander Graf @ 2021-05-19 20:22 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Peter Collingbourne

The ARM version of Hypervisor.framework no longer defines these two
types, so let's just revert to standard ones.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
---
 accel/hvf/hvf-accel-ops.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index 5bec7b4d6d..7370fcfba0 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -109,7 +109,7 @@ static int do_hvf_set_memory(hvf_slot *slot, hv_memory_flags_t flags)
     macslot->present = 1;
     macslot->gpa_start = slot->start;
     macslot->size = slot->size;
-    ret = hv_vm_map((hv_uvaddr_t)slot->mem, slot->start, slot->size, flags);
+    ret = hv_vm_map(slot->mem, slot->start, slot->size, flags);
     assert_hvf_ok(ret);
     return 0;
 }
@@ -253,12 +253,12 @@ static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on)
     /* protect region against writes; begin tracking it */
     if (on) {
         slot->flags |= HVF_SLOT_LOG;
-        hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
+        hv_vm_protect((uintptr_t)slot->start, (size_t)slot->size,
                       HV_MEMORY_READ);
     /* stop tracking region*/
     } else {
         slot->flags &= ~HVF_SLOT_LOG;
-        hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
+        hv_vm_protect((uintptr_t)slot->start, (size_t)slot->size,
                       HV_MEMORY_READ | HV_MEMORY_WRITE);
     }
 }
-- 
2.30.1 (Apple Git-130)




* [PATCH v8 07/19] hvf: Split out common code on vcpu init and destroy
  2021-05-19 20:22 [PATCH v8 00/19] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (5 preceding siblings ...)
  2021-05-19 20:22 ` [PATCH v8 06/19] hvf: Remove use of hv_uvaddr_t and hv_gpaddr_t Alexander Graf
@ 2021-05-19 20:22 ` Alexander Graf
  2021-05-27 10:09   ` Sergio Lopez
  2021-05-19 20:22 ` [PATCH v8 08/19] hvf: Use cpu_synchronize_state() Alexander Graf
                   ` (13 subsequent siblings)
  20 siblings, 1 reply; 66+ messages in thread
From: Alexander Graf @ 2021-05-19 20:22 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Peter Collingbourne

Until now, Hypervisor.framework has only been available on x86_64 systems.
With Apple Silicon shipping now, it extends its reach to aarch64. To
prepare for support for multiple architectures, let's start moving common
code out into its own accel directory.

This patch splits the vcpu init and destroy functions into a generic and
an architecture-specific portion. This also allows us to move the generic
functions into the common hvf code, removing exported functions.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
---
 accel/hvf/hvf-accel-ops.c | 30 ++++++++++++++++++++++++++++++
 accel/hvf/hvf-accel-ops.h |  2 --
 include/sysemu/hvf_int.h  |  2 ++
 target/i386/hvf/hvf.c     | 23 ++---------------------
 4 files changed, 34 insertions(+), 23 deletions(-)

diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index 7370fcfba0..b262efd8b6 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -363,6 +363,36 @@ static void hvf_type_init(void)
 
 type_init(hvf_type_init);
 
+static void hvf_vcpu_destroy(CPUState *cpu)
+{
+    hv_return_t ret = hv_vcpu_destroy(cpu->hvf_fd);
+    assert_hvf_ok(ret);
+
+    hvf_arch_vcpu_destroy(cpu);
+}
+
+static int hvf_init_vcpu(CPUState *cpu)
+{
+    int r;
+
+    /* init cpu signals */
+    sigset_t set;
+    struct sigaction sigact;
+
+    memset(&sigact, 0, sizeof(sigact));
+    sigact.sa_handler = dummy_signal;
+    sigaction(SIG_IPI, &sigact, NULL);
+
+    pthread_sigmask(SIG_BLOCK, NULL, &set);
+    sigdelset(&set, SIG_IPI);
+
+    r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf_fd, HV_VCPU_DEFAULT);
+    cpu->vcpu_dirty = 1;
+    assert_hvf_ok(r);
+
+    return hvf_arch_init_vcpu(cpu);
+}
+
 /*
  * The HVF-specific vCPU thread function. This one should only run when the host
  * CPU supports the VMX "unrestricted guest" feature.
diff --git a/accel/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h
index 8f992da168..09fcf22067 100644
--- a/accel/hvf/hvf-accel-ops.h
+++ b/accel/hvf/hvf-accel-ops.h
@@ -12,12 +12,10 @@
 
 #include "sysemu/cpus.h"
 
-int hvf_init_vcpu(CPUState *);
 int hvf_vcpu_exec(CPUState *);
 void hvf_cpu_synchronize_state(CPUState *);
 void hvf_cpu_synchronize_post_reset(CPUState *);
 void hvf_cpu_synchronize_post_init(CPUState *);
 void hvf_cpu_synchronize_pre_loadvm(CPUState *);
-void hvf_vcpu_destroy(CPUState *);
 
 #endif /* HVF_CPUS_H */
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
index d15fa3302a..80c1a8f946 100644
--- a/include/sysemu/hvf_int.h
+++ b/include/sysemu/hvf_int.h
@@ -44,6 +44,8 @@ struct HVFState {
 extern HVFState *hvf_state;
 
 void assert_hvf_ok(hv_return_t ret);
+int hvf_arch_init_vcpu(CPUState *cpu);
+void hvf_arch_vcpu_destroy(CPUState *cpu);
 hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
 int hvf_put_registers(CPUState *);
 int hvf_get_registers(CPUState *);
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index 100ede2a4d..c7132ee370 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -158,14 +158,12 @@ static bool ept_emulation_fault(hvf_slot *slot, uint64_t gpa, uint64_t ept_qual)
     return false;
 }
 
-void hvf_vcpu_destroy(CPUState *cpu)
+void hvf_arch_vcpu_destroy(CPUState *cpu)
 {
     X86CPU *x86_cpu = X86_CPU(cpu);
     CPUX86State *env = &x86_cpu->env;
 
-    hv_return_t ret = hv_vcpu_destroy((hv_vcpuid_t)cpu->hvf_fd);
     g_free(env->hvf_mmio_buf);
-    assert_hvf_ok(ret);
 }
 
 static void init_tsc_freq(CPUX86State *env)
@@ -210,23 +208,10 @@ static inline bool apic_bus_freq_is_known(CPUX86State *env)
     return env->apic_bus_freq != 0;
 }
 
-int hvf_init_vcpu(CPUState *cpu)
+int hvf_arch_init_vcpu(CPUState *cpu)
 {
-
     X86CPU *x86cpu = X86_CPU(cpu);
     CPUX86State *env = &x86cpu->env;
-    int r;
-
-    /* init cpu signals */
-    sigset_t set;
-    struct sigaction sigact;
-
-    memset(&sigact, 0, sizeof(sigact));
-    sigact.sa_handler = dummy_signal;
-    sigaction(SIG_IPI, &sigact, NULL);
-
-    pthread_sigmask(SIG_BLOCK, NULL, &set);
-    sigdelset(&set, SIG_IPI);
 
     init_emu();
     init_decoder();
@@ -243,10 +228,6 @@ int hvf_init_vcpu(CPUState *cpu)
         }
     }
 
-    r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf_fd, HV_VCPU_DEFAULT);
-    cpu->vcpu_dirty = 1;
-    assert_hvf_ok(r);
-
     if (hv_vmx_read_capability(HV_VMX_CAP_PINBASED,
         &hvf_state->hvf_caps->vmx_cap_pinbased)) {
         abort();
-- 
2.30.1 (Apple Git-130)




* [PATCH v8 08/19] hvf: Use cpu_synchronize_state()
  2021-05-19 20:22 [PATCH v8 00/19] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (6 preceding siblings ...)
  2021-05-19 20:22 ` [PATCH v8 07/19] hvf: Split out common code on vcpu init and destroy Alexander Graf
@ 2021-05-19 20:22 ` Alexander Graf
  2021-05-27 10:15   ` Sergio Lopez
  2021-05-19 20:22 ` [PATCH v8 09/19] hvf: Make synchronize functions static Alexander Graf
                   ` (12 subsequent siblings)
  20 siblings, 1 reply; 66+ messages in thread
From: Alexander Graf @ 2021-05-19 20:22 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Peter Collingbourne

There is no reason to call the hvf-specific hvf_cpu_synchronize_state()
when we can just use the generic cpu_synchronize_state() instead. This
reduces our dependency on internal function definitions and allows us to
make hvf_cpu_synchronize_state() static.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
---
 accel/hvf/hvf-accel-ops.c | 2 +-
 accel/hvf/hvf-accel-ops.h | 1 -
 target/i386/hvf/x86hvf.c  | 9 ++++-----
 3 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index b262efd8b6..3b599ac57c 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -200,7 +200,7 @@ static void do_hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
     }
 }
 
-void hvf_cpu_synchronize_state(CPUState *cpu)
+static void hvf_cpu_synchronize_state(CPUState *cpu)
 {
     if (!cpu->vcpu_dirty) {
         run_on_cpu(cpu, do_hvf_cpu_synchronize_state, RUN_ON_CPU_NULL);
diff --git a/accel/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h
index 09fcf22067..f6192b56f0 100644
--- a/accel/hvf/hvf-accel-ops.h
+++ b/accel/hvf/hvf-accel-ops.h
@@ -13,7 +13,6 @@
 #include "sysemu/cpus.h"
 
 int hvf_vcpu_exec(CPUState *);
-void hvf_cpu_synchronize_state(CPUState *);
 void hvf_cpu_synchronize_post_reset(CPUState *);
 void hvf_cpu_synchronize_post_init(CPUState *);
 void hvf_cpu_synchronize_pre_loadvm(CPUState *);
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index 2b99f3eaa2..cc381307ab 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -26,14 +26,13 @@
 #include "cpu.h"
 #include "x86_descr.h"
 #include "x86_decode.h"
+#include "sysemu/hw_accel.h"
 
 #include "hw/i386/apic_internal.h"
 
 #include <Hypervisor/hv.h>
 #include <Hypervisor/hv_vmx.h>
 
-#include "accel/hvf/hvf-accel-ops.h"
-
 void hvf_set_segment(struct CPUState *cpu, struct vmx_segment *vmx_seg,
                      SegmentCache *qseg, bool is_tr)
 {
@@ -437,7 +436,7 @@ int hvf_process_events(CPUState *cpu_state)
     env->eflags = rreg(cpu_state->hvf_fd, HV_X86_RFLAGS);
 
     if (cpu_state->interrupt_request & CPU_INTERRUPT_INIT) {
-        hvf_cpu_synchronize_state(cpu_state);
+        cpu_synchronize_state(cpu_state);
         do_cpu_init(cpu);
     }
 
@@ -451,12 +450,12 @@ int hvf_process_events(CPUState *cpu_state)
         cpu_state->halted = 0;
     }
     if (cpu_state->interrupt_request & CPU_INTERRUPT_SIPI) {
-        hvf_cpu_synchronize_state(cpu_state);
+        cpu_synchronize_state(cpu_state);
         do_cpu_sipi(cpu);
     }
     if (cpu_state->interrupt_request & CPU_INTERRUPT_TPR) {
         cpu_state->interrupt_request &= ~CPU_INTERRUPT_TPR;
-        hvf_cpu_synchronize_state(cpu_state);
+        cpu_synchronize_state(cpu_state);
         apic_handle_tpr_access_report(cpu->apic_state, env->eip,
                                       env->tpr_access_type);
     }
-- 
2.30.1 (Apple Git-130)
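
The swap to cpu_synchronize_state() works because QEMU routes the
generic call through whatever accelerator registered its ops. A
simplified sketch of that dispatch, not the literal QEMU
implementation:

    /* Simplified: the generic entry point defers to the active accel. */
    void cpu_synchronize_state(CPUState *cpu)
    {
        if (cpus_accel && cpus_accel->synchronize_state) {
            /* For HVF this ends up in hvf_cpu_synchronize_state(). */
            cpus_accel->synchronize_state(cpu);
        }
    }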




* [PATCH v8 09/19] hvf: Make synchronize functions static
  2021-05-19 20:22 [PATCH v8 00/19] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (7 preceding siblings ...)
  2021-05-19 20:22 ` [PATCH v8 08/19] hvf: Use cpu_synchronize_state() Alexander Graf
@ 2021-05-19 20:22 ` Alexander Graf
  2021-05-27 10:15   ` Sergio Lopez
  2021-05-19 20:22 ` [PATCH v8 10/19] hvf: Remove hvf-accel-ops.h Alexander Graf
                   ` (11 subsequent siblings)
  20 siblings, 1 reply; 66+ messages in thread
From: Alexander Graf @ 2021-05-19 20:22 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Peter Collingbourne

The hvf accel synchronize functions are only used as input for local
callback functions, so we can make them static.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
---
 accel/hvf/hvf-accel-ops.c | 6 +++---
 accel/hvf/hvf-accel-ops.h | 3 ---
 2 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index 3b599ac57c..69741ce708 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -214,7 +214,7 @@ static void do_hvf_cpu_synchronize_post_reset(CPUState *cpu,
     cpu->vcpu_dirty = false;
 }
 
-void hvf_cpu_synchronize_post_reset(CPUState *cpu)
+static void hvf_cpu_synchronize_post_reset(CPUState *cpu)
 {
     run_on_cpu(cpu, do_hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NULL);
 }
@@ -226,7 +226,7 @@ static void do_hvf_cpu_synchronize_post_init(CPUState *cpu,
     cpu->vcpu_dirty = false;
 }
 
-void hvf_cpu_synchronize_post_init(CPUState *cpu)
+static void hvf_cpu_synchronize_post_init(CPUState *cpu)
 {
     run_on_cpu(cpu, do_hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
 }
@@ -237,7 +237,7 @@ static void do_hvf_cpu_synchronize_pre_loadvm(CPUState *cpu,
     cpu->vcpu_dirty = true;
 }
 
-void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu)
+static void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu)
 {
     run_on_cpu(cpu, do_hvf_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL);
 }
diff --git a/accel/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h
index f6192b56f0..018a4e22f6 100644
--- a/accel/hvf/hvf-accel-ops.h
+++ b/accel/hvf/hvf-accel-ops.h
@@ -13,8 +13,5 @@
 #include "sysemu/cpus.h"
 
 int hvf_vcpu_exec(CPUState *);
-void hvf_cpu_synchronize_post_reset(CPUState *);
-void hvf_cpu_synchronize_post_init(CPUState *);
-void hvf_cpu_synchronize_pre_loadvm(CPUState *);
 
 #endif /* HVF_CPUS_H */
-- 
2.30.1 (Apple Git-130)




* [PATCH v8 10/19] hvf: Remove hvf-accel-ops.h
  2021-05-19 20:22 [PATCH v8 00/19] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (8 preceding siblings ...)
  2021-05-19 20:22 ` [PATCH v8 09/19] hvf: Make synchronize functions static Alexander Graf
@ 2021-05-19 20:22 ` Alexander Graf
  2021-05-27 10:16   ` Sergio Lopez
  2021-05-19 20:22 ` [PATCH v8 11/19] hvf: Introduce hvf vcpu struct Alexander Graf
                   ` (10 subsequent siblings)
  20 siblings, 1 reply; 66+ messages in thread
From: Alexander Graf @ 2021-05-19 20:22 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Peter Collingbourne

We can move the declaration of hvf_vcpu_exec() into our internal
hvf header, removing the need for hvf-accel-ops.h.
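
For reference, a short sketch of the resulting declarations in
include/sysemu/hvf_int.h (matching the hunk below):

    void assert_hvf_ok(hv_return_t ret);
    int hvf_arch_init_vcpu(CPUState *cpu);
    void hvf_arch_vcpu_destroy(CPUState *cpu);
    int hvf_vcpu_exec(CPUState *);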

Signed-off-by: Alexander Graf <agraf@csgraf.de>
---
 accel/hvf/hvf-accel-ops.c |  2 --
 accel/hvf/hvf-accel-ops.h | 17 -----------------
 include/sysemu/hvf_int.h  |  1 +
 target/i386/hvf/hvf.c     |  2 --
 4 files changed, 1 insertion(+), 21 deletions(-)
 delete mode 100644 accel/hvf/hvf-accel-ops.h

diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index 69741ce708..14fc49791e 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -58,8 +58,6 @@
 #include "sysemu/runstate.h"
 #include "qemu/guest-random.h"
 
-#include "hvf-accel-ops.h"
-
 HVFState *hvf_state;
 
 /* Memory slots */
diff --git a/accel/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h
deleted file mode 100644
index 018a4e22f6..0000000000
--- a/accel/hvf/hvf-accel-ops.h
+++ /dev/null
@@ -1,17 +0,0 @@
-/*
- * Accelerator CPUS Interface
- *
- * Copyright 2020 SUSE LLC
- *
- * This work is licensed under the terms of the GNU GPL, version 2 or later.
- * See the COPYING file in the top-level directory.
- */
-
-#ifndef HVF_CPUS_H
-#define HVF_CPUS_H
-
-#include "sysemu/cpus.h"
-
-int hvf_vcpu_exec(CPUState *);
-
-#endif /* HVF_CPUS_H */
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
index 80c1a8f946..fd1dcaf26e 100644
--- a/include/sysemu/hvf_int.h
+++ b/include/sysemu/hvf_int.h
@@ -46,6 +46,7 @@ extern HVFState *hvf_state;
 void assert_hvf_ok(hv_return_t ret);
 int hvf_arch_init_vcpu(CPUState *cpu);
 void hvf_arch_vcpu_destroy(CPUState *cpu);
+int hvf_vcpu_exec(CPUState *);
 hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
 int hvf_put_registers(CPUState *);
 int hvf_get_registers(CPUState *);
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index c7132ee370..02f7be6cfd 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -73,8 +73,6 @@
 #include "qemu/accel.h"
 #include "target/i386/cpu.h"
 
-#include "hvf-accel-ops.h"
-
 void vmx_update_tpr(CPUState *cpu)
 {
     /* TODO: need integrate APIC handling */
-- 
2.30.1 (Apple Git-130)




* [PATCH v8 11/19] hvf: Introduce hvf vcpu struct
  2021-05-19 20:22 [PATCH v8 00/19] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (9 preceding siblings ...)
  2021-05-19 20:22 ` [PATCH v8 10/19] hvf: Remove hvf-accel-ops.h Alexander Graf
@ 2021-05-19 20:22 ` Alexander Graf
  2021-05-27 10:18   ` Sergio Lopez
  2021-05-19 20:22 ` [PATCH v8 12/19] hvf: Simplify post reset/init/loadvm hooks Alexander Graf
                   ` (9 subsequent siblings)
  20 siblings, 1 reply; 66+ messages in thread
From: Alexander Graf @ 2021-05-19 20:22 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, Cameron Esfahani, Alex Bennée,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Peter Collingbourne

Going forward, hvf will need more than a single field of per-vCPU
state. To keep the global vcpu struct uncluttered, let's allocate a
dedicated hvf vcpu struct, similar to how hax does it.
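
A minimal sketch of the resulting layout, mirroring the hunks below: the
per-vCPU hvf state hangs off CPUState and is heap-allocated at vcpu init:

    struct hvf_vcpu_state {
        int fd;    /* Hypervisor.framework vcpu handle */
    };

    /* hvf_init_vcpu() */
    cpu->hvf = g_malloc0(sizeof(*cpu->hvf));
    r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf->fd, HV_VCPU_DEFAULT);

    /* hvf_vcpu_destroy() */
    hv_return_t ret = hv_vcpu_destroy(cpu->hvf->fd);
    assert_hvf_ok(ret);
    hvf_arch_vcpu_destroy(cpu);
    g_free(cpu->hvf);
    cpu->hvf = NULL;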

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
Tested-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

---

v4 -> v5:

  - Use g_free() on destroy
---
 accel/hvf/hvf-accel-ops.c   |   8 +-
 include/hw/core/cpu.h       |   3 +-
 include/sysemu/hvf_int.h    |   4 +
 target/i386/hvf/hvf.c       | 104 +++++++++---------
 target/i386/hvf/vmx.h       |  24 +++--
 target/i386/hvf/x86.c       |  28 ++---
 target/i386/hvf/x86_descr.c |  26 ++---
 target/i386/hvf/x86_emu.c   |  62 +++++------
 target/i386/hvf/x86_mmu.c   |   4 +-
 target/i386/hvf/x86_task.c  |  12 +--
 target/i386/hvf/x86hvf.c    | 210 ++++++++++++++++++------------------
 11 files changed, 248 insertions(+), 237 deletions(-)

diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index 14fc49791e..ded918c443 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -363,16 +363,20 @@ type_init(hvf_type_init);
 
 static void hvf_vcpu_destroy(CPUState *cpu)
 {
-    hv_return_t ret = hv_vcpu_destroy(cpu->hvf_fd);
+    hv_return_t ret = hv_vcpu_destroy(cpu->hvf->fd);
     assert_hvf_ok(ret);
 
     hvf_arch_vcpu_destroy(cpu);
+    g_free(cpu->hvf);
+    cpu->hvf = NULL;
 }
 
 static int hvf_init_vcpu(CPUState *cpu)
 {
     int r;
 
+    cpu->hvf = g_malloc0(sizeof(*cpu->hvf));
+
     /* init cpu signals */
     sigset_t set;
     struct sigaction sigact;
@@ -384,7 +388,7 @@ static int hvf_init_vcpu(CPUState *cpu)
     pthread_sigmask(SIG_BLOCK, NULL, &set);
     sigdelset(&set, SIG_IPI);
 
-    r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf_fd, HV_VCPU_DEFAULT);
+    r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf->fd, HV_VCPU_DEFAULT);
     cpu->vcpu_dirty = 1;
     assert_hvf_ok(r);
 
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index d45f78290e..47d82a5aa9 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -251,6 +251,7 @@ struct KVMState;
 struct kvm_run;
 
 struct hax_vcpu_state;
+struct hvf_vcpu_state;
 
 #define TB_JMP_CACHE_BITS 12
 #define TB_JMP_CACHE_SIZE (1 << TB_JMP_CACHE_BITS)
@@ -436,7 +437,7 @@ struct CPUState {
 
     struct hax_vcpu_state *hax_vcpu;
 
-    int hvf_fd;
+    struct hvf_vcpu_state *hvf;
 
     /* track IOMMUs whose translations we've cached in the TCG TLB */
     GArray *iommu_notifiers;
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
index fd1dcaf26e..8b66a4e7d0 100644
--- a/include/sysemu/hvf_int.h
+++ b/include/sysemu/hvf_int.h
@@ -43,6 +43,10 @@ struct HVFState {
 };
 extern HVFState *hvf_state;
 
+struct hvf_vcpu_state {
+    int fd;
+};
+
 void assert_hvf_ok(hv_return_t ret);
 int hvf_arch_init_vcpu(CPUState *cpu);
 void hvf_arch_vcpu_destroy(CPUState *cpu);
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index 02f7be6cfd..346dbcc26f 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -80,11 +80,11 @@ void vmx_update_tpr(CPUState *cpu)
     int tpr = cpu_get_apic_tpr(x86_cpu->apic_state) << 4;
     int irr = apic_get_highest_priority_irr(x86_cpu->apic_state);
 
-    wreg(cpu->hvf_fd, HV_X86_TPR, tpr);
+    wreg(cpu->hvf->fd, HV_X86_TPR, tpr);
     if (irr == -1) {
-        wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, 0);
+        wvmcs(cpu->hvf->fd, VMCS_TPR_THRESHOLD, 0);
     } else {
-        wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, (irr > tpr) ? tpr >> 4 :
+        wvmcs(cpu->hvf->fd, VMCS_TPR_THRESHOLD, (irr > tpr) ? tpr >> 4 :
               irr >> 4);
     }
 }
@@ -92,7 +92,7 @@ void vmx_update_tpr(CPUState *cpu)
 static void update_apic_tpr(CPUState *cpu)
 {
     X86CPU *x86_cpu = X86_CPU(cpu);
-    int tpr = rreg(cpu->hvf_fd, HV_X86_TPR) >> 4;
+    int tpr = rreg(cpu->hvf->fd, HV_X86_TPR) >> 4;
     cpu_set_apic_tpr(x86_cpu->apic_state, tpr);
 }
 
@@ -244,43 +244,43 @@ int hvf_arch_init_vcpu(CPUState *cpu)
     }
 
     /* set VMCS control fields */
-    wvmcs(cpu->hvf_fd, VMCS_PIN_BASED_CTLS,
+    wvmcs(cpu->hvf->fd, VMCS_PIN_BASED_CTLS,
           cap2ctrl(hvf_state->hvf_caps->vmx_cap_pinbased,
           VMCS_PIN_BASED_CTLS_EXTINT |
           VMCS_PIN_BASED_CTLS_NMI |
           VMCS_PIN_BASED_CTLS_VNMI));
-    wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS,
+    wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS,
           cap2ctrl(hvf_state->hvf_caps->vmx_cap_procbased,
           VMCS_PRI_PROC_BASED_CTLS_HLT |
           VMCS_PRI_PROC_BASED_CTLS_MWAIT |
           VMCS_PRI_PROC_BASED_CTLS_TSC_OFFSET |
           VMCS_PRI_PROC_BASED_CTLS_TPR_SHADOW) |
           VMCS_PRI_PROC_BASED_CTLS_SEC_CONTROL);
-    wvmcs(cpu->hvf_fd, VMCS_SEC_PROC_BASED_CTLS,
+    wvmcs(cpu->hvf->fd, VMCS_SEC_PROC_BASED_CTLS,
           cap2ctrl(hvf_state->hvf_caps->vmx_cap_procbased2,
                    VMCS_PRI_PROC_BASED2_CTLS_APIC_ACCESSES));
 
-    wvmcs(cpu->hvf_fd, VMCS_ENTRY_CTLS, cap2ctrl(hvf_state->hvf_caps->vmx_cap_entry,
+    wvmcs(cpu->hvf->fd, VMCS_ENTRY_CTLS, cap2ctrl(hvf_state->hvf_caps->vmx_cap_entry,
           0));
-    wvmcs(cpu->hvf_fd, VMCS_EXCEPTION_BITMAP, 0); /* Double fault */
+    wvmcs(cpu->hvf->fd, VMCS_EXCEPTION_BITMAP, 0); /* Double fault */
 
-    wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, 0);
+    wvmcs(cpu->hvf->fd, VMCS_TPR_THRESHOLD, 0);
 
     x86cpu = X86_CPU(cpu);
     x86cpu->env.xsave_buf = qemu_memalign(4096, 4096);
 
-    hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_STAR, 1);
-    hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_LSTAR, 1);
-    hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_CSTAR, 1);
-    hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_FMASK, 1);
-    hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_FSBASE, 1);
-    hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_GSBASE, 1);
-    hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_KERNELGSBASE, 1);
-    hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_TSC_AUX, 1);
-    hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_TSC, 1);
-    hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_SYSENTER_CS, 1);
-    hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_SYSENTER_EIP, 1);
-    hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_SYSENTER_ESP, 1);
+    hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_STAR, 1);
+    hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_LSTAR, 1);
+    hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_CSTAR, 1);
+    hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_FMASK, 1);
+    hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_FSBASE, 1);
+    hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_GSBASE, 1);
+    hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_KERNELGSBASE, 1);
+    hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_TSC_AUX, 1);
+    hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_IA32_TSC, 1);
+    hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_IA32_SYSENTER_CS, 1);
+    hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_IA32_SYSENTER_EIP, 1);
+    hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_IA32_SYSENTER_ESP, 1);
 
     return 0;
 }
@@ -321,16 +321,16 @@ static void hvf_store_events(CPUState *cpu, uint32_t ins_len, uint64_t idtvec_in
         }
         if (idtvec_info & VMCS_IDT_VEC_ERRCODE_VALID) {
             env->has_error_code = true;
-            env->error_code = rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_ERROR);
+            env->error_code = rvmcs(cpu->hvf->fd, VMCS_IDT_VECTORING_ERROR);
         }
     }
-    if ((rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY) &
+    if ((rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY) &
         VMCS_INTERRUPTIBILITY_NMI_BLOCKING)) {
         env->hflags2 |= HF2_NMI_MASK;
     } else {
         env->hflags2 &= ~HF2_NMI_MASK;
     }
-    if (rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY) &
+    if (rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY) &
          (VMCS_INTERRUPTIBILITY_STI_BLOCKING |
          VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING)) {
         env->hflags |= HF_INHIBIT_IRQ_MASK;
@@ -409,20 +409,20 @@ int hvf_vcpu_exec(CPUState *cpu)
             return EXCP_HLT;
         }
 
-        hv_return_t r  = hv_vcpu_run(cpu->hvf_fd);
+        hv_return_t r  = hv_vcpu_run(cpu->hvf->fd);
         assert_hvf_ok(r);
 
         /* handle VMEXIT */
-        uint64_t exit_reason = rvmcs(cpu->hvf_fd, VMCS_EXIT_REASON);
-        uint64_t exit_qual = rvmcs(cpu->hvf_fd, VMCS_EXIT_QUALIFICATION);
-        uint32_t ins_len = (uint32_t)rvmcs(cpu->hvf_fd,
+        uint64_t exit_reason = rvmcs(cpu->hvf->fd, VMCS_EXIT_REASON);
+        uint64_t exit_qual = rvmcs(cpu->hvf->fd, VMCS_EXIT_QUALIFICATION);
+        uint32_t ins_len = (uint32_t)rvmcs(cpu->hvf->fd,
                                            VMCS_EXIT_INSTRUCTION_LENGTH);
 
-        uint64_t idtvec_info = rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_INFO);
+        uint64_t idtvec_info = rvmcs(cpu->hvf->fd, VMCS_IDT_VECTORING_INFO);
 
         hvf_store_events(cpu, ins_len, idtvec_info);
-        rip = rreg(cpu->hvf_fd, HV_X86_RIP);
-        env->eflags = rreg(cpu->hvf_fd, HV_X86_RFLAGS);
+        rip = rreg(cpu->hvf->fd, HV_X86_RIP);
+        env->eflags = rreg(cpu->hvf->fd, HV_X86_RFLAGS);
 
         qemu_mutex_lock_iothread();
 
@@ -452,7 +452,7 @@ int hvf_vcpu_exec(CPUState *cpu)
         case EXIT_REASON_EPT_FAULT:
         {
             hvf_slot *slot;
-            uint64_t gpa = rvmcs(cpu->hvf_fd, VMCS_GUEST_PHYSICAL_ADDRESS);
+            uint64_t gpa = rvmcs(cpu->hvf->fd, VMCS_GUEST_PHYSICAL_ADDRESS);
 
             if (((idtvec_info & VMCS_IDT_VEC_VALID) == 0) &&
                 ((exit_qual & EXIT_QUAL_NMIUDTI) != 0)) {
@@ -497,7 +497,7 @@ int hvf_vcpu_exec(CPUState *cpu)
                 store_regs(cpu);
                 break;
             } else if (!string && !in) {
-                RAX(env) = rreg(cpu->hvf_fd, HV_X86_RAX);
+                RAX(env) = rreg(cpu->hvf->fd, HV_X86_RAX);
                 hvf_handle_io(env, port, &RAX(env), 1, size, 1);
                 macvm_set_rip(cpu, rip + ins_len);
                 break;
@@ -513,21 +513,21 @@ int hvf_vcpu_exec(CPUState *cpu)
             break;
         }
         case EXIT_REASON_CPUID: {
-            uint32_t rax = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RAX);
-            uint32_t rbx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RBX);
-            uint32_t rcx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RCX);
-            uint32_t rdx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RDX);
+            uint32_t rax = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RAX);
+            uint32_t rbx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RBX);
+            uint32_t rcx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RCX);
+            uint32_t rdx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RDX);
 
             if (rax == 1) {
                 /* CPUID1.ecx.OSXSAVE needs to know CR4 */
-                env->cr[4] = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR4);
+                env->cr[4] = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR4);
             }
             hvf_cpu_x86_cpuid(env, rax, rcx, &rax, &rbx, &rcx, &rdx);
 
-            wreg(cpu->hvf_fd, HV_X86_RAX, rax);
-            wreg(cpu->hvf_fd, HV_X86_RBX, rbx);
-            wreg(cpu->hvf_fd, HV_X86_RCX, rcx);
-            wreg(cpu->hvf_fd, HV_X86_RDX, rdx);
+            wreg(cpu->hvf->fd, HV_X86_RAX, rax);
+            wreg(cpu->hvf->fd, HV_X86_RBX, rbx);
+            wreg(cpu->hvf->fd, HV_X86_RCX, rcx);
+            wreg(cpu->hvf->fd, HV_X86_RDX, rdx);
 
             macvm_set_rip(cpu, rip + ins_len);
             break;
@@ -535,16 +535,16 @@ int hvf_vcpu_exec(CPUState *cpu)
         case EXIT_REASON_XSETBV: {
             X86CPU *x86_cpu = X86_CPU(cpu);
             CPUX86State *env = &x86_cpu->env;
-            uint32_t eax = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RAX);
-            uint32_t ecx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RCX);
-            uint32_t edx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RDX);
+            uint32_t eax = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RAX);
+            uint32_t ecx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RCX);
+            uint32_t edx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RDX);
 
             if (ecx) {
                 macvm_set_rip(cpu, rip + ins_len);
                 break;
             }
             env->xcr0 = ((uint64_t)edx << 32) | eax;
-            wreg(cpu->hvf_fd, HV_X86_XCR0, env->xcr0 | 1);
+            wreg(cpu->hvf->fd, HV_X86_XCR0, env->xcr0 | 1);
             macvm_set_rip(cpu, rip + ins_len);
             break;
         }
@@ -583,11 +583,11 @@ int hvf_vcpu_exec(CPUState *cpu)
 
             switch (cr) {
             case 0x0: {
-                macvm_set_cr0(cpu->hvf_fd, RRX(env, reg));
+                macvm_set_cr0(cpu->hvf->fd, RRX(env, reg));
                 break;
             }
             case 4: {
-                macvm_set_cr4(cpu->hvf_fd, RRX(env, reg));
+                macvm_set_cr4(cpu->hvf->fd, RRX(env, reg));
                 break;
             }
             case 8: {
@@ -623,7 +623,7 @@ int hvf_vcpu_exec(CPUState *cpu)
             break;
         }
         case EXIT_REASON_TASK_SWITCH: {
-            uint64_t vinfo = rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_INFO);
+            uint64_t vinfo = rvmcs(cpu->hvf->fd, VMCS_IDT_VECTORING_INFO);
             x68_segment_selector sel = {.sel = exit_qual & 0xffff};
             vmx_handle_task_switch(cpu, sel, (exit_qual >> 30) & 0x3,
              vinfo & VMCS_INTR_VALID, vinfo & VECTORING_INFO_VECTOR_MASK, vinfo
@@ -636,8 +636,8 @@ int hvf_vcpu_exec(CPUState *cpu)
             break;
         }
         case EXIT_REASON_RDPMC:
-            wreg(cpu->hvf_fd, HV_X86_RAX, 0);
-            wreg(cpu->hvf_fd, HV_X86_RDX, 0);
+            wreg(cpu->hvf->fd, HV_X86_RAX, 0);
+            wreg(cpu->hvf->fd, HV_X86_RDX, 0);
             macvm_set_rip(cpu, rip + ins_len);
             break;
         case VMX_REASON_VMCALL:
diff --git a/target/i386/hvf/vmx.h b/target/i386/hvf/vmx.h
index 24c4cdf0be..6df87116f6 100644
--- a/target/i386/hvf/vmx.h
+++ b/target/i386/hvf/vmx.h
@@ -30,6 +30,8 @@
 #include "vmcs.h"
 #include "cpu.h"
 #include "x86.h"
+#include "sysemu/hvf.h"
+#include "sysemu/hvf_int.h"
 
 #include "exec/address-spaces.h"
 
@@ -179,15 +181,15 @@ static inline void macvm_set_rip(CPUState *cpu, uint64_t rip)
     uint64_t val;
 
     /* BUG, should take considering overlap.. */
-    wreg(cpu->hvf_fd, HV_X86_RIP, rip);
+    wreg(cpu->hvf->fd, HV_X86_RIP, rip);
     env->eip = rip;
 
     /* after moving forward in rip, we need to clean INTERRUPTABILITY */
-   val = rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY);
+   val = rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY);
    if (val & (VMCS_INTERRUPTIBILITY_STI_BLOCKING |
                VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING)) {
         env->hflags &= ~HF_INHIBIT_IRQ_MASK;
-        wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY,
+        wvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY,
                val & ~(VMCS_INTERRUPTIBILITY_STI_BLOCKING |
                VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING));
    }
@@ -199,9 +201,9 @@ static inline void vmx_clear_nmi_blocking(CPUState *cpu)
     CPUX86State *env = &x86_cpu->env;
 
     env->hflags2 &= ~HF2_NMI_MASK;
-    uint32_t gi = (uint32_t) rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY);
+    uint32_t gi = (uint32_t) rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY);
     gi &= ~VMCS_INTERRUPTIBILITY_NMI_BLOCKING;
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
+    wvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
 }
 
 static inline void vmx_set_nmi_blocking(CPUState *cpu)
@@ -210,16 +212,16 @@ static inline void vmx_set_nmi_blocking(CPUState *cpu)
     CPUX86State *env = &x86_cpu->env;
 
     env->hflags2 |= HF2_NMI_MASK;
-    uint32_t gi = (uint32_t)rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY);
+    uint32_t gi = (uint32_t)rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY);
     gi |= VMCS_INTERRUPTIBILITY_NMI_BLOCKING;
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
+    wvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
 }
 
 static inline void vmx_set_nmi_window_exiting(CPUState *cpu)
 {
     uint64_t val;
-    val = rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS);
-    wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val |
+    val = rvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS);
+    wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS, val |
           VMCS_PRI_PROC_BASED_CTLS_NMI_WINDOW_EXITING);
 
 }
@@ -228,8 +230,8 @@ static inline void vmx_clear_nmi_window_exiting(CPUState *cpu)
 {
 
     uint64_t val;
-    val = rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS);
-    wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val &
+    val = rvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS);
+    wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS, val &
           ~VMCS_PRI_PROC_BASED_CTLS_NMI_WINDOW_EXITING);
 }
 
diff --git a/target/i386/hvf/x86.c b/target/i386/hvf/x86.c
index cd045183a8..2898bb70a8 100644
--- a/target/i386/hvf/x86.c
+++ b/target/i386/hvf/x86.c
@@ -62,11 +62,11 @@ bool x86_read_segment_descriptor(struct CPUState *cpu,
     }
 
     if (GDT_SEL == sel.ti) {
-        base  = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_BASE);
-        limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_LIMIT);
+        base  = rvmcs(cpu->hvf->fd, VMCS_GUEST_GDTR_BASE);
+        limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_GDTR_LIMIT);
     } else {
-        base  = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_BASE);
-        limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_LIMIT);
+        base  = rvmcs(cpu->hvf->fd, VMCS_GUEST_LDTR_BASE);
+        limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_LDTR_LIMIT);
     }
 
     if (sel.index * 8 >= limit) {
@@ -85,11 +85,11 @@ bool x86_write_segment_descriptor(struct CPUState *cpu,
     uint32_t limit;
     
     if (GDT_SEL == sel.ti) {
-        base  = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_BASE);
-        limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_LIMIT);
+        base  = rvmcs(cpu->hvf->fd, VMCS_GUEST_GDTR_BASE);
+        limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_GDTR_LIMIT);
     } else {
-        base  = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_BASE);
-        limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_LIMIT);
+        base  = rvmcs(cpu->hvf->fd, VMCS_GUEST_LDTR_BASE);
+        limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_LDTR_LIMIT);
     }
     
     if (sel.index * 8 >= limit) {
@@ -103,8 +103,8 @@ bool x86_write_segment_descriptor(struct CPUState *cpu,
 bool x86_read_call_gate(struct CPUState *cpu, struct x86_call_gate *idt_desc,
                         int gate)
 {
-    target_ulong base  = rvmcs(cpu->hvf_fd, VMCS_GUEST_IDTR_BASE);
-    uint32_t limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_IDTR_LIMIT);
+    target_ulong base  = rvmcs(cpu->hvf->fd, VMCS_GUEST_IDTR_BASE);
+    uint32_t limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_IDTR_LIMIT);
 
     memset(idt_desc, 0, sizeof(*idt_desc));
     if (gate * 8 >= limit) {
@@ -118,7 +118,7 @@ bool x86_read_call_gate(struct CPUState *cpu, struct x86_call_gate *idt_desc,
 
 bool x86_is_protected(struct CPUState *cpu)
 {
-    uint64_t cr0 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0);
+    uint64_t cr0 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR0);
     return cr0 & CR0_PE;
 }
 
@@ -136,7 +136,7 @@ bool x86_is_v8086(struct CPUState *cpu)
 
 bool x86_is_long_mode(struct CPUState *cpu)
 {
-    return rvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER) & MSR_EFER_LMA;
+    return rvmcs(cpu->hvf->fd, VMCS_GUEST_IA32_EFER) & MSR_EFER_LMA;
 }
 
 bool x86_is_long64_mode(struct CPUState *cpu)
@@ -149,13 +149,13 @@ bool x86_is_long64_mode(struct CPUState *cpu)
 
 bool x86_is_paging_mode(struct CPUState *cpu)
 {
-    uint64_t cr0 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0);
+    uint64_t cr0 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR0);
     return cr0 & CR0_PG;
 }
 
 bool x86_is_pae_enabled(struct CPUState *cpu)
 {
-    uint64_t cr4 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR4);
+    uint64_t cr4 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR4);
     return cr4 & CR4_PAE;
 }
 
diff --git a/target/i386/hvf/x86_descr.c b/target/i386/hvf/x86_descr.c
index 9f539e73f6..af15c06ac5 100644
--- a/target/i386/hvf/x86_descr.c
+++ b/target/i386/hvf/x86_descr.c
@@ -48,47 +48,47 @@ static const struct vmx_segment_field {
 
 uint32_t vmx_read_segment_limit(CPUState *cpu, X86Seg seg)
 {
-    return (uint32_t)rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].limit);
+    return (uint32_t)rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].limit);
 }
 
 uint32_t vmx_read_segment_ar(CPUState *cpu, X86Seg seg)
 {
-    return (uint32_t)rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].ar_bytes);
+    return (uint32_t)rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].ar_bytes);
 }
 
 uint64_t vmx_read_segment_base(CPUState *cpu, X86Seg seg)
 {
-    return rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].base);
+    return rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].base);
 }
 
 x68_segment_selector vmx_read_segment_selector(CPUState *cpu, X86Seg seg)
 {
     x68_segment_selector sel;
-    sel.sel = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].selector);
+    sel.sel = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].selector);
     return sel;
 }
 
 void vmx_write_segment_selector(struct CPUState *cpu, x68_segment_selector selector, X86Seg seg)
 {
-    wvmcs(cpu->hvf_fd, vmx_segment_fields[seg].selector, selector.sel);
+    wvmcs(cpu->hvf->fd, vmx_segment_fields[seg].selector, selector.sel);
 }
 
 void vmx_read_segment_descriptor(struct CPUState *cpu, struct vmx_segment *desc, X86Seg seg)
 {
-    desc->sel = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].selector);
-    desc->base = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].base);
-    desc->limit = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].limit);
-    desc->ar = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].ar_bytes);
+    desc->sel = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].selector);
+    desc->base = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].base);
+    desc->limit = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].limit);
+    desc->ar = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].ar_bytes);
 }
 
 void vmx_write_segment_descriptor(CPUState *cpu, struct vmx_segment *desc, X86Seg seg)
 {
     const struct vmx_segment_field *sf = &vmx_segment_fields[seg];
 
-    wvmcs(cpu->hvf_fd, sf->base, desc->base);
-    wvmcs(cpu->hvf_fd, sf->limit, desc->limit);
-    wvmcs(cpu->hvf_fd, sf->selector, desc->sel);
-    wvmcs(cpu->hvf_fd, sf->ar_bytes, desc->ar);
+    wvmcs(cpu->hvf->fd, sf->base, desc->base);
+    wvmcs(cpu->hvf->fd, sf->limit, desc->limit);
+    wvmcs(cpu->hvf->fd, sf->selector, desc->sel);
+    wvmcs(cpu->hvf->fd, sf->ar_bytes, desc->ar);
 }
 
 void x86_segment_descriptor_to_vmx(struct CPUState *cpu, x68_segment_selector selector, struct x86_segment_descriptor *desc, struct vmx_segment *vmx_desc)
diff --git a/target/i386/hvf/x86_emu.c b/target/i386/hvf/x86_emu.c
index e52c39ddb1..7c8203b21f 100644
--- a/target/i386/hvf/x86_emu.c
+++ b/target/i386/hvf/x86_emu.c
@@ -674,7 +674,7 @@ void simulate_rdmsr(struct CPUState *cpu)
 
     switch (msr) {
     case MSR_IA32_TSC:
-        val = rdtscp() + rvmcs(cpu->hvf_fd, VMCS_TSC_OFFSET);
+        val = rdtscp() + rvmcs(cpu->hvf->fd, VMCS_TSC_OFFSET);
         break;
     case MSR_IA32_APICBASE:
         val = cpu_get_apic_base(X86_CPU(cpu)->apic_state);
@@ -683,16 +683,16 @@ void simulate_rdmsr(struct CPUState *cpu)
         val = x86_cpu->ucode_rev;
         break;
     case MSR_EFER:
-        val = rvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER);
+        val = rvmcs(cpu->hvf->fd, VMCS_GUEST_IA32_EFER);
         break;
     case MSR_FSBASE:
-        val = rvmcs(cpu->hvf_fd, VMCS_GUEST_FS_BASE);
+        val = rvmcs(cpu->hvf->fd, VMCS_GUEST_FS_BASE);
         break;
     case MSR_GSBASE:
-        val = rvmcs(cpu->hvf_fd, VMCS_GUEST_GS_BASE);
+        val = rvmcs(cpu->hvf->fd, VMCS_GUEST_GS_BASE);
         break;
     case MSR_KERNELGSBASE:
-        val = rvmcs(cpu->hvf_fd, VMCS_HOST_FS_BASE);
+        val = rvmcs(cpu->hvf->fd, VMCS_HOST_FS_BASE);
         break;
     case MSR_STAR:
         abort();
@@ -780,13 +780,13 @@ void simulate_wrmsr(struct CPUState *cpu)
         cpu_set_apic_base(X86_CPU(cpu)->apic_state, data);
         break;
     case MSR_FSBASE:
-        wvmcs(cpu->hvf_fd, VMCS_GUEST_FS_BASE, data);
+        wvmcs(cpu->hvf->fd, VMCS_GUEST_FS_BASE, data);
         break;
     case MSR_GSBASE:
-        wvmcs(cpu->hvf_fd, VMCS_GUEST_GS_BASE, data);
+        wvmcs(cpu->hvf->fd, VMCS_GUEST_GS_BASE, data);
         break;
     case MSR_KERNELGSBASE:
-        wvmcs(cpu->hvf_fd, VMCS_HOST_FS_BASE, data);
+        wvmcs(cpu->hvf->fd, VMCS_HOST_FS_BASE, data);
         break;
     case MSR_STAR:
         abort();
@@ -799,9 +799,9 @@ void simulate_wrmsr(struct CPUState *cpu)
         break;
     case MSR_EFER:
         /*printf("new efer %llx\n", EFER(cpu));*/
-        wvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER, data);
+        wvmcs(cpu->hvf->fd, VMCS_GUEST_IA32_EFER, data);
         if (data & MSR_EFER_NXE) {
-            hv_vcpu_invalidate_tlb(cpu->hvf_fd);
+            hv_vcpu_invalidate_tlb(cpu->hvf->fd);
         }
         break;
     case MSR_MTRRphysBase(0):
@@ -1425,21 +1425,21 @@ void load_regs(struct CPUState *cpu)
     CPUX86State *env = &x86_cpu->env;
 
     int i = 0;
-    RRX(env, R_EAX) = rreg(cpu->hvf_fd, HV_X86_RAX);
-    RRX(env, R_EBX) = rreg(cpu->hvf_fd, HV_X86_RBX);
-    RRX(env, R_ECX) = rreg(cpu->hvf_fd, HV_X86_RCX);
-    RRX(env, R_EDX) = rreg(cpu->hvf_fd, HV_X86_RDX);
-    RRX(env, R_ESI) = rreg(cpu->hvf_fd, HV_X86_RSI);
-    RRX(env, R_EDI) = rreg(cpu->hvf_fd, HV_X86_RDI);
-    RRX(env, R_ESP) = rreg(cpu->hvf_fd, HV_X86_RSP);
-    RRX(env, R_EBP) = rreg(cpu->hvf_fd, HV_X86_RBP);
+    RRX(env, R_EAX) = rreg(cpu->hvf->fd, HV_X86_RAX);
+    RRX(env, R_EBX) = rreg(cpu->hvf->fd, HV_X86_RBX);
+    RRX(env, R_ECX) = rreg(cpu->hvf->fd, HV_X86_RCX);
+    RRX(env, R_EDX) = rreg(cpu->hvf->fd, HV_X86_RDX);
+    RRX(env, R_ESI) = rreg(cpu->hvf->fd, HV_X86_RSI);
+    RRX(env, R_EDI) = rreg(cpu->hvf->fd, HV_X86_RDI);
+    RRX(env, R_ESP) = rreg(cpu->hvf->fd, HV_X86_RSP);
+    RRX(env, R_EBP) = rreg(cpu->hvf->fd, HV_X86_RBP);
     for (i = 8; i < 16; i++) {
-        RRX(env, i) = rreg(cpu->hvf_fd, HV_X86_RAX + i);
+        RRX(env, i) = rreg(cpu->hvf->fd, HV_X86_RAX + i);
     }
 
-    env->eflags = rreg(cpu->hvf_fd, HV_X86_RFLAGS);
+    env->eflags = rreg(cpu->hvf->fd, HV_X86_RFLAGS);
     rflags_to_lflags(env);
-    env->eip = rreg(cpu->hvf_fd, HV_X86_RIP);
+    env->eip = rreg(cpu->hvf->fd, HV_X86_RIP);
 }
 
 void store_regs(struct CPUState *cpu)
@@ -1448,20 +1448,20 @@ void store_regs(struct CPUState *cpu)
     CPUX86State *env = &x86_cpu->env;
 
     int i = 0;
-    wreg(cpu->hvf_fd, HV_X86_RAX, RAX(env));
-    wreg(cpu->hvf_fd, HV_X86_RBX, RBX(env));
-    wreg(cpu->hvf_fd, HV_X86_RCX, RCX(env));
-    wreg(cpu->hvf_fd, HV_X86_RDX, RDX(env));
-    wreg(cpu->hvf_fd, HV_X86_RSI, RSI(env));
-    wreg(cpu->hvf_fd, HV_X86_RDI, RDI(env));
-    wreg(cpu->hvf_fd, HV_X86_RBP, RBP(env));
-    wreg(cpu->hvf_fd, HV_X86_RSP, RSP(env));
+    wreg(cpu->hvf->fd, HV_X86_RAX, RAX(env));
+    wreg(cpu->hvf->fd, HV_X86_RBX, RBX(env));
+    wreg(cpu->hvf->fd, HV_X86_RCX, RCX(env));
+    wreg(cpu->hvf->fd, HV_X86_RDX, RDX(env));
+    wreg(cpu->hvf->fd, HV_X86_RSI, RSI(env));
+    wreg(cpu->hvf->fd, HV_X86_RDI, RDI(env));
+    wreg(cpu->hvf->fd, HV_X86_RBP, RBP(env));
+    wreg(cpu->hvf->fd, HV_X86_RSP, RSP(env));
     for (i = 8; i < 16; i++) {
-        wreg(cpu->hvf_fd, HV_X86_RAX + i, RRX(env, i));
+        wreg(cpu->hvf->fd, HV_X86_RAX + i, RRX(env, i));
     }
 
     lflags_to_rflags(env);
-    wreg(cpu->hvf_fd, HV_X86_RFLAGS, env->eflags);
+    wreg(cpu->hvf->fd, HV_X86_RFLAGS, env->eflags);
     macvm_set_rip(cpu, env->eip);
 }
 
diff --git a/target/i386/hvf/x86_mmu.c b/target/i386/hvf/x86_mmu.c
index 78fff04684..e9ed0f5aa1 100644
--- a/target/i386/hvf/x86_mmu.c
+++ b/target/i386/hvf/x86_mmu.c
@@ -127,7 +127,7 @@ static bool test_pt_entry(struct CPUState *cpu, struct gpt_translation *pt,
         pt->err_code |= MMU_PAGE_PT;
     }
 
-    uint32_t cr0 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0);
+    uint32_t cr0 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR0);
     /* check protection */
     if (cr0 & CR0_WP) {
         if (pt->write_access && !pte_write_access(pte)) {
@@ -172,7 +172,7 @@ static bool walk_gpt(struct CPUState *cpu, target_ulong addr, int err_code,
 {
     int top_level, level;
     bool is_large = false;
-    target_ulong cr3 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR3);
+    target_ulong cr3 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR3);
     uint64_t page_mask = pae ? PAE_PTE_PAGE_MASK : LEGACY_PTE_PAGE_MASK;
     
     memset(pt, 0, sizeof(*pt));
diff --git a/target/i386/hvf/x86_task.c b/target/i386/hvf/x86_task.c
index d66dfd7669..422156128b 100644
--- a/target/i386/hvf/x86_task.c
+++ b/target/i386/hvf/x86_task.c
@@ -62,7 +62,7 @@ static void load_state_from_tss32(CPUState *cpu, struct x86_tss_segment32 *tss)
     X86CPU *x86_cpu = X86_CPU(cpu);
     CPUX86State *env = &x86_cpu->env;
 
-    wvmcs(cpu->hvf_fd, VMCS_GUEST_CR3, tss->cr3);
+    wvmcs(cpu->hvf->fd, VMCS_GUEST_CR3, tss->cr3);
 
     env->eip = tss->eip;
     env->eflags = tss->eflags | 2;
@@ -111,11 +111,11 @@ static int task_switch_32(CPUState *cpu, x68_segment_selector tss_sel, x68_segme
 
 void vmx_handle_task_switch(CPUState *cpu, x68_segment_selector tss_sel, int reason, bool gate_valid, uint8_t gate, uint64_t gate_type)
 {
-    uint64_t rip = rreg(cpu->hvf_fd, HV_X86_RIP);
+    uint64_t rip = rreg(cpu->hvf->fd, HV_X86_RIP);
     if (!gate_valid || (gate_type != VMCS_INTR_T_HWEXCEPTION &&
                         gate_type != VMCS_INTR_T_HWINTR &&
                         gate_type != VMCS_INTR_T_NMI)) {
-        int ins_len = rvmcs(cpu->hvf_fd, VMCS_EXIT_INSTRUCTION_LENGTH);
+        int ins_len = rvmcs(cpu->hvf->fd, VMCS_EXIT_INSTRUCTION_LENGTH);
         macvm_set_rip(cpu, rip + ins_len);
         return;
     }
@@ -174,12 +174,12 @@ void vmx_handle_task_switch(CPUState *cpu, x68_segment_selector tss_sel, int rea
         //ret = task_switch_16(cpu, tss_sel, old_tss_sel, old_tss_base, &next_tss_desc);
         VM_PANIC("task_switch_16");
 
-    macvm_set_cr0(cpu->hvf_fd, rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0) | CR0_TS);
+    macvm_set_cr0(cpu->hvf->fd, rvmcs(cpu->hvf->fd, VMCS_GUEST_CR0) | CR0_TS);
     x86_segment_descriptor_to_vmx(cpu, tss_sel, &next_tss_desc, &vmx_seg);
     vmx_write_segment_descriptor(cpu, &vmx_seg, R_TR);
 
     store_regs(cpu);
 
-    hv_vcpu_invalidate_tlb(cpu->hvf_fd);
-    hv_vcpu_flush(cpu->hvf_fd);
+    hv_vcpu_invalidate_tlb(cpu->hvf->fd);
+    hv_vcpu_flush(cpu->hvf->fd);
 }
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index cc381307ab..28cfee4f60 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -80,7 +80,7 @@ void hvf_put_xsave(CPUState *cpu_state)
 
     x86_cpu_xsave_all_areas(X86_CPU(cpu_state), xsave);
 
-    if (hv_vcpu_write_fpstate(cpu_state->hvf_fd, (void*)xsave, 4096)) {
+    if (hv_vcpu_write_fpstate(cpu_state->hvf->fd, (void*)xsave, 4096)) {
         abort();
     }
 }
@@ -90,19 +90,19 @@ void hvf_put_segments(CPUState *cpu_state)
     CPUX86State *env = &X86_CPU(cpu_state)->env;
     struct vmx_segment seg;
     
-    wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_LIMIT, env->idt.limit);
-    wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_BASE, env->idt.base);
+    wvmcs(cpu_state->hvf->fd, VMCS_GUEST_IDTR_LIMIT, env->idt.limit);
+    wvmcs(cpu_state->hvf->fd, VMCS_GUEST_IDTR_BASE, env->idt.base);
 
-    wvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_LIMIT, env->gdt.limit);
-    wvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_BASE, env->gdt.base);
+    wvmcs(cpu_state->hvf->fd, VMCS_GUEST_GDTR_LIMIT, env->gdt.limit);
+    wvmcs(cpu_state->hvf->fd, VMCS_GUEST_GDTR_BASE, env->gdt.base);
 
-    /* wvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR2, env->cr[2]); */
-    wvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR3, env->cr[3]);
+    /* wvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR2, env->cr[2]); */
+    wvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR3, env->cr[3]);
     vmx_update_tpr(cpu_state);
-    wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IA32_EFER, env->efer);
+    wvmcs(cpu_state->hvf->fd, VMCS_GUEST_IA32_EFER, env->efer);
 
-    macvm_set_cr4(cpu_state->hvf_fd, env->cr[4]);
-    macvm_set_cr0(cpu_state->hvf_fd, env->cr[0]);
+    macvm_set_cr4(cpu_state->hvf->fd, env->cr[4]);
+    macvm_set_cr0(cpu_state->hvf->fd, env->cr[0]);
 
     hvf_set_segment(cpu_state, &seg, &env->segs[R_CS], false);
     vmx_write_segment_descriptor(cpu_state, &seg, R_CS);
@@ -128,31 +128,31 @@ void hvf_put_segments(CPUState *cpu_state)
     hvf_set_segment(cpu_state, &seg, &env->ldt, false);
     vmx_write_segment_descriptor(cpu_state, &seg, R_LDTR);
     
-    hv_vcpu_flush(cpu_state->hvf_fd);
+    hv_vcpu_flush(cpu_state->hvf->fd);
 }
     
 void hvf_put_msrs(CPUState *cpu_state)
 {
     CPUX86State *env = &X86_CPU(cpu_state)->env;
 
-    hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_CS,
+    hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_CS,
                       env->sysenter_cs);
-    hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_ESP,
+    hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_ESP,
                       env->sysenter_esp);
-    hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_EIP,
+    hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_EIP,
                       env->sysenter_eip);
 
-    hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_STAR, env->star);
+    hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_STAR, env->star);
 
 #ifdef TARGET_X86_64
-    hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_CSTAR, env->cstar);
-    hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_KERNELGSBASE, env->kernelgsbase);
-    hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_FMASK, env->fmask);
-    hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_LSTAR, env->lstar);
+    hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_CSTAR, env->cstar);
+    hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_KERNELGSBASE, env->kernelgsbase);
+    hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_FMASK, env->fmask);
+    hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_LSTAR, env->lstar);
 #endif
 
-    hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_GSBASE, env->segs[R_GS].base);
-    hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_FSBASE, env->segs[R_FS].base);
+    hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_GSBASE, env->segs[R_GS].base);
+    hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_FSBASE, env->segs[R_FS].base);
 }
 
 
@@ -162,7 +162,7 @@ void hvf_get_xsave(CPUState *cpu_state)
 
     xsave = X86_CPU(cpu_state)->env.xsave_buf;
 
-    if (hv_vcpu_read_fpstate(cpu_state->hvf_fd, (void*)xsave, 4096)) {
+    if (hv_vcpu_read_fpstate(cpu_state->hvf->fd, (void*)xsave, 4096)) {
         abort();
     }
 
@@ -201,17 +201,17 @@ void hvf_get_segments(CPUState *cpu_state)
     vmx_read_segment_descriptor(cpu_state, &seg, R_LDTR);
     hvf_get_segment(&env->ldt, &seg);
 
-    env->idt.limit = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_LIMIT);
-    env->idt.base = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_BASE);
-    env->gdt.limit = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_LIMIT);
-    env->gdt.base = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_BASE);
+    env->idt.limit = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_IDTR_LIMIT);
+    env->idt.base = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_IDTR_BASE);
+    env->gdt.limit = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_GDTR_LIMIT);
+    env->gdt.base = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_GDTR_BASE);
 
-    env->cr[0] = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR0);
+    env->cr[0] = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR0);
     env->cr[2] = 0;
-    env->cr[3] = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR3);
-    env->cr[4] = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR4);
+    env->cr[3] = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR3);
+    env->cr[4] = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR4);
     
-    env->efer = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_IA32_EFER);
+    env->efer = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_IA32_EFER);
 }
 
 void hvf_get_msrs(CPUState *cpu_state)
@@ -219,27 +219,27 @@ void hvf_get_msrs(CPUState *cpu_state)
     CPUX86State *env = &X86_CPU(cpu_state)->env;
     uint64_t tmp;
     
-    hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_CS, &tmp);
+    hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_CS, &tmp);
     env->sysenter_cs = tmp;
     
-    hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_ESP, &tmp);
+    hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_ESP, &tmp);
     env->sysenter_esp = tmp;
 
-    hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_EIP, &tmp);
+    hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_EIP, &tmp);
     env->sysenter_eip = tmp;
 
-    hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_STAR, &env->star);
+    hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_STAR, &env->star);
 
 #ifdef TARGET_X86_64
-    hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_CSTAR, &env->cstar);
-    hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_KERNELGSBASE, &env->kernelgsbase);
-    hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_FMASK, &env->fmask);
-    hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_LSTAR, &env->lstar);
+    hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_CSTAR, &env->cstar);
+    hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_KERNELGSBASE, &env->kernelgsbase);
+    hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_FMASK, &env->fmask);
+    hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_LSTAR, &env->lstar);
 #endif
 
-    hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_APICBASE, &tmp);
+    hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_IA32_APICBASE, &tmp);
     
-    env->tsc = rdtscp() + rvmcs(cpu_state->hvf_fd, VMCS_TSC_OFFSET);
+    env->tsc = rdtscp() + rvmcs(cpu_state->hvf->fd, VMCS_TSC_OFFSET);
 }
 
 int hvf_put_registers(CPUState *cpu_state)
@@ -247,26 +247,26 @@ int hvf_put_registers(CPUState *cpu_state)
     X86CPU *x86cpu = X86_CPU(cpu_state);
     CPUX86State *env = &x86cpu->env;
 
-    wreg(cpu_state->hvf_fd, HV_X86_RAX, env->regs[R_EAX]);
-    wreg(cpu_state->hvf_fd, HV_X86_RBX, env->regs[R_EBX]);
-    wreg(cpu_state->hvf_fd, HV_X86_RCX, env->regs[R_ECX]);
-    wreg(cpu_state->hvf_fd, HV_X86_RDX, env->regs[R_EDX]);
-    wreg(cpu_state->hvf_fd, HV_X86_RBP, env->regs[R_EBP]);
-    wreg(cpu_state->hvf_fd, HV_X86_RSP, env->regs[R_ESP]);
-    wreg(cpu_state->hvf_fd, HV_X86_RSI, env->regs[R_ESI]);
-    wreg(cpu_state->hvf_fd, HV_X86_RDI, env->regs[R_EDI]);
-    wreg(cpu_state->hvf_fd, HV_X86_R8, env->regs[8]);
-    wreg(cpu_state->hvf_fd, HV_X86_R9, env->regs[9]);
-    wreg(cpu_state->hvf_fd, HV_X86_R10, env->regs[10]);
-    wreg(cpu_state->hvf_fd, HV_X86_R11, env->regs[11]);
-    wreg(cpu_state->hvf_fd, HV_X86_R12, env->regs[12]);
-    wreg(cpu_state->hvf_fd, HV_X86_R13, env->regs[13]);
-    wreg(cpu_state->hvf_fd, HV_X86_R14, env->regs[14]);
-    wreg(cpu_state->hvf_fd, HV_X86_R15, env->regs[15]);
-    wreg(cpu_state->hvf_fd, HV_X86_RFLAGS, env->eflags);
-    wreg(cpu_state->hvf_fd, HV_X86_RIP, env->eip);
+    wreg(cpu_state->hvf->fd, HV_X86_RAX, env->regs[R_EAX]);
+    wreg(cpu_state->hvf->fd, HV_X86_RBX, env->regs[R_EBX]);
+    wreg(cpu_state->hvf->fd, HV_X86_RCX, env->regs[R_ECX]);
+    wreg(cpu_state->hvf->fd, HV_X86_RDX, env->regs[R_EDX]);
+    wreg(cpu_state->hvf->fd, HV_X86_RBP, env->regs[R_EBP]);
+    wreg(cpu_state->hvf->fd, HV_X86_RSP, env->regs[R_ESP]);
+    wreg(cpu_state->hvf->fd, HV_X86_RSI, env->regs[R_ESI]);
+    wreg(cpu_state->hvf->fd, HV_X86_RDI, env->regs[R_EDI]);
+    wreg(cpu_state->hvf->fd, HV_X86_R8, env->regs[8]);
+    wreg(cpu_state->hvf->fd, HV_X86_R9, env->regs[9]);
+    wreg(cpu_state->hvf->fd, HV_X86_R10, env->regs[10]);
+    wreg(cpu_state->hvf->fd, HV_X86_R11, env->regs[11]);
+    wreg(cpu_state->hvf->fd, HV_X86_R12, env->regs[12]);
+    wreg(cpu_state->hvf->fd, HV_X86_R13, env->regs[13]);
+    wreg(cpu_state->hvf->fd, HV_X86_R14, env->regs[14]);
+    wreg(cpu_state->hvf->fd, HV_X86_R15, env->regs[15]);
+    wreg(cpu_state->hvf->fd, HV_X86_RFLAGS, env->eflags);
+    wreg(cpu_state->hvf->fd, HV_X86_RIP, env->eip);
    
-    wreg(cpu_state->hvf_fd, HV_X86_XCR0, env->xcr0);
+    wreg(cpu_state->hvf->fd, HV_X86_XCR0, env->xcr0);
     
     hvf_put_xsave(cpu_state);
     
@@ -274,14 +274,14 @@ int hvf_put_registers(CPUState *cpu_state)
     
     hvf_put_msrs(cpu_state);
     
-    wreg(cpu_state->hvf_fd, HV_X86_DR0, env->dr[0]);
-    wreg(cpu_state->hvf_fd, HV_X86_DR1, env->dr[1]);
-    wreg(cpu_state->hvf_fd, HV_X86_DR2, env->dr[2]);
-    wreg(cpu_state->hvf_fd, HV_X86_DR3, env->dr[3]);
-    wreg(cpu_state->hvf_fd, HV_X86_DR4, env->dr[4]);
-    wreg(cpu_state->hvf_fd, HV_X86_DR5, env->dr[5]);
-    wreg(cpu_state->hvf_fd, HV_X86_DR6, env->dr[6]);
-    wreg(cpu_state->hvf_fd, HV_X86_DR7, env->dr[7]);
+    wreg(cpu_state->hvf->fd, HV_X86_DR0, env->dr[0]);
+    wreg(cpu_state->hvf->fd, HV_X86_DR1, env->dr[1]);
+    wreg(cpu_state->hvf->fd, HV_X86_DR2, env->dr[2]);
+    wreg(cpu_state->hvf->fd, HV_X86_DR3, env->dr[3]);
+    wreg(cpu_state->hvf->fd, HV_X86_DR4, env->dr[4]);
+    wreg(cpu_state->hvf->fd, HV_X86_DR5, env->dr[5]);
+    wreg(cpu_state->hvf->fd, HV_X86_DR6, env->dr[6]);
+    wreg(cpu_state->hvf->fd, HV_X86_DR7, env->dr[7]);
     
     return 0;
 }
@@ -291,40 +291,40 @@ int hvf_get_registers(CPUState *cpu_state)
     X86CPU *x86cpu = X86_CPU(cpu_state);
     CPUX86State *env = &x86cpu->env;
 
-    env->regs[R_EAX] = rreg(cpu_state->hvf_fd, HV_X86_RAX);
-    env->regs[R_EBX] = rreg(cpu_state->hvf_fd, HV_X86_RBX);
-    env->regs[R_ECX] = rreg(cpu_state->hvf_fd, HV_X86_RCX);
-    env->regs[R_EDX] = rreg(cpu_state->hvf_fd, HV_X86_RDX);
-    env->regs[R_EBP] = rreg(cpu_state->hvf_fd, HV_X86_RBP);
-    env->regs[R_ESP] = rreg(cpu_state->hvf_fd, HV_X86_RSP);
-    env->regs[R_ESI] = rreg(cpu_state->hvf_fd, HV_X86_RSI);
-    env->regs[R_EDI] = rreg(cpu_state->hvf_fd, HV_X86_RDI);
-    env->regs[8] = rreg(cpu_state->hvf_fd, HV_X86_R8);
-    env->regs[9] = rreg(cpu_state->hvf_fd, HV_X86_R9);
-    env->regs[10] = rreg(cpu_state->hvf_fd, HV_X86_R10);
-    env->regs[11] = rreg(cpu_state->hvf_fd, HV_X86_R11);
-    env->regs[12] = rreg(cpu_state->hvf_fd, HV_X86_R12);
-    env->regs[13] = rreg(cpu_state->hvf_fd, HV_X86_R13);
-    env->regs[14] = rreg(cpu_state->hvf_fd, HV_X86_R14);
-    env->regs[15] = rreg(cpu_state->hvf_fd, HV_X86_R15);
+    env->regs[R_EAX] = rreg(cpu_state->hvf->fd, HV_X86_RAX);
+    env->regs[R_EBX] = rreg(cpu_state->hvf->fd, HV_X86_RBX);
+    env->regs[R_ECX] = rreg(cpu_state->hvf->fd, HV_X86_RCX);
+    env->regs[R_EDX] = rreg(cpu_state->hvf->fd, HV_X86_RDX);
+    env->regs[R_EBP] = rreg(cpu_state->hvf->fd, HV_X86_RBP);
+    env->regs[R_ESP] = rreg(cpu_state->hvf->fd, HV_X86_RSP);
+    env->regs[R_ESI] = rreg(cpu_state->hvf->fd, HV_X86_RSI);
+    env->regs[R_EDI] = rreg(cpu_state->hvf->fd, HV_X86_RDI);
+    env->regs[8] = rreg(cpu_state->hvf->fd, HV_X86_R8);
+    env->regs[9] = rreg(cpu_state->hvf->fd, HV_X86_R9);
+    env->regs[10] = rreg(cpu_state->hvf->fd, HV_X86_R10);
+    env->regs[11] = rreg(cpu_state->hvf->fd, HV_X86_R11);
+    env->regs[12] = rreg(cpu_state->hvf->fd, HV_X86_R12);
+    env->regs[13] = rreg(cpu_state->hvf->fd, HV_X86_R13);
+    env->regs[14] = rreg(cpu_state->hvf->fd, HV_X86_R14);
+    env->regs[15] = rreg(cpu_state->hvf->fd, HV_X86_R15);
     
-    env->eflags = rreg(cpu_state->hvf_fd, HV_X86_RFLAGS);
-    env->eip = rreg(cpu_state->hvf_fd, HV_X86_RIP);
+    env->eflags = rreg(cpu_state->hvf->fd, HV_X86_RFLAGS);
+    env->eip = rreg(cpu_state->hvf->fd, HV_X86_RIP);
    
     hvf_get_xsave(cpu_state);
-    env->xcr0 = rreg(cpu_state->hvf_fd, HV_X86_XCR0);
+    env->xcr0 = rreg(cpu_state->hvf->fd, HV_X86_XCR0);
     
     hvf_get_segments(cpu_state);
     hvf_get_msrs(cpu_state);
     
-    env->dr[0] = rreg(cpu_state->hvf_fd, HV_X86_DR0);
-    env->dr[1] = rreg(cpu_state->hvf_fd, HV_X86_DR1);
-    env->dr[2] = rreg(cpu_state->hvf_fd, HV_X86_DR2);
-    env->dr[3] = rreg(cpu_state->hvf_fd, HV_X86_DR3);
-    env->dr[4] = rreg(cpu_state->hvf_fd, HV_X86_DR4);
-    env->dr[5] = rreg(cpu_state->hvf_fd, HV_X86_DR5);
-    env->dr[6] = rreg(cpu_state->hvf_fd, HV_X86_DR6);
-    env->dr[7] = rreg(cpu_state->hvf_fd, HV_X86_DR7);
+    env->dr[0] = rreg(cpu_state->hvf->fd, HV_X86_DR0);
+    env->dr[1] = rreg(cpu_state->hvf->fd, HV_X86_DR1);
+    env->dr[2] = rreg(cpu_state->hvf->fd, HV_X86_DR2);
+    env->dr[3] = rreg(cpu_state->hvf->fd, HV_X86_DR3);
+    env->dr[4] = rreg(cpu_state->hvf->fd, HV_X86_DR4);
+    env->dr[5] = rreg(cpu_state->hvf->fd, HV_X86_DR5);
+    env->dr[6] = rreg(cpu_state->hvf->fd, HV_X86_DR6);
+    env->dr[7] = rreg(cpu_state->hvf->fd, HV_X86_DR7);
     
     x86_update_hflags(env);
     return 0;
@@ -333,16 +333,16 @@ int hvf_get_registers(CPUState *cpu_state)
 static void vmx_set_int_window_exiting(CPUState *cpu)
 {
      uint64_t val;
-     val = rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS);
-     wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val |
+     val = rvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS);
+     wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS, val |
              VMCS_PRI_PROC_BASED_CTLS_INT_WINDOW_EXITING);
 }
 
 void vmx_clear_int_window_exiting(CPUState *cpu)
 {
      uint64_t val;
-     val = rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS);
-     wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val &
+     val = rvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS);
+     wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS, val &
              ~VMCS_PRI_PROC_BASED_CTLS_INT_WINDOW_EXITING);
 }
 
@@ -378,7 +378,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
     uint64_t info = 0;
     if (have_event) {
         info = vector | intr_type | VMCS_INTR_VALID;
-        uint64_t reason = rvmcs(cpu_state->hvf_fd, VMCS_EXIT_REASON);
+        uint64_t reason = rvmcs(cpu_state->hvf->fd, VMCS_EXIT_REASON);
         if (env->nmi_injected && reason != EXIT_REASON_TASK_SWITCH) {
             vmx_clear_nmi_blocking(cpu_state);
         }
@@ -387,17 +387,17 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
             info &= ~(1 << 12); /* clear undefined bit */
             if (intr_type == VMCS_INTR_T_SWINTR ||
                 intr_type == VMCS_INTR_T_SWEXCEPTION) {
-                wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INST_LENGTH, env->ins_len);
+                wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_INST_LENGTH, env->ins_len);
             }
             
             if (env->has_error_code) {
-                wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_EXCEPTION_ERROR,
+                wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_EXCEPTION_ERROR,
                       env->error_code);
                 /* Indicate that VMCS_ENTRY_EXCEPTION_ERROR is valid */
                 info |= VMCS_INTR_DEL_ERRCODE;
             }
             /*printf("reinject  %lx err %d\n", info, err);*/
-            wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, info);
+            wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_INTR_INFO, info);
         };
     }
 
@@ -405,7 +405,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
         if (!(env->hflags2 & HF2_NMI_MASK) && !(info & VMCS_INTR_VALID)) {
             cpu_state->interrupt_request &= ~CPU_INTERRUPT_NMI;
             info = VMCS_INTR_VALID | VMCS_INTR_T_NMI | EXCP02_NMI;
-            wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, info);
+            wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_INTR_INFO, info);
         } else {
             vmx_set_nmi_window_exiting(cpu_state);
         }
@@ -417,7 +417,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
         int line = cpu_get_pic_interrupt(&x86cpu->env);
         cpu_state->interrupt_request &= ~CPU_INTERRUPT_HARD;
         if (line >= 0) {
-            wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, line |
+            wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_INTR_INFO, line |
                   VMCS_INTR_VALID | VMCS_INTR_T_HWINTR);
         }
     }
@@ -433,7 +433,7 @@ int hvf_process_events(CPUState *cpu_state)
     X86CPU *cpu = X86_CPU(cpu_state);
     CPUX86State *env = &cpu->env;
 
-    env->eflags = rreg(cpu_state->hvf_fd, HV_X86_RFLAGS);
+    env->eflags = rreg(cpu_state->hvf->fd, HV_X86_RFLAGS);
 
     if (cpu_state->interrupt_request & CPU_INTERRUPT_INIT) {
         cpu_synchronize_state(cpu_state);
-- 
2.30.1 (Apple Git-130)




* [PATCH v8 12/19] hvf: Simplify post reset/init/loadvm hooks
  2021-05-19 20:22 [PATCH v8 00/19] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (10 preceding siblings ...)
  2021-05-19 20:22 ` [PATCH v8 11/19] hvf: Introduce hvf vcpu struct Alexander Graf
@ 2021-05-19 20:22 ` Alexander Graf
  2021-05-27 10:20   ` Sergio Lopez
  2021-05-19 20:22 ` [PATCH v8 13/19] hvf: Add Apple Silicon support Alexander Graf
                   ` (8 subsequent siblings)
  20 siblings, 1 reply; 66+ messages in thread
From: Alexander Graf @ 2021-05-19 20:22 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Peter Collingbourne

The hooks we have that call us after reset, init and loadvm really all
just want to say "the authoritative copy of all register state is in the
QEMU vcpu struct, please push it".

We already have a working push mechanism for that, cpu->vcpu_dirty, so
we can just reuse it for all of the above, syncing state properly the
next time we actually execute the vCPU.

This fixes PSCI resets on ARM, as they modify CPU state even after the
post-init call has completed, but before we execute the vCPU again.

To also make the scheme work for x86, we have to make sure we don't
move stale eflags into our env when the vcpu state is dirty.
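
A minimal sketch of the consolidated flow, assuming the vcpu_dirty check
that the hvf exec loop already performs before entering the guest:

    static void do_hvf_cpu_synchronize_set_dirty(CPUState *cpu,
                                                 run_on_cpu_data arg)
    {
        /* QEMU state is the reference, push it to HVF now and on next entry */
        cpu->vcpu_dirty = true;
    }

    /* later, on the next guest entry: */
    if (cpu->vcpu_dirty) {
        hvf_put_registers(cpu);
        cpu->vcpu_dirty = false;
    }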

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
Tested-by: Roman Bolshakov <r.bolshakov@yadro.com>
---
 accel/hvf/hvf-accel-ops.c | 27 +++++++--------------------
 target/i386/hvf/x86hvf.c  |  5 ++++-
 2 files changed, 11 insertions(+), 21 deletions(-)

diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index ded918c443..d1691be989 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -205,39 +205,26 @@ static void hvf_cpu_synchronize_state(CPUState *cpu)
     }
 }
 
-static void do_hvf_cpu_synchronize_post_reset(CPUState *cpu,
-                                              run_on_cpu_data arg)
+static void do_hvf_cpu_synchronize_set_dirty(CPUState *cpu,
+                                             run_on_cpu_data arg)
 {
-    hvf_put_registers(cpu);
-    cpu->vcpu_dirty = false;
+    /* QEMU state is the reference, push it to HVF now and on next entry */
+    cpu->vcpu_dirty = true;
 }
 
 static void hvf_cpu_synchronize_post_reset(CPUState *cpu)
 {
-    run_on_cpu(cpu, do_hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NULL);
-}
-
-static void do_hvf_cpu_synchronize_post_init(CPUState *cpu,
-                                             run_on_cpu_data arg)
-{
-    hvf_put_registers(cpu);
-    cpu->vcpu_dirty = false;
+    run_on_cpu(cpu, do_hvf_cpu_synchronize_set_dirty, RUN_ON_CPU_NULL);
 }
 
 static void hvf_cpu_synchronize_post_init(CPUState *cpu)
 {
-    run_on_cpu(cpu, do_hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
-}
-
-static void do_hvf_cpu_synchronize_pre_loadvm(CPUState *cpu,
-                                              run_on_cpu_data arg)
-{
-    cpu->vcpu_dirty = true;
+    run_on_cpu(cpu, do_hvf_cpu_synchronize_set_dirty, RUN_ON_CPU_NULL);
 }
 
 static void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu)
 {
-    run_on_cpu(cpu, do_hvf_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL);
+    run_on_cpu(cpu, do_hvf_cpu_synchronize_set_dirty, RUN_ON_CPU_NULL);
 }
 
 static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on)
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index 28cfee4f60..2ced2c2478 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -433,7 +433,10 @@ int hvf_process_events(CPUState *cpu_state)
     X86CPU *cpu = X86_CPU(cpu_state);
     CPUX86State *env = &cpu->env;
 
-    env->eflags = rreg(cpu_state->hvf->fd, HV_X86_RFLAGS);
+    if (!cpu_state->vcpu_dirty) {
+        /* light weight sync for CPU_INTERRUPT_HARD and IF_MASK */
+        env->eflags = rreg(cpu_state->hvf->fd, HV_X86_RFLAGS);
+    }
 
     if (cpu_state->interrupt_request & CPU_INTERRUPT_INIT) {
         cpu_synchronize_state(cpu_state);
-- 
2.30.1 (Apple Git-130)




* [PATCH v8 13/19] hvf: Add Apple Silicon support
  2021-05-19 20:22 [PATCH v8 00/19] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (11 preceding siblings ...)
  2021-05-19 20:22 ` [PATCH v8 12/19] hvf: Simplify post reset/init/loadvm hooks Alexander Graf
@ 2021-05-19 20:22 ` Alexander Graf
  2021-05-27 10:55   ` Sergio Lopez
  2021-06-15 10:21   ` Peter Maydell
  2021-05-19 20:22 ` [PATCH v8 14/19] arm/hvf: Add a WFI handler Alexander Graf
                   ` (7 subsequent siblings)
  20 siblings, 2 replies; 66+ messages in thread
From: Alexander Graf @ 2021-05-19 20:22 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Peter Collingbourne

With Apple Silicon available to the masses, it's a good time to add support
for driving its virtualization extensions from QEMU.

This patch adds all the necessary architecture-specific code to get
basic VMs working. It's still pretty raw, but definitely functional.

Known limitations:

  - Vtimer acknowledgement is hacky
  - More sysregs should be implemented, faulting on access to invalid ones
  - WFI handling is missing and needs to be married with the vtimer
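
A minimal sketch of the core plumbing, taken from the hunks below: the
per-vCPU state grows aarch64 fields, and vcpu creation is switched on
the host architecture since the Apple Silicon API differs from x86:

    struct hvf_vcpu_state {
        uint64_t fd;        /* wide enough for hv_vcpuid_t and hv_vcpu_t */
        void *exit;         /* aarch64: hv_vcpu_exit_t from hv_vcpu_create() */
        bool vtimer_masked;
    };

    /* hvf_init_vcpu() */
    #ifdef __aarch64__
        r = hv_vcpu_create(&cpu->hvf->fd,
                           (hv_vcpu_exit_t **)&cpu->hvf->exit, NULL);
    #else
        r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf->fd, HV_VCPU_DEFAULT);
    #endif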

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>

---

v1 -> v2:

  - Merge vcpu kick function patch
  - Implement WFI handling (allows vCPUs to sleep)
  - Synchronize system registers (fixes OVMF crashes and reboot)
  - Don't always call cpu_synchronize_state()
  - Use more fine grained iothread locking
  - Populate aa64mmfr0 from hardware

v2 -> v3:

  - Advance PC on SMC
  - Use cp list interface for sysreg syncs
  - Do not set current_cpu
  - Fix sysreg isread mask
  - Move sysreg handling to functions
  - Remove WFI logic again
  - Revert to global iothread locking
  - Use Hypervisor.h on arm, hv.h does not contain aarch64 definitions

v3 -> v4:

  - No longer include Hypervisor.h

v5 -> v6:

  - Swap sysreg definition order. This way we're in line with asm outputs.

v6 -> v7:

  - Remove osdep.h include from hvf_int.h
  - Synchronize SIMD registers as well
  - Prepend 0x for hex values
  - Convert DPRINTF to trace points
  - Use main event loop (fixes gdbstub issues)
  - Remove PSCI support, inject UDEF on HVC/SMC
  - Change vtimer logic to look at ctl.istatus for vtimer mask sync
  - Add kick callback again (fixes remote CPU notification)

v7 -> v8:

  - Fix checkpatch errors
---
 MAINTAINERS                 |   5 +
 accel/hvf/hvf-accel-ops.c   |  14 +
 include/sysemu/hvf_int.h    |   9 +-
 meson.build                 |   1 +
 target/arm/hvf/hvf.c        | 703 ++++++++++++++++++++++++++++++++++++
 target/arm/hvf/trace-events |  10 +
 6 files changed, 741 insertions(+), 1 deletion(-)
 create mode 100644 target/arm/hvf/hvf.c
 create mode 100644 target/arm/hvf/trace-events

diff --git a/MAINTAINERS b/MAINTAINERS
index 262e96714b..f3b4fdcf60 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -428,6 +428,11 @@ F: accel/accel-*.c
 F: accel/Makefile.objs
 F: accel/stubs/Makefile.objs
 
+Apple Silicon HVF CPUs
+M: Alexander Graf <agraf@csgraf.de>
+S: Maintained
+F: target/arm/hvf/
+
 X86 HVF CPUs
 M: Cameron Esfahani <dirty@apple.com>
 M: Roman Bolshakov <r.bolshakov@yadro.com>
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index d1691be989..48e402ef57 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -60,6 +60,10 @@
 
 HVFState *hvf_state;
 
+#ifdef __aarch64__
+#define HV_VM_DEFAULT NULL
+#endif
+
 /* Memory slots */
 
 hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t size)
@@ -375,7 +379,11 @@ static int hvf_init_vcpu(CPUState *cpu)
     pthread_sigmask(SIG_BLOCK, NULL, &set);
     sigdelset(&set, SIG_IPI);
 
+#ifdef __aarch64__
+    r = hv_vcpu_create(&cpu->hvf->fd, (hv_vcpu_exit_t **)&cpu->hvf->exit, NULL);
+#else
     r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf->fd, HV_VCPU_DEFAULT);
+#endif
     cpu->vcpu_dirty = 1;
     assert_hvf_ok(r);
 
@@ -446,11 +454,17 @@ static void hvf_start_vcpu_thread(CPUState *cpu)
                        cpu, QEMU_THREAD_JOINABLE);
 }
 
+__attribute__((weak)) void hvf_kick_vcpu_thread(CPUState *cpu)
+{
+    cpus_kick_thread(cpu);
+}
+
 static void hvf_accel_ops_class_init(ObjectClass *oc, void *data)
 {
     AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
 
     ops->create_vcpu_thread = hvf_start_vcpu_thread;
+    ops->kick_vcpu_thread = hvf_kick_vcpu_thread;
 
     ops->synchronize_post_reset = hvf_cpu_synchronize_post_reset;
     ops->synchronize_post_init = hvf_cpu_synchronize_post_init;
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
index 8b66a4e7d0..e52d67ed5c 100644
--- a/include/sysemu/hvf_int.h
+++ b/include/sysemu/hvf_int.h
@@ -11,7 +11,11 @@
 #ifndef HVF_INT_H
 #define HVF_INT_H
 
+#ifdef __aarch64__
+#include <Hypervisor/Hypervisor.h>
+#else
 #include <Hypervisor/hv.h>
+#endif
 
 /* hvf_slot flags */
 #define HVF_SLOT_LOG (1 << 0)
@@ -44,7 +48,9 @@ struct HVFState {
 extern HVFState *hvf_state;
 
 struct hvf_vcpu_state {
-    int fd;
+    uint64_t fd;
+    void *exit;
+    bool vtimer_masked;
 };
 
 void assert_hvf_ok(hv_return_t ret);
@@ -54,5 +60,6 @@ int hvf_vcpu_exec(CPUState *);
 hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
 int hvf_put_registers(CPUState *);
 int hvf_get_registers(CPUState *);
+void hvf_kick_vcpu_thread(CPUState *cpu);
 
 #endif
diff --git a/meson.build b/meson.build
index a58a75d056..698f4e9356 100644
--- a/meson.build
+++ b/meson.build
@@ -1856,6 +1856,7 @@ if have_system or have_user
     'accel/tcg',
     'hw/core',
     'target/arm',
+    'target/arm/hvf',
     'target/hppa',
     'target/i386',
     'target/i386/kvm',
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
new file mode 100644
index 0000000000..3934c05979
--- /dev/null
+++ b/target/arm/hvf/hvf.c
@@ -0,0 +1,703 @@
+/*
+ * QEMU Hypervisor.framework support for Apple Silicon
+ *
+ * Copyright 2020 Alexander Graf <agraf@csgraf.de>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "qemu-common.h"
+#include "qemu/error-report.h"
+
+#include "sysemu/runstate.h"
+#include "sysemu/hvf.h"
+#include "sysemu/hvf_int.h"
+#include "sysemu/hw_accel.h"
+
+#include "exec/address-spaces.h"
+#include "hw/irq.h"
+#include "qemu/main-loop.h"
+#include "sysemu/cpus.h"
+#include "target/arm/cpu.h"
+#include "target/arm/internals.h"
+#include "trace/trace-target_arm_hvf.h"
+
+#define HVF_SYSREG(crn, crm, op0, op1, op2) \
+        ENCODE_AA64_CP_REG(CP_REG_ARM64_SYSREG_CP, crn, crm, op0, op1, op2)
+#define PL1_WRITE_MASK 0x4
+
+#define SYSREG(op0, op1, crn, crm, op2) \
+    ((op0 << 20) | (op2 << 17) | (op1 << 14) | (crn << 10) | (crm << 1))
+#define SYSREG_MASK           SYSREG(0x3, 0x7, 0xf, 0xf, 0x7)
+#define SYSREG_CNTPCT_EL0     SYSREG(3, 3, 14, 0, 1)
+#define SYSREG_PMCCNTR_EL0    SYSREG(3, 3, 9, 13, 0)
+
+#define WFX_IS_WFE (1 << 0)
+
+#define TMR_CTL_ENABLE  (1 << 0)
+#define TMR_CTL_IMASK   (1 << 1)
+#define TMR_CTL_ISTATUS (1 << 2)
+
+struct hvf_reg_match {
+    int reg;
+    uint64_t offset;
+};
+
+static const struct hvf_reg_match hvf_reg_match[] = {
+    { HV_REG_X0,   offsetof(CPUARMState, xregs[0]) },
+    { HV_REG_X1,   offsetof(CPUARMState, xregs[1]) },
+    { HV_REG_X2,   offsetof(CPUARMState, xregs[2]) },
+    { HV_REG_X3,   offsetof(CPUARMState, xregs[3]) },
+    { HV_REG_X4,   offsetof(CPUARMState, xregs[4]) },
+    { HV_REG_X5,   offsetof(CPUARMState, xregs[5]) },
+    { HV_REG_X6,   offsetof(CPUARMState, xregs[6]) },
+    { HV_REG_X7,   offsetof(CPUARMState, xregs[7]) },
+    { HV_REG_X8,   offsetof(CPUARMState, xregs[8]) },
+    { HV_REG_X9,   offsetof(CPUARMState, xregs[9]) },
+    { HV_REG_X10,  offsetof(CPUARMState, xregs[10]) },
+    { HV_REG_X11,  offsetof(CPUARMState, xregs[11]) },
+    { HV_REG_X12,  offsetof(CPUARMState, xregs[12]) },
+    { HV_REG_X13,  offsetof(CPUARMState, xregs[13]) },
+    { HV_REG_X14,  offsetof(CPUARMState, xregs[14]) },
+    { HV_REG_X15,  offsetof(CPUARMState, xregs[15]) },
+    { HV_REG_X16,  offsetof(CPUARMState, xregs[16]) },
+    { HV_REG_X17,  offsetof(CPUARMState, xregs[17]) },
+    { HV_REG_X18,  offsetof(CPUARMState, xregs[18]) },
+    { HV_REG_X19,  offsetof(CPUARMState, xregs[19]) },
+    { HV_REG_X20,  offsetof(CPUARMState, xregs[20]) },
+    { HV_REG_X21,  offsetof(CPUARMState, xregs[21]) },
+    { HV_REG_X22,  offsetof(CPUARMState, xregs[22]) },
+    { HV_REG_X23,  offsetof(CPUARMState, xregs[23]) },
+    { HV_REG_X24,  offsetof(CPUARMState, xregs[24]) },
+    { HV_REG_X25,  offsetof(CPUARMState, xregs[25]) },
+    { HV_REG_X26,  offsetof(CPUARMState, xregs[26]) },
+    { HV_REG_X27,  offsetof(CPUARMState, xregs[27]) },
+    { HV_REG_X28,  offsetof(CPUARMState, xregs[28]) },
+    { HV_REG_X29,  offsetof(CPUARMState, xregs[29]) },
+    { HV_REG_X30,  offsetof(CPUARMState, xregs[30]) },
+    { HV_REG_PC,   offsetof(CPUARMState, pc) },
+};
+
+static const struct hvf_reg_match hvf_fpreg_match[] = {
+    { HV_SIMD_FP_REG_Q0,  offsetof(CPUARMState, vfp.zregs[0]) },
+    { HV_SIMD_FP_REG_Q1,  offsetof(CPUARMState, vfp.zregs[1]) },
+    { HV_SIMD_FP_REG_Q2,  offsetof(CPUARMState, vfp.zregs[2]) },
+    { HV_SIMD_FP_REG_Q3,  offsetof(CPUARMState, vfp.zregs[3]) },
+    { HV_SIMD_FP_REG_Q4,  offsetof(CPUARMState, vfp.zregs[4]) },
+    { HV_SIMD_FP_REG_Q5,  offsetof(CPUARMState, vfp.zregs[5]) },
+    { HV_SIMD_FP_REG_Q6,  offsetof(CPUARMState, vfp.zregs[6]) },
+    { HV_SIMD_FP_REG_Q7,  offsetof(CPUARMState, vfp.zregs[7]) },
+    { HV_SIMD_FP_REG_Q8,  offsetof(CPUARMState, vfp.zregs[8]) },
+    { HV_SIMD_FP_REG_Q9,  offsetof(CPUARMState, vfp.zregs[9]) },
+    { HV_SIMD_FP_REG_Q10, offsetof(CPUARMState, vfp.zregs[10]) },
+    { HV_SIMD_FP_REG_Q11, offsetof(CPUARMState, vfp.zregs[11]) },
+    { HV_SIMD_FP_REG_Q12, offsetof(CPUARMState, vfp.zregs[12]) },
+    { HV_SIMD_FP_REG_Q13, offsetof(CPUARMState, vfp.zregs[13]) },
+    { HV_SIMD_FP_REG_Q14, offsetof(CPUARMState, vfp.zregs[14]) },
+    { HV_SIMD_FP_REG_Q15, offsetof(CPUARMState, vfp.zregs[15]) },
+    { HV_SIMD_FP_REG_Q16, offsetof(CPUARMState, vfp.zregs[16]) },
+    { HV_SIMD_FP_REG_Q17, offsetof(CPUARMState, vfp.zregs[17]) },
+    { HV_SIMD_FP_REG_Q18, offsetof(CPUARMState, vfp.zregs[18]) },
+    { HV_SIMD_FP_REG_Q19, offsetof(CPUARMState, vfp.zregs[19]) },
+    { HV_SIMD_FP_REG_Q20, offsetof(CPUARMState, vfp.zregs[20]) },
+    { HV_SIMD_FP_REG_Q21, offsetof(CPUARMState, vfp.zregs[21]) },
+    { HV_SIMD_FP_REG_Q22, offsetof(CPUARMState, vfp.zregs[22]) },
+    { HV_SIMD_FP_REG_Q23, offsetof(CPUARMState, vfp.zregs[23]) },
+    { HV_SIMD_FP_REG_Q24, offsetof(CPUARMState, vfp.zregs[24]) },
+    { HV_SIMD_FP_REG_Q25, offsetof(CPUARMState, vfp.zregs[25]) },
+    { HV_SIMD_FP_REG_Q26, offsetof(CPUARMState, vfp.zregs[26]) },
+    { HV_SIMD_FP_REG_Q27, offsetof(CPUARMState, vfp.zregs[27]) },
+    { HV_SIMD_FP_REG_Q28, offsetof(CPUARMState, vfp.zregs[28]) },
+    { HV_SIMD_FP_REG_Q29, offsetof(CPUARMState, vfp.zregs[29]) },
+    { HV_SIMD_FP_REG_Q30, offsetof(CPUARMState, vfp.zregs[30]) },
+    { HV_SIMD_FP_REG_Q31, offsetof(CPUARMState, vfp.zregs[31]) },
+};
+
+struct hvf_sreg_match {
+    int reg;
+    uint32_t key;
+};
+
+static const struct hvf_sreg_match hvf_sreg_match[] = {
+    { HV_SYS_REG_DBGBVR0_EL1, HVF_SYSREG(0, 0, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR0_EL1, HVF_SYSREG(0, 0, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR0_EL1, HVF_SYSREG(0, 0, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR0_EL1, HVF_SYSREG(0, 0, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR1_EL1, HVF_SYSREG(0, 1, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR1_EL1, HVF_SYSREG(0, 1, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR1_EL1, HVF_SYSREG(0, 1, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR1_EL1, HVF_SYSREG(0, 1, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR2_EL1, HVF_SYSREG(0, 2, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR2_EL1, HVF_SYSREG(0, 2, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR2_EL1, HVF_SYSREG(0, 2, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR2_EL1, HVF_SYSREG(0, 2, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR3_EL1, HVF_SYSREG(0, 3, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR3_EL1, HVF_SYSREG(0, 3, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR3_EL1, HVF_SYSREG(0, 3, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR3_EL1, HVF_SYSREG(0, 3, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR4_EL1, HVF_SYSREG(0, 4, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR4_EL1, HVF_SYSREG(0, 4, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR4_EL1, HVF_SYSREG(0, 4, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR4_EL1, HVF_SYSREG(0, 4, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR5_EL1, HVF_SYSREG(0, 5, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR5_EL1, HVF_SYSREG(0, 5, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR5_EL1, HVF_SYSREG(0, 5, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR5_EL1, HVF_SYSREG(0, 5, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR6_EL1, HVF_SYSREG(0, 6, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR6_EL1, HVF_SYSREG(0, 6, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR6_EL1, HVF_SYSREG(0, 6, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR6_EL1, HVF_SYSREG(0, 6, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR7_EL1, HVF_SYSREG(0, 7, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR7_EL1, HVF_SYSREG(0, 7, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR7_EL1, HVF_SYSREG(0, 7, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR7_EL1, HVF_SYSREG(0, 7, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR8_EL1, HVF_SYSREG(0, 8, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR8_EL1, HVF_SYSREG(0, 8, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR8_EL1, HVF_SYSREG(0, 8, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR8_EL1, HVF_SYSREG(0, 8, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR9_EL1, HVF_SYSREG(0, 9, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR9_EL1, HVF_SYSREG(0, 9, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR9_EL1, HVF_SYSREG(0, 9, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR9_EL1, HVF_SYSREG(0, 9, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR10_EL1, HVF_SYSREG(0, 10, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR10_EL1, HVF_SYSREG(0, 10, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR10_EL1, HVF_SYSREG(0, 10, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR10_EL1, HVF_SYSREG(0, 10, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR11_EL1, HVF_SYSREG(0, 11, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR11_EL1, HVF_SYSREG(0, 11, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR11_EL1, HVF_SYSREG(0, 11, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR11_EL1, HVF_SYSREG(0, 11, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR12_EL1, HVF_SYSREG(0, 12, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR12_EL1, HVF_SYSREG(0, 12, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR12_EL1, HVF_SYSREG(0, 12, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR12_EL1, HVF_SYSREG(0, 12, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR13_EL1, HVF_SYSREG(0, 13, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR13_EL1, HVF_SYSREG(0, 13, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR13_EL1, HVF_SYSREG(0, 13, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR13_EL1, HVF_SYSREG(0, 13, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR14_EL1, HVF_SYSREG(0, 14, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR14_EL1, HVF_SYSREG(0, 14, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR14_EL1, HVF_SYSREG(0, 14, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR14_EL1, HVF_SYSREG(0, 14, 14, 0, 7) },
+
+    { HV_SYS_REG_DBGBVR15_EL1, HVF_SYSREG(0, 15, 14, 0, 4) },
+    { HV_SYS_REG_DBGBCR15_EL1, HVF_SYSREG(0, 15, 14, 0, 5) },
+    { HV_SYS_REG_DBGWVR15_EL1, HVF_SYSREG(0, 15, 14, 0, 6) },
+    { HV_SYS_REG_DBGWCR15_EL1, HVF_SYSREG(0, 15, 14, 0, 7) },
+
+#ifdef SYNC_NO_RAW_REGS
+    /*
+     * The registers below are manually synced on init because they are
+     * marked as NO_RAW. We still list them to make number space sync easier.
+     */
+    { HV_SYS_REG_MDCCINT_EL1, HVF_SYSREG(0, 2, 2, 0, 0) },
+    { HV_SYS_REG_MIDR_EL1, HVF_SYSREG(0, 0, 3, 0, 0) },
+    { HV_SYS_REG_MPIDR_EL1, HVF_SYSREG(0, 0, 3, 0, 5) },
+    { HV_SYS_REG_ID_AA64PFR0_EL1, HVF_SYSREG(0, 4, 3, 0, 0) },
+#endif
+    { HV_SYS_REG_ID_AA64PFR1_EL1, HVF_SYSREG(0, 4, 3, 0, 2) },
+    { HV_SYS_REG_ID_AA64DFR0_EL1, HVF_SYSREG(0, 5, 3, 0, 0) },
+    { HV_SYS_REG_ID_AA64DFR1_EL1, HVF_SYSREG(0, 5, 3, 0, 1) },
+    { HV_SYS_REG_ID_AA64ISAR0_EL1, HVF_SYSREG(0, 6, 3, 0, 0) },
+    { HV_SYS_REG_ID_AA64ISAR1_EL1, HVF_SYSREG(0, 6, 3, 0, 1) },
+#ifdef SYNC_NO_MMFR0
+    /* We keep the hardware MMFR0 around. HW limits are there anyway */
+    { HV_SYS_REG_ID_AA64MMFR0_EL1, HVF_SYSREG(0, 7, 3, 0, 0) },
+#endif
+    { HV_SYS_REG_ID_AA64MMFR1_EL1, HVF_SYSREG(0, 7, 3, 0, 1) },
+    { HV_SYS_REG_ID_AA64MMFR2_EL1, HVF_SYSREG(0, 7, 3, 0, 2) },
+
+    { HV_SYS_REG_MDSCR_EL1, HVF_SYSREG(0, 2, 2, 0, 2) },
+    { HV_SYS_REG_SCTLR_EL1, HVF_SYSREG(1, 0, 3, 0, 0) },
+    { HV_SYS_REG_CPACR_EL1, HVF_SYSREG(1, 0, 3, 0, 2) },
+    { HV_SYS_REG_TTBR0_EL1, HVF_SYSREG(2, 0, 3, 0, 0) },
+    { HV_SYS_REG_TTBR1_EL1, HVF_SYSREG(2, 0, 3, 0, 1) },
+    { HV_SYS_REG_TCR_EL1, HVF_SYSREG(2, 0, 3, 0, 2) },
+
+    { HV_SYS_REG_APIAKEYLO_EL1, HVF_SYSREG(2, 1, 3, 0, 0) },
+    { HV_SYS_REG_APIAKEYHI_EL1, HVF_SYSREG(2, 1, 3, 0, 1) },
+    { HV_SYS_REG_APIBKEYLO_EL1, HVF_SYSREG(2, 1, 3, 0, 2) },
+    { HV_SYS_REG_APIBKEYHI_EL1, HVF_SYSREG(2, 1, 3, 0, 3) },
+    { HV_SYS_REG_APDAKEYLO_EL1, HVF_SYSREG(2, 2, 3, 0, 0) },
+    { HV_SYS_REG_APDAKEYHI_EL1, HVF_SYSREG(2, 2, 3, 0, 1) },
+    { HV_SYS_REG_APDBKEYLO_EL1, HVF_SYSREG(2, 2, 3, 0, 2) },
+    { HV_SYS_REG_APDBKEYHI_EL1, HVF_SYSREG(2, 2, 3, 0, 3) },
+    { HV_SYS_REG_APGAKEYLO_EL1, HVF_SYSREG(2, 3, 3, 0, 0) },
+    { HV_SYS_REG_APGAKEYHI_EL1, HVF_SYSREG(2, 3, 3, 0, 1) },
+
+    { HV_SYS_REG_SPSR_EL1, HVF_SYSREG(4, 0, 3, 1, 0) },
+    { HV_SYS_REG_ELR_EL1, HVF_SYSREG(4, 0, 3, 0, 1) },
+    { HV_SYS_REG_SP_EL0, HVF_SYSREG(4, 1, 3, 0, 0) },
+    { HV_SYS_REG_AFSR0_EL1, HVF_SYSREG(5, 1, 3, 0, 0) },
+    { HV_SYS_REG_AFSR1_EL1, HVF_SYSREG(5, 1, 3, 0, 1) },
+    { HV_SYS_REG_ESR_EL1, HVF_SYSREG(5, 2, 3, 0, 0) },
+    { HV_SYS_REG_FAR_EL1, HVF_SYSREG(6, 0, 3, 0, 0) },
+    { HV_SYS_REG_PAR_EL1, HVF_SYSREG(7, 4, 3, 0, 0) },
+    { HV_SYS_REG_MAIR_EL1, HVF_SYSREG(10, 2, 3, 0, 0) },
+    { HV_SYS_REG_AMAIR_EL1, HVF_SYSREG(10, 3, 3, 0, 0) },
+    { HV_SYS_REG_VBAR_EL1, HVF_SYSREG(12, 0, 3, 0, 0) },
+    { HV_SYS_REG_CONTEXTIDR_EL1, HVF_SYSREG(13, 0, 3, 0, 1) },
+    { HV_SYS_REG_TPIDR_EL1, HVF_SYSREG(13, 0, 3, 0, 4) },
+    { HV_SYS_REG_CNTKCTL_EL1, HVF_SYSREG(14, 1, 3, 0, 0) },
+    { HV_SYS_REG_CSSELR_EL1, HVF_SYSREG(0, 0, 3, 2, 0) },
+    { HV_SYS_REG_TPIDR_EL0, HVF_SYSREG(13, 0, 3, 3, 2) },
+    { HV_SYS_REG_TPIDRRO_EL0, HVF_SYSREG(13, 0, 3, 3, 3) },
+    { HV_SYS_REG_CNTV_CTL_EL0, HVF_SYSREG(14, 3, 3, 3, 1) },
+    { HV_SYS_REG_CNTV_CVAL_EL0, HVF_SYSREG(14, 3, 3, 3, 2) },
+    { HV_SYS_REG_SP_EL1, HVF_SYSREG(4, 1, 3, 4, 0) },
+};
+
+int hvf_get_registers(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+    hv_return_t ret;
+    uint64_t val;
+    hv_simd_fp_uchar16_t fpval;
+    int i;
+
+    for (i = 0; i < ARRAY_SIZE(hvf_reg_match); i++) {
+        ret = hv_vcpu_get_reg(cpu->hvf->fd, hvf_reg_match[i].reg, &val);
+        *(uint64_t *)((void *)env + hvf_reg_match[i].offset) = val;
+        assert_hvf_ok(ret);
+    }
+
+    for (i = 0; i < ARRAY_SIZE(hvf_fpreg_match); i++) {
+        ret = hv_vcpu_get_simd_fp_reg(cpu->hvf->fd, hvf_fpreg_match[i].reg,
+                                      &fpval);
+        memcpy((void *)env + hvf_fpreg_match[i].offset, &fpval, sizeof(fpval));
+        assert_hvf_ok(ret);
+    }
+
+    val = 0;
+    ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_FPCR, &val);
+    assert_hvf_ok(ret);
+    vfp_set_fpcr(env, val);
+
+    val = 0;
+    ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_FPSR, &val);
+    assert_hvf_ok(ret);
+    vfp_set_fpsr(env, val);
+
+    ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_CPSR, &val);
+    assert_hvf_ok(ret);
+    pstate_write(env, val);
+
+    for (i = 0; i < ARRAY_SIZE(hvf_sreg_match); i++) {
+        ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, hvf_sreg_match[i].reg, &val);
+        assert_hvf_ok(ret);
+
+        arm_cpu->cpreg_values[i] = val;
+    }
+    write_list_to_cpustate(arm_cpu);
+
+    return 0;
+}
+
+int hvf_put_registers(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+    hv_return_t ret;
+    uint64_t val;
+    hv_simd_fp_uchar16_t fpval;
+    int i;
+
+    for (i = 0; i < ARRAY_SIZE(hvf_reg_match); i++) {
+        val = *(uint64_t *)((void *)env + hvf_reg_match[i].offset);
+        ret = hv_vcpu_set_reg(cpu->hvf->fd, hvf_reg_match[i].reg, val);
+        assert_hvf_ok(ret);
+    }
+
+    for (i = 0; i < ARRAY_SIZE(hvf_fpreg_match); i++) {
+        memcpy(&fpval, (void *)env + hvf_fpreg_match[i].offset, sizeof(fpval));
+        ret = hv_vcpu_set_simd_fp_reg(cpu->hvf->fd, hvf_fpreg_match[i].reg,
+                                      fpval);
+        assert_hvf_ok(ret);
+    }
+
+    ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_FPCR, vfp_get_fpcr(env));
+    assert_hvf_ok(ret);
+
+    ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_FPSR, vfp_get_fpsr(env));
+    assert_hvf_ok(ret);
+
+    ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_CPSR, pstate_read(env));
+    assert_hvf_ok(ret);
+
+    write_cpustate_to_list(arm_cpu, false);
+    for (i = 0; i < ARRAY_SIZE(hvf_sreg_match); i++) {
+        val = arm_cpu->cpreg_values[i];
+        ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, hvf_sreg_match[i].reg, val);
+        assert_hvf_ok(ret);
+    }
+
+    return 0;
+}
+
+static void flush_cpu_state(CPUState *cpu)
+{
+    if (cpu->vcpu_dirty) {
+        hvf_put_registers(cpu);
+        cpu->vcpu_dirty = false;
+    }
+}
+
+static void hvf_set_reg(CPUState *cpu, int rt, uint64_t val)
+{
+    hv_return_t r;
+
+    flush_cpu_state(cpu);
+
+    if (rt < 31) {
+        r = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_X0 + rt, val);
+        assert_hvf_ok(r);
+    }
+}
+
+static uint64_t hvf_get_reg(CPUState *cpu, int rt)
+{
+    uint64_t val = 0;
+    hv_return_t r;
+
+    flush_cpu_state(cpu);
+
+    if (rt < 31) {
+        r = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_X0 + rt, &val);
+        assert_hvf_ok(r);
+    }
+
+    return val;
+}
+
+void hvf_arch_vcpu_destroy(CPUState *cpu)
+{
+}
+
+int hvf_arch_init_vcpu(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+    uint32_t sregs_match_len = ARRAY_SIZE(hvf_sreg_match);
+    uint64_t pfr;
+    hv_return_t ret;
+    int i;
+
+    env->aarch64 = 1;
+    asm volatile("mrs %0, cntfrq_el0" : "=r"(arm_cpu->gt_cntfrq_hz));
+
+    /* Allocate enough space for our sysreg sync */
+    arm_cpu->cpreg_indexes = g_renew(uint64_t, arm_cpu->cpreg_indexes,
+                                     sregs_match_len);
+    arm_cpu->cpreg_values = g_renew(uint64_t, arm_cpu->cpreg_values,
+                                    sregs_match_len);
+    arm_cpu->cpreg_vmstate_indexes = g_renew(uint64_t,
+                                             arm_cpu->cpreg_vmstate_indexes,
+                                             sregs_match_len);
+    arm_cpu->cpreg_vmstate_values = g_renew(uint64_t,
+                                            arm_cpu->cpreg_vmstate_values,
+                                            sregs_match_len);
+
+    memset(arm_cpu->cpreg_values, 0, sregs_match_len * sizeof(uint64_t));
+    arm_cpu->cpreg_array_len = sregs_match_len;
+    arm_cpu->cpreg_vmstate_array_len = sregs_match_len;
+
+    /* Populate cp list for all known sysregs */
+    for (i = 0; i < sregs_match_len; i++) {
+        const ARMCPRegInfo *ri;
+
+        arm_cpu->cpreg_indexes[i] = cpreg_to_kvm_id(hvf_sreg_match[i].key);
+
+        ri = get_arm_cp_reginfo(arm_cpu->cp_regs, hvf_sreg_match[i].key);
+        if (ri) {
+            assert(!(ri->type & ARM_CP_NO_RAW));
+        }
+    }
+    write_cpustate_to_list(arm_cpu, false);
+
+    /* Set CP_NO_RAW system registers on init */
+    ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_MIDR_EL1,
+                              arm_cpu->midr);
+    assert_hvf_ok(ret);
+
+    ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_MPIDR_EL1,
+                              arm_cpu->mp_affinity);
+    assert_hvf_ok(ret);
+
+    ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64PFR0_EL1, &pfr);
+    assert_hvf_ok(ret);
+    pfr |= env->gicv3state ? (1 << 24) : 0;
+    ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64PFR0_EL1, pfr);
+    assert_hvf_ok(ret);
+
+    /* We're limited to underlying hardware caps, override internal versions */
+    ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64MMFR0_EL1,
+                              &arm_cpu->isar.id_aa64mmfr0);
+    assert_hvf_ok(ret);
+
+    return 0;
+}
+
+void hvf_kick_vcpu_thread(CPUState *cpu)
+{
+    hv_vcpus_exit(&cpu->hvf->fd, 1);
+}
+
+static void hvf_raise_exception(CPUARMState *env, uint32_t excp,
+                                uint32_t syndrome)
+{
+    unsigned int new_el = 1;
+    unsigned int old_mode = pstate_read(env);
+    unsigned int new_mode = aarch64_pstate_mode(new_el, true);
+    target_ulong addr = env->cp15.vbar_el[new_el];
+
+    env->cp15.esr_el[new_el] = syndrome;
+    aarch64_save_sp(env, arm_current_el(env));
+    env->elr_el[new_el] = env->pc;
+    env->banked_spsr[aarch64_banked_spsr_index(new_el)] = old_mode;
+    pstate_write(env, PSTATE_DAIF | new_mode);
+    aarch64_restore_sp(env, new_el);
+    env->pc = addr;
+}
+
+static uint64_t hvf_sysreg_read(CPUState *cpu, uint32_t reg)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    uint64_t val = 0;
+
+    switch (reg) {
+    case SYSREG_CNTPCT_EL0:
+        val = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) /
+              gt_cntfrq_period_ns(arm_cpu);
+        break;
+    case SYSREG_PMCCNTR_EL0:
+        val = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
+        break;
+    default:
+        trace_hvf_unhandled_sysreg_read(reg,
+                                        (reg >> 20) & 0x3,
+                                        (reg >> 14) & 0x7,
+                                        (reg >> 10) & 0xf,
+                                        (reg >> 1) & 0xf,
+                                        (reg >> 17) & 0x7);
+        break;
+    }
+
+    return val;
+}
+
+static void hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
+{
+    switch (reg) {
+    case SYSREG_CNTPCT_EL0:
+        break;
+    default:
+        trace_hvf_unhandled_sysreg_write(reg,
+                                         (reg >> 20) & 0x3,
+                                         (reg >> 14) & 0x7,
+                                         (reg >> 10) & 0xf,
+                                         (reg >> 1) & 0xf,
+                                         (reg >> 17) & 0x7);
+        break;
+    }
+}
+
+static int hvf_inject_interrupts(CPUState *cpu)
+{
+    if (cpu->interrupt_request & CPU_INTERRUPT_FIQ) {
+        trace_hvf_inject_fiq();
+        hv_vcpu_set_pending_interrupt(cpu->hvf->fd, HV_INTERRUPT_TYPE_FIQ,
+                                      true);
+    }
+
+    if (cpu->interrupt_request & CPU_INTERRUPT_HARD) {
+        trace_hvf_inject_irq();
+        hv_vcpu_set_pending_interrupt(cpu->hvf->fd, HV_INTERRUPT_TYPE_IRQ,
+                                      true);
+    }
+
+    return 0;
+}
+
+static void hvf_sync_vtimer(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    hv_return_t r;
+    uint64_t ctl;
+    bool irq_state;
+
+    if (!cpu->hvf->vtimer_masked) {
+        /* We will get notified on vtimer changes by hvf, nothing to do */
+        return;
+    }
+
+    r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CTL_EL0, &ctl);
+    assert_hvf_ok(r);
+
+    irq_state = (ctl & (TMR_CTL_ENABLE | TMR_CTL_IMASK | TMR_CTL_ISTATUS)) ==
+                (TMR_CTL_ENABLE | TMR_CTL_ISTATUS);
+    qemu_set_irq(arm_cpu->gt_timer_outputs[GTIMER_VIRT], irq_state);
+
+    if (!irq_state) {
+        /* Timer no longer asserting, we can unmask it */
+        hv_vcpu_set_vtimer_mask(cpu->hvf->fd, false);
+        cpu->hvf->vtimer_masked = false;
+    }
+}
+
+int hvf_vcpu_exec(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+    hv_vcpu_exit_t *hvf_exit = cpu->hvf->exit;
+    hv_return_t r;
+    bool advance_pc = false;
+
+    flush_cpu_state(cpu);
+
+    hvf_sync_vtimer(cpu);
+
+    if (hvf_inject_interrupts(cpu)) {
+        return EXCP_INTERRUPT;
+    }
+
+    if (cpu->halted) {
+        return EXCP_HLT;
+    }
+
+    qemu_mutex_unlock_iothread();
+    assert_hvf_ok(hv_vcpu_run(cpu->hvf->fd));
+
+    /* handle VMEXIT */
+    uint64_t exit_reason = hvf_exit->reason;
+    uint64_t syndrome = hvf_exit->exception.syndrome;
+    uint32_t ec = syn_get_ec(syndrome);
+
+    qemu_mutex_lock_iothread();
+    switch (exit_reason) {
+    case HV_EXIT_REASON_EXCEPTION:
+        /* This is the main one, handle below. */
+        break;
+    case HV_EXIT_REASON_VTIMER_ACTIVATED:
+        qemu_set_irq(arm_cpu->gt_timer_outputs[GTIMER_VIRT], 1);
+        cpu->hvf->vtimer_masked = true;
+        return 0;
+    case HV_EXIT_REASON_CANCELED:
+        /* we got kicked, no exit to process */
+        return 0;
+    default:
+        assert(0);
+    }
+
+    switch (ec) {
+    case EC_DATAABORT: {
+        bool isv = syndrome & ARM_EL_ISV;
+        bool iswrite = (syndrome >> 6) & 1;
+        bool s1ptw = (syndrome >> 7) & 1;
+        uint32_t sas = (syndrome >> 22) & 3;
+        uint32_t len = 1 << sas;
+        uint32_t srt = (syndrome >> 16) & 0x1f;
+        uint64_t val = 0;
+
+        trace_hvf_data_abort(env->pc, hvf_exit->exception.virtual_address,
+                             hvf_exit->exception.physical_address, isv,
+                             iswrite, s1ptw, len, srt);
+
+        assert(isv);
+
+        if (iswrite) {
+            val = hvf_get_reg(cpu, srt);
+            address_space_write(&address_space_memory,
+                                hvf_exit->exception.physical_address,
+                                MEMTXATTRS_UNSPECIFIED, &val, len);
+        } else {
+            address_space_read(&address_space_memory,
+                               hvf_exit->exception.physical_address,
+                               MEMTXATTRS_UNSPECIFIED, &val, len);
+            hvf_set_reg(cpu, srt, val);
+        }
+
+        advance_pc = true;
+        break;
+    }
+    case EC_SYSTEMREGISTERTRAP: {
+        bool isread = (syndrome >> 0) & 1;
+        uint32_t rt = (syndrome >> 5) & 0x1f;
+        uint32_t reg = syndrome & SYSREG_MASK;
+        uint64_t val = 0;
+
+        if (isread) {
+            val = hvf_sysreg_read(cpu, reg);
+            trace_hvf_sysreg_read(reg,
+                                  (reg >> 20) & 0x3,
+                                  (reg >> 14) & 0x7,
+                                  (reg >> 10) & 0xf,
+                                  (reg >> 1) & 0xf,
+                                  (reg >> 17) & 0x7,
+                                  val);
+            hvf_set_reg(cpu, rt, val);
+        } else {
+            val = hvf_get_reg(cpu, rt);
+            trace_hvf_sysreg_write(reg,
+                                   (reg >> 20) & 0x3,
+                                   (reg >> 14) & 0x7,
+                                   (reg >> 10) & 0xf,
+                                   (reg >> 1) & 0xf,
+                                   (reg >> 17) & 0x7,
+                                   val);
+            hvf_sysreg_write(cpu, reg, val);
+        }
+
+        advance_pc = true;
+        break;
+    }
+    case EC_WFX_TRAP:
+        advance_pc = true;
+        break;
+    case EC_AA64_HVC:
+        cpu_synchronize_state(cpu);
+        trace_hvf_unknown_hvf(env->xregs[0]);
+        hvf_raise_exception(env, EXCP_UDEF, syn_uncategorized());
+        break;
+    case EC_AA64_SMC:
+        cpu_synchronize_state(cpu);
+        trace_hvf_unknown_smc(env->xregs[0]);
+        hvf_raise_exception(env, EXCP_UDEF, syn_uncategorized());
+        break;
+    default:
+        cpu_synchronize_state(cpu);
+        trace_hvf_exit(syndrome, ec, env->pc);
+        error_report("0x%llx: unhandled exit 0x%llx", env->pc, exit_reason);
+    }
+
+    if (advance_pc) {
+        uint64_t pc;
+
+        flush_cpu_state(cpu);
+
+        r = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_PC, &pc);
+        assert_hvf_ok(r);
+        pc += 4;
+        r = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_PC, pc);
+        assert_hvf_ok(r);
+    }
+
+    return 0;
+}
diff --git a/target/arm/hvf/trace-events b/target/arm/hvf/trace-events
new file mode 100644
index 0000000000..49a547dcf6
--- /dev/null
+++ b/target/arm/hvf/trace-events
@@ -0,0 +1,10 @@
+hvf_unhandled_sysreg_read(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2) "unhandled sysreg read 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d)"
+hvf_unhandled_sysreg_write(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2) "unhandled sysreg write 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d)"
+hvf_inject_fiq(void) "injecting FIQ"
+hvf_inject_irq(void) "injecting IRQ"
+hvf_data_abort(uint64_t pc, uint64_t va, uint64_t pa, bool isv, bool iswrite, bool s1ptw, uint32_t len, uint32_t srt) "data abort: [pc=0x%"PRIx64" va=0x%016"PRIx64" pa=0x%016"PRIx64" isv=%d iswrite=%d s1ptw=%d len=%d srt=%d]"
+hvf_sysreg_read(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2, uint64_t val) "sysreg read 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d) = 0x%016"PRIx64
+hvf_sysreg_write(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2, uint64_t val) "sysreg write 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d, val=0x%016"PRIx64")"
+hvf_unknown_hvf(uint64_t x0) "unknown HVC! 0x%016"PRIx64
+hvf_unknown_smc(uint64_t x0) "unknown SMC! 0x%016"PRIx64
+hvf_exit(uint64_t syndrome, uint32_t ec, uint64_t pc) "exit: 0x%"PRIx64" [ec=0x%x pc=0x%"PRIx64"]"
-- 
2.30.1 (Apple Git-130)




* [PATCH v8 14/19] arm/hvf: Add a WFI handler
  2021-05-19 20:22 [PATCH v8 00/19] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (12 preceding siblings ...)
  2021-05-19 20:22 ` [PATCH v8 13/19] hvf: Add Apple Silicon support Alexander Graf
@ 2021-05-19 20:22 ` Alexander Graf
  2021-05-27 10:53   ` Sergio Lopez
  2021-06-15 10:38   ` Peter Maydell
  2021-05-19 20:22 ` [PATCH v8 15/19] hvf: arm: Implement -cpu host Alexander Graf
                   ` (6 subsequent siblings)
  20 siblings, 2 replies; 66+ messages in thread
From: Alexander Graf @ 2021-05-19 20:22 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Peter Collingbourne

From: Peter Collingbourne <pcc@google.com>

Sleep on WFI until the VTIMER is due but allow ourselves to be woken
up on IPI.

In this implementation IPI is blocked on the CPU thread at startup and
pselect() is used to atomically unblock the signal and begin sleeping.
The signal is sent unconditionally so there's no need to worry about
races between actually sleeping and the "we think we're sleeping"
state. It may lead to an extra wakeup but that's better than missing
it entirely.
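
A standalone sketch of the pattern (illustrative only; SIGUSR1 stands
in for QEMU's SIG_IPI):

  #include <pthread.h>
  #include <signal.h>
  #include <stddef.h>
  #include <sys/select.h>

  static sigset_t unblock_ipi_mask;

  static void dummy_handler(int sig) { }

  /* Run once on the vCPU thread: keep the kick signal blocked except
   * inside pselect(), where it is allowed to interrupt the sleep. */
  static void ipi_setup(void)
  {
      struct sigaction sa = { .sa_handler = dummy_handler };
      sigaction(SIGUSR1, &sa, NULL);

      sigset_t block;
      sigemptyset(&block);
      sigaddset(&block, SIGUSR1);
      pthread_sigmask(SIG_BLOCK, &block, &unblock_ipi_mask);
      sigdelset(&unblock_ipi_mask, SIGUSR1);
  }

  static void wait_for_ipi(const struct timespec *ts)
  {
      /* pselect() swaps the signal mask in atomically with going to
       * sleep: a kick sent while we were still running stays pending
       * and interrupts the call right away, so it cannot be lost in
       * the window between deciding to sleep and actually sleeping. */
      pselect(0, NULL, NULL, NULL, ts, &unblock_ipi_mask);
  }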

Signed-off-by: Peter Collingbourne <pcc@google.com>
[agraf: Remove unused 'set' variable, always advance PC on WFX trap]
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>

---

v6 -> v7:

  - Move WFI into function
  - Improve comment wording
---
 accel/hvf/hvf-accel-ops.c |  5 ++-
 include/sysemu/hvf_int.h  |  1 +
 target/arm/hvf/hvf.c      | 68 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 71 insertions(+), 3 deletions(-)

diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index 48e402ef57..63ec8a6f25 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -369,15 +369,14 @@ static int hvf_init_vcpu(CPUState *cpu)
     cpu->hvf = g_malloc0(sizeof(*cpu->hvf));
 
     /* init cpu signals */
-    sigset_t set;
     struct sigaction sigact;
 
     memset(&sigact, 0, sizeof(sigact));
     sigact.sa_handler = dummy_signal;
     sigaction(SIG_IPI, &sigact, NULL);
 
-    pthread_sigmask(SIG_BLOCK, NULL, &set);
-    sigdelset(&set, SIG_IPI);
+    pthread_sigmask(SIG_BLOCK, NULL, &cpu->hvf->unblock_ipi_mask);
+    sigdelset(&cpu->hvf->unblock_ipi_mask, SIG_IPI);
 
 #ifdef __aarch64__
     r = hv_vcpu_create(&cpu->hvf->fd, (hv_vcpu_exit_t **)&cpu->hvf->exit, NULL);
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
index e52d67ed5c..6d4eef8065 100644
--- a/include/sysemu/hvf_int.h
+++ b/include/sysemu/hvf_int.h
@@ -51,6 +51,7 @@ struct hvf_vcpu_state {
     uint64_t fd;
     void *exit;
     bool vtimer_masked;
+    sigset_t unblock_ipi_mask;
 };
 
 void assert_hvf_ok(hv_return_t ret);
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index 3934c05979..67002efd36 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -2,6 +2,7 @@
  * QEMU Hypervisor.framework support for Apple Silicon
  *
  * Copyright 2020 Alexander Graf <agraf@csgraf.de>
+ * Copyright 2020 Google LLC
  *
  * This work is licensed under the terms of the GNU GPL, version 2 or later.
  * See the COPYING file in the top-level directory.
@@ -17,6 +18,8 @@
 #include "sysemu/hvf_int.h"
 #include "sysemu/hw_accel.h"
 
+#include <mach/mach_time.h>
+
 #include "exec/address-spaces.h"
 #include "hw/irq.h"
 #include "qemu/main-loop.h"
@@ -457,6 +460,7 @@ int hvf_arch_init_vcpu(CPUState *cpu)
 
 void hvf_kick_vcpu_thread(CPUState *cpu)
 {
+    cpus_kick_thread(cpu);
     hv_vcpus_exit(&cpu->hvf->fd, 1);
 }
 
@@ -536,6 +540,67 @@ static int hvf_inject_interrupts(CPUState *cpu)
     return 0;
 }
 
+static void hvf_wait_for_ipi(CPUState *cpu, struct timespec *ts)
+{
+    /*
+     * Use pselect to sleep so that other threads can IPI us while we're
+     * sleeping.
+     */
+    qatomic_mb_set(&cpu->thread_kicked, false);
+    qemu_mutex_unlock_iothread();
+    pselect(0, 0, 0, 0, ts, &cpu->hvf->unblock_ipi_mask);
+    qemu_mutex_lock_iothread();
+}
+
+static void hvf_wfi(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    hv_return_t r;
+    uint64_t ctl;
+
+    if (cpu->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_FIQ)) {
+        /* Interrupt pending, no need to wait */
+        return;
+    }
+
+    r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CTL_EL0,
+                            &ctl);
+    assert_hvf_ok(r);
+
+    if (!(ctl & 1) || (ctl & 2)) {
+        /* Timer disabled or masked, just wait for an IPI. */
+        hvf_wait_for_ipi(cpu, NULL);
+        return;
+    }
+
+    uint64_t cval;
+    r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CVAL_EL0,
+                            &cval);
+    assert_hvf_ok(r);
+
+    int64_t ticks_to_sleep = cval - mach_absolute_time();
+    if (ticks_to_sleep < 0) {
+        return;
+    }
+
+    uint64_t seconds = ticks_to_sleep / arm_cpu->gt_cntfrq_hz;
+    uint64_t nanos =
+        (ticks_to_sleep - arm_cpu->gt_cntfrq_hz * seconds) *
+        1000000000 / arm_cpu->gt_cntfrq_hz;
+
+    /*
+     * Don't sleep for less than the time a context switch would take,
+     * so that we can satisfy fast timer requests on the same CPU.
+     * Measurements on M1 show the sweet spot to be ~2ms.
+     */
+    if (!seconds && nanos < 2000000) {
+        return;
+    }
+
+    struct timespec ts = { seconds, nanos };
+    hvf_wait_for_ipi(cpu, &ts);
+}
+
 static void hvf_sync_vtimer(CPUState *cpu)
 {
     ARMCPU *arm_cpu = ARM_CPU(cpu);
@@ -670,6 +735,9 @@ int hvf_vcpu_exec(CPUState *cpu)
     }
     case EC_WFX_TRAP:
         advance_pc = true;
+        if (!(syndrome & WFX_IS_WFE)) {
+            hvf_wfi(cpu);
+        }
         break;
     case EC_AA64_HVC:
         cpu_synchronize_state(cpu);
-- 
2.30.1 (Apple Git-130)




* [PATCH v8 15/19] hvf: arm: Implement -cpu host
  2021-05-19 20:22 [PATCH v8 00/19] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (13 preceding siblings ...)
  2021-05-19 20:22 ` [PATCH v8 14/19] arm/hvf: Add a WFI handler Alexander Graf
@ 2021-05-19 20:22 ` Alexander Graf
  2021-05-27 11:18   ` Sergio Lopez
  2021-06-15 10:56   ` Peter Maydell
  2021-05-19 20:22 ` [PATCH v8 16/19] hvf: arm: Implement PSCI handling Alexander Graf
                   ` (5 subsequent siblings)
  20 siblings, 2 replies; 66+ messages in thread
From: Alexander Graf @ 2021-05-19 20:22 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Peter Collingbourne

Now that we have working system register sync, we push more target CPU
properties into the virtual machine. That might be useful in some
situations, but it is not what users typically want.

So let's add a -cpu host option that allows them to explicitly pass all
CPU capabilities of their host CPU into the guest.
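
For instance, on an Apple Silicon host with an HVF-enabled build
(illustrative invocation, remaining arguments elided):

  qemu-system-aarch64 -accel hvf -M virt -cpu host ...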

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>

---

v6 -> v7:

  - Move function define to own header
  - Do not propagate SVE features for HVF
  - Remove stray whitespace change
  - Verify that EL0 and EL1 do not allow AArch32 mode
  - Only probe host CPU features once
---
 target/arm/cpu.c     |  9 ++++--
 target/arm/cpu.h     |  2 ++
 target/arm/hvf/hvf.c | 72 ++++++++++++++++++++++++++++++++++++++++++++
 target/arm/hvf_arm.h | 19 ++++++++++++
 target/arm/kvm_arm.h |  2 --
 5 files changed, 100 insertions(+), 4 deletions(-)
 create mode 100644 target/arm/hvf_arm.h

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 4eb0d2f85c..762d8a6d26 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -39,6 +39,7 @@
 #include "sysemu/tcg.h"
 #include "sysemu/hw_accel.h"
 #include "kvm_arm.h"
+#include "hvf_arm.h"
 #include "disas/capstone.h"
 #include "fpu/softfloat.h"
 
@@ -1998,15 +1999,19 @@ static void arm_cpu_class_init(ObjectClass *oc, void *data)
 #endif /* CONFIG_TCG */
 }
 
-#ifdef CONFIG_KVM
+#if defined(CONFIG_KVM) || defined(CONFIG_HVF)
 static void arm_host_initfn(Object *obj)
 {
     ARMCPU *cpu = ARM_CPU(obj);
 
+#ifdef CONFIG_KVM
     kvm_arm_set_cpu_features_from_host(cpu);
     if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
         aarch64_add_sve_properties(obj);
     }
+#else
+    hvf_arm_set_cpu_features_from_host(cpu);
+#endif
     arm_cpu_post_init(obj);
 }
 
@@ -2066,7 +2071,7 @@ static void arm_cpu_register_types(void)
 {
     type_register_static(&arm_cpu_type_info);
 
-#ifdef CONFIG_KVM
+#if defined(CONFIG_KVM) || defined(CONFIG_HVF)
     type_register_static(&host_arm_cpu_type_info);
 #endif
 }
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 616b393253..4360e77183 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -2977,6 +2977,8 @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
 #define ARM_CPU_TYPE_NAME(name) (name ARM_CPU_TYPE_SUFFIX)
 #define CPU_RESOLVING_TYPE TYPE_ARM_CPU
 
+#define TYPE_ARM_HOST_CPU "host-" TYPE_ARM_CPU
+
 #define cpu_signal_handler cpu_arm_signal_handler
 #define cpu_list arm_cpu_list
 
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index 67002efd36..bce46f3ed8 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -17,6 +17,7 @@
 #include "sysemu/hvf.h"
 #include "sysemu/hvf_int.h"
 #include "sysemu/hw_accel.h"
+#include "hvf_arm.h"
 
 #include <mach/mach_time.h>
 
@@ -44,6 +45,16 @@
 #define TMR_CTL_IMASK   (1 << 1)
 #define TMR_CTL_ISTATUS (1 << 2)
 
+typedef struct ARMHostCPUFeatures {
+    ARMISARegisters isar;
+    uint64_t features;
+    uint64_t midr;
+    uint32_t reset_sctlr;
+    const char *dtb_compatible;
+} ARMHostCPUFeatures;
+
+static ARMHostCPUFeatures arm_host_cpu_features;
+
 struct hvf_reg_match {
     int reg;
     uint64_t offset;
@@ -390,6 +401,67 @@ static uint64_t hvf_get_reg(CPUState *cpu, int rt)
     return val;
 }
 
+static void hvf_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
+{
+    ARMISARegisters host_isar;
+    const struct isar_regs {
+        int reg;
+        uint64_t *val;
+    } regs[] = {
+        { HV_SYS_REG_ID_AA64PFR0_EL1, &host_isar.id_aa64pfr0 },
+        { HV_SYS_REG_ID_AA64PFR1_EL1, &host_isar.id_aa64pfr1 },
+        { HV_SYS_REG_ID_AA64DFR0_EL1, &host_isar.id_aa64dfr0 },
+        { HV_SYS_REG_ID_AA64DFR1_EL1, &host_isar.id_aa64dfr1 },
+        { HV_SYS_REG_ID_AA64ISAR0_EL1, &host_isar.id_aa64isar0 },
+        { HV_SYS_REG_ID_AA64ISAR1_EL1, &host_isar.id_aa64isar1 },
+        { HV_SYS_REG_ID_AA64MMFR0_EL1, &host_isar.id_aa64mmfr0 },
+        { HV_SYS_REG_ID_AA64MMFR1_EL1, &host_isar.id_aa64mmfr1 },
+        { HV_SYS_REG_ID_AA64MMFR2_EL1, &host_isar.id_aa64mmfr2 },
+    };
+    hv_vcpu_t fd;
+    hv_vcpu_exit_t *exit;
+    int i;
+
+    ahcf->dtb_compatible = "arm,arm-v8";
+    ahcf->features = (1ULL << ARM_FEATURE_V8) |
+                     (1ULL << ARM_FEATURE_NEON) |
+                     (1ULL << ARM_FEATURE_AARCH64) |
+                     (1ULL << ARM_FEATURE_PMU) |
+                     (1ULL << ARM_FEATURE_GENERIC_TIMER);
+
+    /* We set up a small vcpu to extract host registers */
+
+    assert_hvf_ok(hv_vcpu_create(&fd, &exit, NULL));
+    for (i = 0; i < ARRAY_SIZE(regs); i++) {
+        assert_hvf_ok(hv_vcpu_get_sys_reg(fd, regs[i].reg, regs[i].val));
+    }
+    assert_hvf_ok(hv_vcpu_get_sys_reg(fd, HV_SYS_REG_MIDR_EL1, &ahcf->midr));
+    assert_hvf_ok(hv_vcpu_destroy(fd));
+
+    ahcf->isar = host_isar;
+    ahcf->reset_sctlr = 0x00c50078;
+
+    /* Make sure we don't advertise AArch32 support for EL0/EL1 */
+    g_assert((host_isar.id_aa64pfr0 & 0xff) == 0x11);
+}
+
+void hvf_arm_set_cpu_features_from_host(ARMCPU *cpu)
+{
+    if (!arm_host_cpu_features.dtb_compatible) {
+        if (!hvf_enabled()) {
+            cpu->host_cpu_probe_failed = true;
+            return;
+        }
+        hvf_arm_get_host_cpu_features(&arm_host_cpu_features);
+    }
+
+    cpu->dtb_compatible = arm_host_cpu_features.dtb_compatible;
+    cpu->isar = arm_host_cpu_features.isar;
+    cpu->env.features = arm_host_cpu_features.features;
+    cpu->midr = arm_host_cpu_features.midr;
+    cpu->reset_sctlr = arm_host_cpu_features.reset_sctlr;
+}
+
 void hvf_arch_vcpu_destroy(CPUState *cpu)
 {
 }
diff --git a/target/arm/hvf_arm.h b/target/arm/hvf_arm.h
new file mode 100644
index 0000000000..603074a331
--- /dev/null
+++ b/target/arm/hvf_arm.h
@@ -0,0 +1,19 @@
+/*
+ * QEMU Hypervisor.framework (HVF) support -- ARM specifics
+ *
+ * Copyright (c) 2021 Alexander Graf
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef QEMU_HVF_ARM_H
+#define QEMU_HVF_ARM_H
+
+#include "qemu/accel.h"
+#include "cpu.h"
+
+void hvf_arm_set_cpu_features_from_host(struct ARMCPU *cpu);
+
+#endif
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
index 34f8daa377..828dca4a4a 100644
--- a/target/arm/kvm_arm.h
+++ b/target/arm/kvm_arm.h
@@ -214,8 +214,6 @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
  */
 void kvm_arm_destroy_scratch_host_vcpu(int *fdarray);
 
-#define TYPE_ARM_HOST_CPU "host-" TYPE_ARM_CPU
-
 /**
  * ARMHostCPUFeatures: information about the host CPU (identified
  * by asking the host kernel)
-- 
2.30.1 (Apple Git-130)




* [PATCH v8 16/19] hvf: arm: Implement PSCI handling
  2021-05-19 20:22 [PATCH v8 00/19] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (14 preceding siblings ...)
  2021-05-19 20:22 ` [PATCH v8 15/19] hvf: arm: Implement -cpu host Alexander Graf
@ 2021-05-19 20:22 ` Alexander Graf
  2021-05-27 11:20   ` Sergio Lopez
  2021-06-15 12:54   ` Peter Maydell
  2021-05-19 20:22 ` [PATCH v8 17/19] arm: Add Hypervisor.framework build target Alexander Graf
                   ` (4 subsequent siblings)
  20 siblings, 2 replies; 66+ messages in thread
From: Alexander Graf @ 2021-05-19 20:22 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Peter Collingbourne

We need to handle PSCI calls. Most of the TCG code works for us, but we
can simplify it to handle only AArch64 mode, and we need to handle
SUSPEND differently.

This patch takes the TCG code as a template and duplicates it in HVF.

To tell the guest that we support PSCI 0.2 now, update the check in
arm_cpu_initfn() as well.
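
For reference, PSCI rides on the SMCCC calling convention: the function
ID arrives in x0, up to three arguments in x1..x3, and the result goes
back in x0. An illustrative CPU_ON request from a guest looks like:

  x0 = 0xc4000003     /* QEMU_PSCI_0_2_FN64_CPU_ON */
  x1 = target MPIDR
  x2 = entry point for the new CPU
  x3 = context ID (handed to the new CPU in its x0)

hvf_handle_psci_call() below decodes exactly this layout and routes
CPU_ON to arm_set_cpu_on().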

Signed-off-by: Alexander Graf <agraf@csgraf.de>

---

v6 -> v7:

  - This patch integrates "arm: Set PSCI to 0.2 for HVF"

v7 -> v8:

  - Do not advance for HVC, PC is already updated by hvf
  - Fix checkpatch error
---
 target/arm/cpu.c            |   4 +-
 target/arm/hvf/hvf.c        | 123 ++++++++++++++++++++++++++++++++++--
 target/arm/hvf/trace-events |   1 +
 3 files changed, 122 insertions(+), 6 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 762d8a6d26..b202d06e09 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -1079,8 +1079,8 @@ static void arm_cpu_initfn(Object *obj)
     cpu->psci_version = 1; /* By default assume PSCI v0.1 */
     cpu->kvm_target = QEMU_KVM_ARM_TARGET_NONE;
 
-    if (tcg_enabled()) {
-        cpu->psci_version = 2; /* TCG implements PSCI 0.2 */
+    if (tcg_enabled() || hvf_enabled()) {
+        cpu->psci_version = 2; /* TCG and HVF implement PSCI 0.2 */
     }
 }
 
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index bce46f3ed8..65c33e2a14 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -25,6 +25,7 @@
 #include "hw/irq.h"
 #include "qemu/main-loop.h"
 #include "sysemu/cpus.h"
+#include "arm-powerctl.h"
 #include "target/arm/cpu.h"
 #include "target/arm/internals.h"
 #include "trace/trace-target_arm_hvf.h"
@@ -45,6 +46,8 @@
 #define TMR_CTL_IMASK   (1 << 1)
 #define TMR_CTL_ISTATUS (1 << 2)
 
+static void hvf_wfi(CPUState *cpu);
+
 typedef struct ARMHostCPUFeatures {
     ARMISARegisters isar;
     uint64_t features;
@@ -553,6 +556,110 @@ static void hvf_raise_exception(CPUARMState *env, uint32_t excp,
     env->pc = addr;
 }
 
+static int hvf_psci_cpu_off(ARMCPU *arm_cpu)
+{
+    int32_t ret = 0;
+    ret = arm_set_cpu_off(arm_cpu->mp_affinity);
+    assert(ret == QEMU_ARM_POWERCTL_RET_SUCCESS);
+
+    return 0;
+}
+
+static int hvf_handle_psci_call(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+    uint64_t param[4] = {
+        env->xregs[0],
+        env->xregs[1],
+        env->xregs[2],
+        env->xregs[3]
+    };
+    uint64_t context_id, mpidr;
+    bool target_aarch64 = true;
+    CPUState *target_cpu_state;
+    ARMCPU *target_cpu;
+    target_ulong entry;
+    int target_el = 1;
+    int32_t ret = 0;
+
+    trace_hvf_psci_call(param[0], param[1], param[2], param[3],
+                        arm_cpu->mp_affinity);
+
+    switch (param[0]) {
+    case QEMU_PSCI_0_2_FN_PSCI_VERSION:
+        ret = QEMU_PSCI_0_2_RET_VERSION_0_2;
+        break;
+    case QEMU_PSCI_0_2_FN_MIGRATE_INFO_TYPE:
+        ret = QEMU_PSCI_0_2_RET_TOS_MIGRATION_NOT_REQUIRED; /* No trusted OS */
+        break;
+    case QEMU_PSCI_0_2_FN_AFFINITY_INFO:
+    case QEMU_PSCI_0_2_FN64_AFFINITY_INFO:
+        mpidr = param[1];
+
+        switch (param[2]) {
+        case 0:
+            target_cpu_state = arm_get_cpu_by_id(mpidr);
+            if (!target_cpu_state) {
+                ret = QEMU_PSCI_RET_INVALID_PARAMS;
+                break;
+            }
+            target_cpu = ARM_CPU(target_cpu_state);
+
+            ret = target_cpu->power_state;
+            break;
+        default:
+            /* Everything above affinity level 0 is always on. */
+            ret = 0;
+        }
+        break;
+    case QEMU_PSCI_0_2_FN_SYSTEM_RESET:
+        qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET);
+        /* QEMU reset and shutdown are async requests, but PSCI
+         * mandates that we never return from the reset/shutdown
+         * call, so power the CPU off now so it doesn't execute
+         * anything further.
+         */
+        return hvf_psci_cpu_off(arm_cpu);
+    case QEMU_PSCI_0_2_FN_SYSTEM_OFF:
+        qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
+        return hvf_psci_cpu_off(arm_cpu);
+    case QEMU_PSCI_0_1_FN_CPU_ON:
+    case QEMU_PSCI_0_2_FN_CPU_ON:
+    case QEMU_PSCI_0_2_FN64_CPU_ON:
+        mpidr = param[1];
+        entry = param[2];
+        context_id = param[3];
+        ret = arm_set_cpu_on(mpidr, entry, context_id,
+                             target_el, target_aarch64);
+        break;
+    case QEMU_PSCI_0_1_FN_CPU_OFF:
+    case QEMU_PSCI_0_2_FN_CPU_OFF:
+        return hvf_psci_cpu_off(arm_cpu);
+    case QEMU_PSCI_0_1_FN_CPU_SUSPEND:
+    case QEMU_PSCI_0_2_FN_CPU_SUSPEND:
+    case QEMU_PSCI_0_2_FN64_CPU_SUSPEND:
+        /* Affinity levels are not supported in QEMU */
+        if (param[1] & 0xfffe0000) {
+            ret = QEMU_PSCI_RET_INVALID_PARAMS;
+            break;
+        }
+        /* Powerdown is not supported, we always go into WFI */
+        env->xregs[0] = 0;
+        hvf_wfi(cpu);
+        break;
+    case QEMU_PSCI_0_1_FN_MIGRATE:
+    case QEMU_PSCI_0_2_FN_MIGRATE:
+        ret = QEMU_PSCI_RET_NOT_SUPPORTED;
+        break;
+    default:
+        return 1;
+    }
+
+    env->xregs[0] = ret;
+    return 0;
+}
+
 static uint64_t hvf_sysreg_read(CPUState *cpu, uint32_t reg)
 {
     ARMCPU *arm_cpu = ARM_CPU(cpu);
@@ -716,6 +823,8 @@ int hvf_vcpu_exec(CPUState *cpu)
     }
 
     if (cpu->halted) {
+        /* On unhalt, we usually have CPU state changes. Prepare for them. */
+        cpu_synchronize_state(cpu);
         return EXCP_HLT;
     }
 
@@ -813,13 +922,19 @@ int hvf_vcpu_exec(CPUState *cpu)
         break;
     case EC_AA64_HVC:
         cpu_synchronize_state(cpu);
-        trace_hvf_unknown_hvf(env->xregs[0]);
-        hvf_raise_exception(env, EXCP_UDEF, syn_uncategorized());
+        if (hvf_handle_psci_call(cpu)) {
+            trace_hvf_unknown_hvf(env->xregs[0]);
+            hvf_raise_exception(env, EXCP_UDEF, syn_uncategorized());
+        }
         break;
     case EC_AA64_SMC:
         cpu_synchronize_state(cpu);
-        trace_hvf_unknown_smc(env->xregs[0]);
-        hvf_raise_exception(env, EXCP_UDEF, syn_uncategorized());
+        if (!hvf_handle_psci_call(cpu)) {
+            advance_pc = true;
+        } else {
+            trace_hvf_unknown_smc(env->xregs[0]);
+            hvf_raise_exception(env, EXCP_UDEF, syn_uncategorized());
+        }
         break;
     default:
         cpu_synchronize_state(cpu);
diff --git a/target/arm/hvf/trace-events b/target/arm/hvf/trace-events
index 49a547dcf6..278b88cc62 100644
--- a/target/arm/hvf/trace-events
+++ b/target/arm/hvf/trace-events
@@ -8,3 +8,4 @@ hvf_sysreg_write(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_
 hvf_unknown_hvf(uint64_t x0) "unknown HVC! 0x%016"PRIx64
 hvf_unknown_smc(uint64_t x0) "unknown SMC! 0x%016"PRIx64
 hvf_exit(uint64_t syndrome, uint32_t ec, uint64_t pc) "exit: 0x%"PRIx64" [ec=0x%x pc=0x%"PRIx64"]"
+hvf_psci_call(uint64_t x0, uint64_t x1, uint64_t x2, uint64_t x3, uint32_t cpuid) "PSCI Call x0=0x%016"PRIx64" x1=0x%016"PRIx64" x2=0x%016"PRIx64" x3=0x%016"PRIx64" cpu=0x%x"
-- 
2.30.1 (Apple Git-130)




* [PATCH v8 17/19] arm: Add Hypervisor.framework build target
  2021-05-19 20:22 [PATCH v8 00/19] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (15 preceding siblings ...)
  2021-05-19 20:22 ` [PATCH v8 16/19] hvf: arm: Implement PSCI handling Alexander Graf
@ 2021-05-19 20:22 ` Alexander Graf
  2021-05-27 11:20   ` Sergio Lopez
  2021-06-15 10:59   ` Peter Maydell
  2021-05-19 20:22 ` [PATCH v8 18/19] arm: Enable Windows 10 trusted SMCCC boot call Alexander Graf
                   ` (3 subsequent siblings)
  20 siblings, 2 replies; 66+ messages in thread
From: Alexander Graf @ 2021-05-19 20:22 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Peter Collingbourne

Now that all the logic we need to handle Hypervisor.framework on Apple
Silicon systems is in place, let's add CONFIG_HVF for aarch64 as well so
that we can build it.
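
For example, on an Apple Silicon host (illustrative; --enable-hvf only
turns the autodetection into a hard requirement, HVF is also picked up
automatically when the framework is available):

  ./configure --target-list=aarch64-softmmu --enable-hvf
  make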

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
Tested-by: Roman Bolshakov <r.bolshakov@yadro.com> (x86 only)

---

v1 -> v2:

  - Fix build on 32bit arm

v3 -> v4:

  - Remove i386-softmmu target

v6 -> v7:

  - Simplify HVF matching logic in meson build file
---
 meson.build                | 7 +++++++
 target/arm/hvf/meson.build | 3 +++
 target/arm/meson.build     | 2 ++
 3 files changed, 12 insertions(+)
 create mode 100644 target/arm/hvf/meson.build

diff --git a/meson.build b/meson.build
index 698f4e9356..d28bd068ff 100644
--- a/meson.build
+++ b/meson.build
@@ -77,6 +77,13 @@ else
 endif
 
 accelerator_targets = { 'CONFIG_KVM': kvm_targets }
+
+if cpu in ['aarch64']
+  accelerator_targets += {
+    'CONFIG_HVF': ['aarch64-softmmu']
+  }
+endif
+
 if cpu in ['x86', 'x86_64', 'arm', 'aarch64']
   # i368 emulator provides xenpv machine type for multiple architectures
   accelerator_targets += {
diff --git a/target/arm/hvf/meson.build b/target/arm/hvf/meson.build
new file mode 100644
index 0000000000..855e6cce5a
--- /dev/null
+++ b/target/arm/hvf/meson.build
@@ -0,0 +1,3 @@
+arm_softmmu_ss.add(when: [hvf, 'CONFIG_HVF'], if_true: files(
+  'hvf.c',
+))
diff --git a/target/arm/meson.build b/target/arm/meson.build
index 5bfaf43b50..48004bf0e6 100644
--- a/target/arm/meson.build
+++ b/target/arm/meson.build
@@ -57,5 +57,7 @@ arm_softmmu_ss.add(files(
   'psci.c',
 ))
 
+subdir('hvf')
+
 target_arch += {'arm': arm_ss}
 target_softmmu_arch += {'arm': arm_softmmu_ss}
-- 
2.30.1 (Apple Git-130)




* [PATCH v8 18/19] arm: Enable Windows 10 trusted SMCCC boot call
  2021-05-19 20:22 [PATCH v8 00/19] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (16 preceding siblings ...)
  2021-05-19 20:22 ` [PATCH v8 17/19] arm: Add Hypervisor.framework build target Alexander Graf
@ 2021-05-19 20:22 ` Alexander Graf
  2021-05-27 11:21   ` Sergio Lopez
  2021-06-15 11:02   ` Peter Maydell
  2021-05-19 20:22 ` [PATCH v8 19/19] hvf: arm: Handle Windows 10 SMC call Alexander Graf
                   ` (2 subsequent siblings)
  20 siblings, 2 replies; 66+ messages in thread
From: Alexander Graf @ 2021-05-19 20:22 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Peter Collingbourne

Windows 10 unconditionally makes an SMCCC call via SMC on boot. The call lives
in the trusted application call number space, but its purpose is unknown.

In our current SMC implementation, we inject a UDEF for unknown SMC calls,
including this one. However, Windows breaks on boot when we do this. Instead,
let's return an error code.

With this and -M virt,virtualization=on I can successfully boot the current
Windows 10 Insider Preview in TCG.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
---
 target/arm/kvm-consts.h | 2 ++
 target/arm/psci.c       | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/target/arm/kvm-consts.h b/target/arm/kvm-consts.h
index 580f1c1fee..4b64f98117 100644
--- a/target/arm/kvm-consts.h
+++ b/target/arm/kvm-consts.h
@@ -85,6 +85,8 @@ MISMATCH_CHECK(QEMU_PSCI_0_2_FN64_CPU_SUSPEND, PSCI_0_2_FN64_CPU_SUSPEND);
 MISMATCH_CHECK(QEMU_PSCI_0_2_FN64_CPU_ON, PSCI_0_2_FN64_CPU_ON);
 MISMATCH_CHECK(QEMU_PSCI_0_2_FN64_MIGRATE, PSCI_0_2_FN64_MIGRATE);
 
+#define QEMU_SMCCC_TC_WINDOWS10_BOOT 0xc3000001
+
 /* PSCI v0.2 return values used by TCG emulation of PSCI */
 
 /* No Trusted OS migration to worry about when offlining CPUs */
diff --git a/target/arm/psci.c b/target/arm/psci.c
index 6709e28013..4d11dd59c4 100644
--- a/target/arm/psci.c
+++ b/target/arm/psci.c
@@ -69,6 +69,7 @@ bool arm_is_psci_call(ARMCPU *cpu, int excp_type)
     case QEMU_PSCI_0_2_FN64_CPU_SUSPEND:
     case QEMU_PSCI_0_1_FN_MIGRATE:
     case QEMU_PSCI_0_2_FN_MIGRATE:
+    case QEMU_SMCCC_TC_WINDOWS10_BOOT:
         return true;
     default:
         return false;
@@ -194,6 +195,7 @@ void arm_handle_psci_call(ARMCPU *cpu)
         break;
     case QEMU_PSCI_0_1_FN_MIGRATE:
     case QEMU_PSCI_0_2_FN_MIGRATE:
+    case QEMU_SMCCC_TC_WINDOWS10_BOOT:
         ret = QEMU_PSCI_RET_NOT_SUPPORTED;
         break;
     default:
-- 
2.30.1 (Apple Git-130)




* [PATCH v8 19/19] hvf: arm: Handle Windows 10 SMC call
  2021-05-19 20:22 [PATCH v8 00/19] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (17 preceding siblings ...)
  2021-05-19 20:22 ` [PATCH v8 18/19] arm: Enable Windows 10 trusted SMCCC boot call Alexander Graf
@ 2021-05-19 20:22 ` Alexander Graf
  2021-05-27 11:22   ` Sergio Lopez
  2021-06-15  9:31   ` Peter Maydell
  2021-05-19 20:45 ` [PATCH v8 00/19] hvf: Implement Apple Silicon Support no-reply
  2021-06-03 13:43 ` Peter Maydell
  20 siblings, 2 replies; 66+ messages in thread
From: Alexander Graf @ 2021-05-19 20:22 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Peter Collingbourne

Windows 10 unconditionally makes an SMCCC call via SMC on boot. The call lives
in the trusted application call number space, but its purpose is unknown.

In our current SMC implementation, we inject a UDEF for unknown SMC calls,
including this one. However, Windows breaks on boot when we do this. Instead,
let's return an error code.

With this patch applied I can successfully boot the current Windows 10
Insider Preview in HVF.

Signed-off-by: Alexander Graf <agraf@csgraf.de>

---

v7 -> v8:

  - fix checkpatch
---
 target/arm/hvf/hvf.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index 65c33e2a14..be670af578 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -931,6 +931,10 @@ int hvf_vcpu_exec(CPUState *cpu)
         cpu_synchronize_state(cpu);
         if (!hvf_handle_psci_call(cpu)) {
             advance_pc = true;
+        } else if (env->xregs[0] == QEMU_SMCCC_TC_WINDOWS10_BOOT) {
+            /* This special SMC is called by Windows 10 on boot. Return error */
+            env->xregs[0] = -1;
+            advance_pc = true;
         } else {
             trace_hvf_unknown_smc(env->xregs[0]);
             hvf_raise_exception(env, EXCP_UDEF, syn_uncategorized());
-- 
2.30.1 (Apple Git-130)




* Re: [PATCH v8 00/19] hvf: Implement Apple Silicon Support
  2021-05-19 20:22 [PATCH v8 00/19] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (18 preceding siblings ...)
  2021-05-19 20:22 ` [PATCH v8 19/19] hvf: arm: Handle Windows 10 SMC call Alexander Graf
@ 2021-05-19 20:45 ` no-reply
  2021-06-03 13:43 ` Peter Maydell
  20 siblings, 0 replies; 66+ messages in thread
From: no-reply @ 2021-05-19 20:45 UTC (permalink / raw)
  To: agraf
  Cc: peter.maydell, ehabkost, pcc, richard.henderson, qemu-devel,
	dirty, r.bolshakov, qemu-arm, lfy, pbonzini, philmd

Patchew URL: https://patchew.org/QEMU/20210519202253.76782-1-agraf@csgraf.de/



Hi,

This series seems to have some coding style problems. See output below for
more information:

Type: series
Message-id: 20210519202253.76782-1-agraf@csgraf.de
Subject: [PATCH v8 00/19] hvf: Implement Apple Silicon Support

=== TEST SCRIPT BEGIN ===
#!/bin/bash
git rev-parse base > /dev/null || exit 0
git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram
./scripts/checkpatch.pl --mailback base..
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
 - [tag update]      patchew/20210519165002.1195745-1-titusr@google.com -> patchew/20210519165002.1195745-1-titusr@google.com
 * [new tag]         patchew/20210519202253.76782-1-agraf@csgraf.de -> patchew/20210519202253.76782-1-agraf@csgraf.de
Switched to a new branch 'test'
a91491a hvf: arm: Handle Windows 10 SMC call
3dc7553 arm: Enable Windows 10 trusted SMCCC boot call
65e8115 arm: Add Hypervisor.framework build target
1cfa1d0 hvf: arm: Implement PSCI handling
63e0f6f hvf: arm: Implement -cpu host
3cf833e arm/hvf: Add a WFI handler
ca30893 hvf: Add Apple Silicon support
8a591bd hvf: Simplify post reset/init/loadvm hooks
0cd15e4 hvf: Introduce hvf vcpu struct
086f42d hvf: Remove hvf-accel-ops.h
63ca75b hvf: Make synchronize functions static
6a10390 hvf: Use cpu_synchronize_state()
b06558a hvf: Split out common code on vcpu init and destroy
de8ace6 hvf: Remove use of hv_uvaddr_t and hv_gpaddr_t
f4b7940 hvf: Make hvf_set_phys_mem() static
83983fc hvf: Move hvf internal definitions into common header
e011dbb hvf: Move cpu functions into common directory
ac7d522 hvf: Move vcpu thread functions into common directory
72aa7ae hvf: Move assert_hvf_ok() into common directory

=== OUTPUT BEGIN ===
1/19 Checking commit 72aa7aeb05e7 (hvf: Move assert_hvf_ok() into common directory)
2/19 Checking commit ac7d522e0cae (hvf: Move vcpu thread functions into common directory)
WARNING: added, moved or deleted file(s), does MAINTAINERS need updating?
#17: 
 {target/i386 => accel}/hvf/hvf-accel-ops.c | 0

total: 0 errors, 1 warnings, 21 lines checked

Patch 2/19 has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
3/19 Checking commit e011dbb4cc58 (hvf: Move cpu functions into common directory)
4/19 Checking commit 83983fc96e4f (hvf: Move hvf internal definitions into common header)
5/19 Checking commit f4b79405bcc5 (hvf: Make hvf_set_phys_mem() static)
6/19 Checking commit de8ace6033f1 (hvf: Remove use of hv_uvaddr_t and hv_gpaddr_t)
7/19 Checking commit b06558a826a4 (hvf: Split out common code on vcpu init and destroy)
8/19 Checking commit 6a103903f401 (hvf: Use cpu_synchronize_state())
9/19 Checking commit 63ca75b7c904 (hvf: Make synchronize functions static)
10/19 Checking commit 086f42de24b5 (hvf: Remove hvf-accel-ops.h)
WARNING: added, moved or deleted file(s), does MAINTAINERS need updating?
#33: 
deleted file mode 100644

total: 0 errors, 1 warnings, 23 lines checked

Patch 10/19 has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
11/19 Checking commit 0cd15e48075a (hvf: Introduce hvf vcpu struct)
WARNING: line over 80 characters
#154: FILE: target/i386/hvf/hvf.c:263:
+    wvmcs(cpu->hvf->fd, VMCS_ENTRY_CTLS, cap2ctrl(hvf_state->hvf_caps->vmx_cap_entry,

ERROR: "(foo*)" should be "(foo *)"
#767: FILE: target/i386/hvf/x86hvf.c:83:
+    if (hv_vcpu_write_fpstate(cpu_state->hvf->fd, (void*)xsave, 4096)) {

ERROR: "(foo*)" should be "(foo *)"
#848: FILE: target/i386/hvf/x86hvf.c:165:
+    if (hv_vcpu_read_fpstate(cpu_state->hvf->fd, (void*)xsave, 4096)) {

total: 2 errors, 1 warnings, 1001 lines checked

Patch 11/19 has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

12/19 Checking commit 8a591bd4b86b (hvf: Simplify post reset/init/loadvm hooks)
13/19 Checking commit ca3089321cb9 (hvf: Add Apple Silicon support)
WARNING: architecture specific defines should be avoided
#56: FILE: accel/hvf/hvf-accel-ops.c:63:
+#ifdef __aarch64__

WARNING: architecture specific defines should be avoided
#67: FILE: accel/hvf/hvf-accel-ops.c:382:
+#ifdef __aarch64__

WARNING: architecture specific defines should be avoided
#101: FILE: include/sysemu/hvf_int.h:14:
+#ifdef __aarch64__

total: 0 errors, 3 warnings, 796 lines checked

Patch 13/19 has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
14/19 Checking commit 3cf833ede182 (arm/hvf: Add a WFI handler)
15/19 Checking commit 63e0f6fb8c65 (hvf: arm: Implement -cpu host)
WARNING: added, moved or deleted file(s), does MAINTAINERS need updating?
#179: 
new file mode 100644

total: 0 errors, 1 warnings, 160 lines checked

Patch 15/19 has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
16/19 Checking commit 1cfa1d082968 (hvf: arm: Implement PSCI handling)
WARNING: Block comments use a leading /* on a separate line
#124: FILE: target/arm/hvf/hvf.c:618:
+        /* QEMU reset and shutdown are async requests, but PSCI

total: 0 errors, 1 warnings, 170 lines checked

Patch 16/19 has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
17/19 Checking commit 65e81151cc57 (arm: Add Hypervisor.framework build target)
WARNING: added, moved or deleted file(s), does MAINTAINERS need updating?
#41: 
new file mode 100644

total: 0 errors, 1 warnings, 23 lines checked

Patch 17/19 has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
18/19 Checking commit 3dc7553180fb (arm: Enable Windows 10 trusted SMCCC boot call)
19/19 Checking commit a91491a61ffc (hvf: arm: Handle Windows 10 SMC call)
=== OUTPUT END ===

Test command exited with code: 1


The full log is available at
http://patchew.org/logs/20210519202253.76782-1-agraf@csgraf.de/testing.checkpatch/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com


* Re: [PATCH v8 01/19] hvf: Move assert_hvf_ok() into common directory
  2021-05-19 20:22 ` [PATCH v8 01/19] hvf: Move assert_hvf_ok() into common directory Alexander Graf
@ 2021-05-27 10:00   ` Sergio Lopez
  0 siblings, 0 replies; 66+ messages in thread
From: Sergio Lopez @ 2021-05-27 10:00 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Peter Maydell, Eduardo Habkost, Peter Collingbourne,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

On Wed, May 19, 2021 at 10:22:35PM +0200, Alexander Graf wrote:
> Until now, Hypervisor.framework has only been available on x86_64 systems.
> With Apple Silicon shipping now, it extends its reach to aarch64. To
> prepare for support for multiple architectures, let's start moving common
> code out into its own accel directory.
> 
> This patch moves assert_hvf_ok() and introduces generic build infrastructure.
> 
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> ---
>  MAINTAINERS              |  8 +++++++
>  accel/hvf/hvf-all.c      | 47 ++++++++++++++++++++++++++++++++++++++++
>  accel/hvf/meson.build    |  6 +++++
>  accel/meson.build        |  1 +
>  include/sysemu/hvf_int.h | 18 +++++++++++++++
>  target/i386/hvf/hvf.c    | 33 +---------------------------
>  6 files changed, 81 insertions(+), 32 deletions(-)
>  create mode 100644 accel/hvf/hvf-all.c
>  create mode 100644 accel/hvf/meson.build
>  create mode 100644 include/sysemu/hvf_int.h

Reviewed-by: Sergio Lopez <slp@redhat.com>


* Re: [PATCH v8 02/19] hvf: Move vcpu thread functions into common directory
  2021-05-19 20:22 ` [PATCH v8 02/19] hvf: Move vcpu thread functions " Alexander Graf
@ 2021-05-27 10:01   ` Sergio Lopez
  0 siblings, 0 replies; 66+ messages in thread
From: Sergio Lopez @ 2021-05-27 10:01 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Peter Maydell, Eduardo Habkost, Peter Collingbourne,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

On Wed, May 19, 2021 at 10:22:36PM +0200, Alexander Graf wrote:
> Until now, Hypervisor.framework has only been available on x86_64 systems.
> With Apple Silicon shipping now, it extends its reach to aarch64. To
> prepare for support for multiple architectures, let's start moving common
> code out into its own accel directory.
> 
> This patch moves the vCPU thread loop over.
> 
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> ---
>  {target/i386 => accel}/hvf/hvf-accel-ops.c | 0
>  {target/i386 => accel}/hvf/hvf-accel-ops.h | 0
>  accel/hvf/meson.build                      | 1 +
>  target/i386/hvf/meson.build                | 1 -
>  target/i386/hvf/x86hvf.c                   | 2 +-
>  5 files changed, 2 insertions(+), 2 deletions(-)
>  rename {target/i386 => accel}/hvf/hvf-accel-ops.c (100%)
>  rename {target/i386 => accel}/hvf/hvf-accel-ops.h (100%)

Reviewed-by: Sergio Lopez <slp@redhat.com>


* Re: [PATCH v8 03/19] hvf: Move cpu functions into common directory
  2021-05-19 20:22 ` [PATCH v8 03/19] hvf: Move cpu " Alexander Graf
@ 2021-05-27 10:02   ` Sergio Lopez
  0 siblings, 0 replies; 66+ messages in thread
From: Sergio Lopez @ 2021-05-27 10:02 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Peter Maydell, Eduardo Habkost, Peter Collingbourne,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

On Wed, May 19, 2021 at 10:22:37PM +0200, Alexander Graf wrote:
> Until now, Hypervisor.framework has only been available on x86_64 systems.
> With Apple Silicon shipping now, it extends its reach to aarch64. To
> prepare for support for multiple architectures, let's start moving common
> code out into its own accel directory.
> 
> This patch moves CPU and memory operations over. While at it, make sure
> the code is consumable on non-i386 systems.
> 
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> ---
>  accel/hvf/hvf-accel-ops.c  | 308 ++++++++++++++++++++++++++++++++++++-
>  include/sysemu/hvf_int.h   |   4 +
>  target/i386/hvf/hvf-i386.h |   2 -
>  target/i386/hvf/hvf.c      | 302 ------------------------------------
>  target/i386/hvf/x86hvf.h   |   2 -
>  5 files changed, 311 insertions(+), 307 deletions(-)

Reviewed-by: Sergio Lopez <slp@redhat.com>


* Re: [PATCH v8 04/19] hvf: Move hvf internal definitions into common header
  2021-05-19 20:22 ` [PATCH v8 04/19] hvf: Move hvf internal definitions into common header Alexander Graf
@ 2021-05-27 10:04   ` Sergio Lopez
  0 siblings, 0 replies; 66+ messages in thread
From: Sergio Lopez @ 2021-05-27 10:04 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Peter Maydell, Eduardo Habkost, Peter Collingbourne,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

On Wed, May 19, 2021 at 10:22:38PM +0200, Alexander Graf wrote:
> Until now, Hypervisor.framework has only been available on x86_64 systems.
> With Apple Silicon shipping now, it extends its reach to aarch64. To
> prepare for support for multiple architectures, let's start moving common
> code out into its own accel directory.
> 
> This patch moves a few internal struct and constant defines over.
> 
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> ---
>  include/sysemu/hvf_int.h   | 30 ++++++++++++++++++++++++++++++
>  target/i386/hvf/hvf-i386.h | 31 +------------------------------
>  2 files changed, 31 insertions(+), 30 deletions(-)

Reviewed-by: Sergio Lopez <slp@redhat.com>


* Re: [PATCH v8 05/19] hvf: Make hvf_set_phys_mem() static
  2021-05-19 20:22 ` [PATCH v8 05/19] hvf: Make hvf_set_phys_mem() static Alexander Graf
@ 2021-05-27 10:06   ` Sergio Lopez
  0 siblings, 0 replies; 66+ messages in thread
From: Sergio Lopez @ 2021-05-27 10:06 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Peter Maydell, Eduardo Habkost, Peter Collingbourne,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

On Wed, May 19, 2021 at 10:22:39PM +0200, Alexander Graf wrote:
> The hvf_set_phys_mem() function is only called within the same file.
> Make it static.
> 
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> ---
>  accel/hvf/hvf-accel-ops.c | 2 +-
>  include/sysemu/hvf_int.h  | 1 -
>  2 files changed, 1 insertion(+), 2 deletions(-)

Reviewed-by: Sergio Lopez <slp@redhat.com>


* Re: [PATCH v8 06/19] hvf: Remove use of hv_uvaddr_t and hv_gpaddr_t
  2021-05-19 20:22 ` [PATCH v8 06/19] hvf: Remove use of hv_uvaddr_t and hv_gpaddr_t Alexander Graf
@ 2021-05-27 10:07   ` Sergio Lopez
  0 siblings, 0 replies; 66+ messages in thread
From: Sergio Lopez @ 2021-05-27 10:07 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Peter Maydell, Eduardo Habkost, Peter Collingbourne,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

On Wed, May 19, 2021 at 10:22:40PM +0200, Alexander Graf wrote:
> The ARM version of Hypervisor.framework no longer defines these two
> types, so let's just revert to standard ones.
> 
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> ---
>  accel/hvf/hvf-accel-ops.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)

Reviewed-by: Sergio Lopez <slp@redhat.com>


* Re: [PATCH v8 07/19] hvf: Split out common code on vcpu init and destroy
  2021-05-19 20:22 ` [PATCH v8 07/19] hvf: Split out common code on vcpu init and destroy Alexander Graf
@ 2021-05-27 10:09   ` Sergio Lopez
  0 siblings, 0 replies; 66+ messages in thread
From: Sergio Lopez @ 2021-05-27 10:09 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Peter Maydell, Eduardo Habkost, Peter Collingbourne,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

On Wed, May 19, 2021 at 10:22:41PM +0200, Alexander Graf wrote:
> Until now, Hypervisor.framework has only been available on x86_64 systems.
> With Apple Silicon shipping now, it extends its reach to aarch64. To
> prepare for support for multiple architectures, let's start moving common
> code out into its own accel directory.
> 
> This patch splits the vcpu init and destroy functions into a generic and
> an architecture specific portion. This also allows us to move the generic
> functions into the generic hvf code, removing exported functions.
> 
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> ---
>  accel/hvf/hvf-accel-ops.c | 30 ++++++++++++++++++++++++++++++
>  accel/hvf/hvf-accel-ops.h |  2 --
>  include/sysemu/hvf_int.h  |  2 ++
>  target/i386/hvf/hvf.c     | 23 ++---------------------
>  4 files changed, 34 insertions(+), 23 deletions(-)

Reviewed-by: Sergio Lopez <slp@redhat.com>


* Re: [PATCH v8 08/19] hvf: Use cpu_synchronize_state()
  2021-05-19 20:22 ` [PATCH v8 08/19] hvf: Use cpu_synchronize_state() Alexander Graf
@ 2021-05-27 10:15   ` Sergio Lopez
  0 siblings, 0 replies; 66+ messages in thread
From: Sergio Lopez @ 2021-05-27 10:15 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Peter Maydell, Eduardo Habkost, Peter Collingbourne,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

On Wed, May 19, 2021 at 10:22:42PM +0200, Alexander Graf wrote:
> There is no reason to call the hvf specific hvf_cpu_synchronize_state()
> when we can just use the generic cpu_synchronize_state() instead. This
> allows us to have less dependency on internal function definitions and
> allows us to make hvf_cpu_synchronize_state() static.
> 
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> ---
>  accel/hvf/hvf-accel-ops.c | 2 +-
>  accel/hvf/hvf-accel-ops.h | 1 -
>  target/i386/hvf/x86hvf.c  | 9 ++++-----
>  3 files changed, 5 insertions(+), 7 deletions(-)

Reviewed-by: Sergio Lopez <slp@redhat.com>


* Re: [PATCH v8 09/19] hvf: Make synchronize functions static
  2021-05-19 20:22 ` [PATCH v8 09/19] hvf: Make synchronize functions static Alexander Graf
@ 2021-05-27 10:15   ` Sergio Lopez
  0 siblings, 0 replies; 66+ messages in thread
From: Sergio Lopez @ 2021-05-27 10:15 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Peter Maydell, Eduardo Habkost, Peter Collingbourne,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

On Wed, May 19, 2021 at 10:22:43PM +0200, Alexander Graf wrote:
> The hvf accel synchronize functions are only used as input for local
> callback functions, so we can make them static.
> 
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> ---
>  accel/hvf/hvf-accel-ops.c | 6 +++---
>  accel/hvf/hvf-accel-ops.h | 3 ---
>  2 files changed, 3 insertions(+), 6 deletions(-)

Reviewed-by: Sergio Lopez <slp@redhat.com>


* Re: [PATCH v8 10/19] hvf: Remove hvf-accel-ops.h
  2021-05-19 20:22 ` [PATCH v8 10/19] hvf: Remove hvf-accel-ops.h Alexander Graf
@ 2021-05-27 10:16   ` Sergio Lopez
  0 siblings, 0 replies; 66+ messages in thread
From: Sergio Lopez @ 2021-05-27 10:16 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Peter Maydell, Eduardo Habkost, Peter Collingbourne,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

On Wed, May 19, 2021 at 10:22:44PM +0200, Alexander Graf wrote:
> We can move the definition of hvf_vcpu_exec() into our internal
> hvf header, obsoleting the need for hvf-accel-ops.h.
> 
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> ---
>  accel/hvf/hvf-accel-ops.c |  2 --
>  accel/hvf/hvf-accel-ops.h | 17 -----------------
>  include/sysemu/hvf_int.h  |  1 +
>  target/i386/hvf/hvf.c     |  2 --
>  4 files changed, 1 insertion(+), 21 deletions(-)
>  delete mode 100644 accel/hvf/hvf-accel-ops.h

Reviewed-by: Sergio Lopez <slp@redhat.com>


* Re: [PATCH v8 11/19] hvf: Introduce hvf vcpu struct
  2021-05-19 20:22 ` [PATCH v8 11/19] hvf: Introduce hvf vcpu struct Alexander Graf
@ 2021-05-27 10:18   ` Sergio Lopez
  0 siblings, 0 replies; 66+ messages in thread
From: Sergio Lopez @ 2021-05-27 10:18 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Peter Maydell, Eduardo Habkost, Peter Collingbourne,
	Alex Bennée, Richard Henderson, QEMU Developers,
	Cameron Esfahani, Roman Bolshakov, qemu-arm, Frank Yang,
	Paolo Bonzini, Philippe Mathieu-Daudé

On Wed, May 19, 2021 at 10:22:45PM +0200, Alexander Graf wrote:
> We will need more than a single field for hvf going forward. To keep
> the global vcpu struct uncluttered, let's allocate a special hvf vcpu
> struct, similar to how hax does it.
> 
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
> Tested-by: Roman Bolshakov <r.bolshakov@yadro.com>
> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
> 
> ---
> 
> v4 -> v5:
> 
>   - Use g_free() on destroy
> ---
>  accel/hvf/hvf-accel-ops.c   |   8 +-
>  include/hw/core/cpu.h       |   3 +-
>  include/sysemu/hvf_int.h    |   4 +
>  target/i386/hvf/hvf.c       | 104 +++++++++---------
>  target/i386/hvf/vmx.h       |  24 +++--
>  target/i386/hvf/x86.c       |  28 ++---
>  target/i386/hvf/x86_descr.c |  26 ++---
>  target/i386/hvf/x86_emu.c   |  62 +++++------
>  target/i386/hvf/x86_mmu.c   |   4 +-
>  target/i386/hvf/x86_task.c  |  12 +--
>  target/i386/hvf/x86hvf.c    | 210 ++++++++++++++++++------------------
>  11 files changed, 248 insertions(+), 237 deletions(-)

Reviewed-by: Sergio Lopez <slp@redhat.com>


* Re: [PATCH v8 12/19] hvf: Simplify post reset/init/loadvm hooks
  2021-05-19 20:22 ` [PATCH v8 12/19] hvf: Simplify post reset/init/loadvm hooks Alexander Graf
@ 2021-05-27 10:20   ` Sergio Lopez
  0 siblings, 0 replies; 66+ messages in thread
From: Sergio Lopez @ 2021-05-27 10:20 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Peter Maydell, Eduardo Habkost, Peter Collingbourne,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

On Wed, May 19, 2021 at 10:22:46PM +0200, Alexander Graf wrote:
> The hooks we have that call us after reset, init and loadvm really all
> just want to say "The reference of all register state is in the QEMU
> vcpu struct, please push it".
> 
> We already have a working pushing mechanism though called cpu->vcpu_dirty,
> so we can just reuse that for all of the above, syncing state properly the
> next time we actually execute a vCPU.
> 
> This fixes PSCI resets on ARM, as they modify CPU state even after the
> post init call has completed, but before we execute the vCPU again.
> 
> To also make the scheme work for x86, we have to make sure we don't
> move stale eflags into our env when the vcpu state is dirty.
> 
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
> Tested-by: Roman Bolshakov <r.bolshakov@yadro.com>
> ---
>  accel/hvf/hvf-accel-ops.c | 27 +++++++--------------------
>  target/i386/hvf/x86hvf.c  |  5 ++++-
>  2 files changed, 11 insertions(+), 21 deletions(-)

Reviewed-by: Sergio Lopez <slp@redhat.com>


* Re: [PATCH v8 14/19] arm/hvf: Add a WFI handler
  2021-05-19 20:22 ` [PATCH v8 14/19] arm/hvf: Add a WFI handler Alexander Graf
@ 2021-05-27 10:53   ` Sergio Lopez
  2021-06-15 10:38   ` Peter Maydell
  1 sibling, 0 replies; 66+ messages in thread
From: Sergio Lopez @ 2021-05-27 10:53 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Peter Maydell, Eduardo Habkost, Peter Collingbourne,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

On Wed, May 19, 2021 at 10:22:48PM +0200, Alexander Graf wrote:
> From: Peter Collingbourne <pcc@google.com>
> 
> Sleep on WFI until the VTIMER is due but allow ourselves to be woken
> up on IPI.
> 
> In this implementation IPI is blocked on the CPU thread at startup and
> pselect() is used to atomically unblock the signal and begin sleeping.
> The signal is sent unconditionally so there's no need to worry about
> races between actually sleeping and the "we think we're sleeping"
> state. It may lead to an extra wakeup but that's better than missing
> it entirely.
> 
> Signed-off-by: Peter Collingbourne <pcc@google.com>
> [agraf: Remove unused 'set' variable, always advance PC on WFX trap]
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
> 
> ---
> 
> v6 -> v7:
> 
>   - Move WFI into function
>   - Improve comment wording
> ---
>  accel/hvf/hvf-accel-ops.c |  5 ++-
>  include/sysemu/hvf_int.h  |  1 +
>  target/arm/hvf/hvf.c      | 68 +++++++++++++++++++++++++++++++++++++++
>  3 files changed, 71 insertions(+), 3 deletions(-)

Reviewed-by: Sergio Lopez <slp@redhat.com>


* Re: [PATCH v8 13/19] hvf: Add Apple Silicon support
  2021-05-19 20:22 ` [PATCH v8 13/19] hvf: Add Apple Silicon support Alexander Graf
@ 2021-05-27 10:55   ` Sergio Lopez
  2021-06-15 10:21   ` Peter Maydell
  1 sibling, 0 replies; 66+ messages in thread
From: Sergio Lopez @ 2021-05-27 10:55 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Peter Maydell, Eduardo Habkost, Peter Collingbourne,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

On Wed, May 19, 2021 at 10:22:47PM +0200, Alexander Graf wrote:
> With Apple Silicon available to the masses, it's a good time to add support
> for driving its virtualization extensions from QEMU.
> 
> This patch adds all necessary architecture specific code to get basic VMs
> working. It's still pretty raw, but definitely functional.
> 
> Known limitations:
> 
>   - Vtimer acknowledgement is hacky
>   - Should implement more sysregs and fault on invalid ones then
>   - WFI handling is missing, need to marry it with vtimer
> 
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
> 
> ---
> 
> v1 -> v2:
> 
>   - Merge vcpu kick function patch
>   - Implement WFI handling (allows vCPUs to sleep)
>   - Synchronize system registers (fixes OVMF crashes and reboot)
>   - Don't always call cpu_synchronize_state()
>   - Use more fine grained iothread locking
>   - Populate aa64mmfr0 from hardware
> 
> v2 -> v3:
> 
>   - Advance PC on SMC
>   - Use cp list interface for sysreg syncs
>   - Do not set current_cpu
>   - Fix sysreg isread mask
>   - Move sysreg handling to functions
>   - Remove WFI logic again
>   - Revert to global iothread locking
>   - Use Hypervisor.h on arm, hv.h does not contain aarch64 definitions
> 
> v3 -> v4:
> 
>   - No longer include Hypervisor.h
> 
> v5 -> v6:
> 
>   - Swap sysreg definition order. This way we're in line with asm outputs.
> 
> v6 -> v7:
> 
>   - Remove osdep.h include from hvf_int.h
>   - Synchronize SIMD registers as well
>   - Prepend 0x for hex values
>   - Convert DPRINTF to trace points
>   - Use main event loop (fixes gdbstub issues)
>   - Remove PSCI support, inject UDEF on HVC/SMC
>   - Change vtimer logic to look at ctl.istatus for vtimer mask sync
>   - Add kick callback again (fixes remote CPU notification)
> 
> v7 -> v8:
> 
>   - Fix checkpatch errors
> ---
>  MAINTAINERS                 |   5 +
>  accel/hvf/hvf-accel-ops.c   |  14 +
>  include/sysemu/hvf_int.h    |   9 +-
>  meson.build                 |   1 +
>  target/arm/hvf/hvf.c        | 703 ++++++++++++++++++++++++++++++++++++
>  target/arm/hvf/trace-events |  10 +
>  6 files changed, 741 insertions(+), 1 deletion(-)
>  create mode 100644 target/arm/hvf/hvf.c
>  create mode 100644 target/arm/hvf/trace-events

Reviewed-by: Sergio Lopez <slp@redhat.com>
Tested-by: Sergio Lopez <slp@redhat.com>


* Re: [PATCH v8 15/19] hvf: arm: Implement -cpu host
  2021-05-19 20:22 ` [PATCH v8 15/19] hvf: arm: Implement -cpu host Alexander Graf
@ 2021-05-27 11:18   ` Sergio Lopez
  2021-06-15 10:56   ` Peter Maydell
  1 sibling, 0 replies; 66+ messages in thread
From: Sergio Lopez @ 2021-05-27 11:18 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Peter Maydell, Eduardo Habkost, Peter Collingbourne,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

On Wed, May 19, 2021 at 10:22:49PM +0200, Alexander Graf wrote:
> Now that we have working system register sync, we push more target CPU
> properties into the virtual machine. That might be useful in some
> situations, but is not the typical case that users want.
> 
> So let's add a -cpu host option that allows them to explicitly pass all
> CPU capabilities of their host CPU into the guest.
> 
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
> 
> ---
> 
> v6 -> v7:
> 
>   - Move function define to own header
>   - Do not propagate SVE features for HVF
>   - Remove stray whitespace change
>   - Verify that EL0 and EL1 do not allow AArch32 mode
>   - Only probe host CPU features once
> ---
>  target/arm/cpu.c     |  9 ++++--
>  target/arm/cpu.h     |  2 ++
>  target/arm/hvf/hvf.c | 72 ++++++++++++++++++++++++++++++++++++++++++++
>  target/arm/hvf_arm.h | 19 ++++++++++++
>  target/arm/kvm_arm.h |  2 --
>  5 files changed, 100 insertions(+), 4 deletions(-)
>  create mode 100644 target/arm/hvf_arm.h

Reviewed-by: Sergio Lopez <slp@redhat.com>
Tested-by: Sergio Lopez <slp@redhat.com>


* Re: [PATCH v8 16/19] hvf: arm: Implement PSCI handling
  2021-05-19 20:22 ` [PATCH v8 16/19] hvf: arm: Implement PSCI handling Alexander Graf
@ 2021-05-27 11:20   ` Sergio Lopez
  2021-06-15 12:54   ` Peter Maydell
  1 sibling, 0 replies; 66+ messages in thread
From: Sergio Lopez @ 2021-05-27 11:20 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Peter Maydell, Eduardo Habkost, Peter Collingbourne,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

On Wed, May 19, 2021 at 10:22:50PM +0200, Alexander Graf wrote:
> We need to handle PSCI calls. Most of the TCG code works for us,
> but we can simplify it to only handle aa64 mode and we need to
> handle SUSPEND differently.
> 
> This patch takes the TCG code as template and duplicates it in HVF.
> 
> To tell the guest that we support PSCI 0.2 now, update the check in
> arm_cpu_initfn() as well.
> 
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> 
> ---
> 
> v6 -> v7:
> 
>   - This patch integrates "arm: Set PSCI to 0.2 for HVF"
> 
> v7 -> v8:
> 
>   - Do not advance for HVC, PC is already updated by hvf
>   - Fix checkpatch error
> ---
>  target/arm/cpu.c            |   4 +-
>  target/arm/hvf/hvf.c        | 123 ++++++++++++++++++++++++++++++++++--
>  target/arm/hvf/trace-events |   1 +
>  3 files changed, 122 insertions(+), 6 deletions(-)

Reviewed-by: Sergio Lopez <slp@redhat.com>


* Re: [PATCH v8 17/19] arm: Add Hypervisor.framework build target
  2021-05-19 20:22 ` [PATCH v8 17/19] arm: Add Hypervisor.framework build target Alexander Graf
@ 2021-05-27 11:20   ` Sergio Lopez
  2021-06-15 10:59   ` Peter Maydell
  1 sibling, 0 replies; 66+ messages in thread
From: Sergio Lopez @ 2021-05-27 11:20 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Peter Maydell, Eduardo Habkost, Peter Collingbourne,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

On Wed, May 19, 2021 at 10:22:51PM +0200, Alexander Graf wrote:
> Now that all the logic we need to handle Hypervisor.framework on Apple Silicon
> systems is in place, let's add CONFIG_HVF for aarch64 as well so that we can
> build it.
> 
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
> Tested-by: Roman Bolshakov <r.bolshakov@yadro.com> (x86 only)
> 
> ---
> 
> v1 -> v2:
> 
>   - Fix build on 32bit arm
> 
> v3 -> v4:
> 
>   - Remove i386-softmmu target
> 
> v6 -> v7:
> 
>   - Simplify HVF matching logic in meson build file
> ---
>  meson.build                | 7 +++++++
>  target/arm/hvf/meson.build | 3 +++
>  target/arm/meson.build     | 2 ++
>  3 files changed, 12 insertions(+)
>  create mode 100644 target/arm/hvf/meson.build

Reviewed-by: Sergio Lopez <slp@redhat.com>


* Re: [PATCH v8 18/19] arm: Enable Windows 10 trusted SMCCC boot call
  2021-05-19 20:22 ` [PATCH v8 18/19] arm: Enable Windows 10 trusted SMCCC boot call Alexander Graf
@ 2021-05-27 11:21   ` Sergio Lopez
  2021-06-15 11:02   ` Peter Maydell
  1 sibling, 0 replies; 66+ messages in thread
From: Sergio Lopez @ 2021-05-27 11:21 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Peter Maydell, Eduardo Habkost, Peter Collingbourne,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

On Wed, May 19, 2021 at 10:22:52PM +0200, Alexander Graf wrote:
> Windows 10 unconditionally makes an SMCCC call via SMC on boot. The call lives
> in the trusted application call number space, but its purpose is unknown.
> 
> In our current SMC implementation, we inject a UDEF for unknown SMC calls,
> including this one. However, Windows breaks on boot when we do this. Instead,
> let's return an error code.
> 
> With this and -M virt,virtualization=on I can successfully boot the current
> Windows 10 Insider Preview in TCG.
> 
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> ---
>  target/arm/kvm-consts.h | 2 ++
>  target/arm/psci.c       | 2 ++
>  2 files changed, 4 insertions(+)

Reviewed-by: Sergio Lopez <slp@redhat.com>


* Re: [PATCH v8 19/19] hvf: arm: Handle Windows 10 SMC call
  2021-05-19 20:22 ` [PATCH v8 19/19] hvf: arm: Handle Windows 10 SMC call Alexander Graf
@ 2021-05-27 11:22   ` Sergio Lopez
  2021-06-15  9:31   ` Peter Maydell
  1 sibling, 0 replies; 66+ messages in thread
From: Sergio Lopez @ 2021-05-27 11:22 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Peter Maydell, Eduardo Habkost, Peter Collingbourne,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Philippe Mathieu-Daudé

On Wed, May 19, 2021 at 10:22:53PM +0200, Alexander Graf wrote:
> Windows 10 unconditionally makes an SMCCC call via SMC on boot. The call lives
> in the trusted application call number space, but its purpose is unknown.
> 
> In our current SMC implementation, we inject a UDEF for unknown SMC calls,
> including this one. However, Windows breaks on boot when we do this. Instead,
> let's return an error code.
> 
> With this patch applied I can successfully boot the current Windows 10
> Insider Preview in HVF.
> 
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> 
> ---
> 
> v7 -> v8:
> 
>   - fix checkpatch
> ---
>  target/arm/hvf/hvf.c | 4 ++++
>  1 file changed, 4 insertions(+)

Reviewed-by: Sergio Lopez <slp@redhat.com>
Tested-by: Sergio Lopez <slp@redhat.com>


* Re: [PATCH v8 00/19] hvf: Implement Apple Silicon Support
  2021-05-19 20:22 [PATCH v8 00/19] hvf: Implement Apple Silicon Support Alexander Graf
                   ` (19 preceding siblings ...)
  2021-05-19 20:45 ` [PATCH v8 00/19] hvf: Implement Apple Silicon Support no-reply
@ 2021-06-03 13:43 ` Peter Maydell
  2021-06-03 13:53   ` Alexander Graf
  2021-06-15 12:54   ` Peter Maydell
  20 siblings, 2 replies; 66+ messages in thread
From: Peter Maydell @ 2021-06-03 13:43 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Peter Collingbourne

On Wed, 19 May 2021 at 21:22, Alexander Graf <agraf@csgraf.de> wrote:
>
> Now that Apple Silicon is widely available, people are obviously excited
> to try and run virtualized workloads on them, such as Linux and Windows.
>
> This patch set implements a fully functional version to get the ball
> going on that. With this applied, I can successfully run both Linux and
> Windows as guests. I am not aware of any limitations specific to
> Hypervisor.framework apart from:
>
>   - Live migration / savevm
>   - gdbstub debugging (SP register)
>   - missing GICv3 support

> Alexander Graf (18):
>   hvf: Move assert_hvf_ok() into common directory
>   hvf: Move vcpu thread functions into common directory
>   hvf: Move cpu functions into common directory
>   hvf: Move hvf internal definitions into common header
>   hvf: Make hvf_set_phys_mem() static
>   hvf: Remove use of hv_uvaddr_t and hv_gpaddr_t
>   hvf: Split out common code on vcpu init and destroy
>   hvf: Use cpu_synchronize_state()
>   hvf: Make synchronize functions static
>   hvf: Remove hvf-accel-ops.h
>   hvf: Introduce hvf vcpu struct
>   hvf: Simplify post reset/init/loadvm hooks

I haven't had time to review the tail-end of this series yet,
I'm afraid, but these first 12 patches are clearly all OK, so
I'm going to put them into target-arm.next so that at least
that refactoring part is in master and won't go stale.

The last 7 patches are still on my todo list to review.

thanks
-- PMM



* Re: [PATCH v8 00/19] hvf: Implement Apple Silicon Support
  2021-06-03 13:43 ` Peter Maydell
@ 2021-06-03 13:53   ` Alexander Graf
  2021-06-15 12:54   ` Peter Maydell
  1 sibling, 0 replies; 66+ messages in thread
From: Alexander Graf @ 2021-06-03 13:53 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Peter Collingbourne



> On 03.06.2021 at 15:44, Peter Maydell <peter.maydell@linaro.org> wrote:
> 
> On Wed, 19 May 2021 at 21:22, Alexander Graf <agraf@csgraf.de> wrote:
>> 
>> Now that Apple Silicon is widely available, people are obviously excited
>> to try and run virtualized workloads on them, such as Linux and Windows.
>> 
>> This patch set implements a fully functional version to get the ball
>> going on that. With this applied, I can successfully run both Linux and
>> Windows as guests. I am not aware of any limitations specific to
>> Hypervisor.framework apart from:
>> 
>>  - Live migration / savevm
>>  - gdbstub debugging (SP register)
>>  - missing GICv3 support
> 
>> Alexander Graf (18):
>>  hvf: Move assert_hvf_ok() into common directory
>>  hvf: Move vcpu thread functions into common directory
>>  hvf: Move cpu functions into common directory
>>  hvf: Move hvf internal definitions into common header
>>  hvf: Make hvf_set_phys_mem() static
>>  hvf: Remove use of hv_uvaddr_t and hv_gpaddr_t
>>  hvf: Split out common code on vcpu init and destroy
>>  hvf: Use cpu_synchronize_state()
>>  hvf: Make synchronize functions static
>>  hvf: Remove hvf-accel-ops.h
>>  hvf: Introduce hvf vcpu struct
>>  hvf: Simplify post reset/init/loadvm hooks
> 
> I haven't had time to review the tail-end of this series yet,
> I'm afraid, but these first 12 patches are clearly all OK, so
> I'm going to put them into target-arm.next so that at least
> that refactoring part is in master and won't go stale.
> 
> The last 7 patches are still on my todo list to review.

Thank you ;)

Alex

> 
> thanks
> -- PMM



* Re: [PATCH v8 19/19] hvf: arm: Handle Windows 10 SMC call
  2021-05-19 20:22 ` [PATCH v8 19/19] hvf: arm: Handle Windows 10 SMC call Alexander Graf
  2021-05-27 11:22   ` Sergio Lopez
@ 2021-06-15  9:31   ` Peter Maydell
  2021-06-27 21:07     ` Alexander Graf
  1 sibling, 1 reply; 66+ messages in thread
From: Peter Maydell @ 2021-06-15  9:31 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Peter Collingbourne

On Wed, 19 May 2021 at 21:23, Alexander Graf <agraf@csgraf.de> wrote:
>
> Windows 10 unconditionally makes an SMCCC call via SMC on boot. The call lives
> in the trusted application call number space, but its purpose is unknown.
>
> In our current SMC implementation, we inject a UDEF for unknown SMC calls,
> including this one. However, Windows breaks on boot when we do this. Instead,
> let's return an error code.
>
> With this patch applied I can successfully boot the current Windows 10
> Insider Preview in HVF.
>
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
>
> ---
>
> v7 -> v8:
>
>   - fix checkpatch
> ---
>  target/arm/hvf/hvf.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
> index 65c33e2a14..be670af578 100644
> --- a/target/arm/hvf/hvf.c
> +++ b/target/arm/hvf/hvf.c
> @@ -931,6 +931,10 @@ int hvf_vcpu_exec(CPUState *cpu)
>          cpu_synchronize_state(cpu);
>          if (!hvf_handle_psci_call(cpu)) {
>              advance_pc = true;
> +        } else if (env->xregs[0] == QEMU_SMCCC_TC_WINDOWS10_BOOT) {
> +            /* This special SMC is called by Windows 10 on boot. Return error */
> +            env->xregs[0] = -1;
> +            advance_pc = true;
>          } else {
>              trace_hvf_unknown_smc(env->xregs[0]);
>              hvf_raise_exception(env, EXCP_UDEF, syn_uncategorized());

Where can I find documentation on what this SMC call is and what
it's supposed to do ?

thanks
-- PMM



* Re: [PATCH v8 13/19] hvf: Add Apple Silicon support
  2021-05-19 20:22 ` [PATCH v8 13/19] hvf: Add Apple Silicon support Alexander Graf
  2021-05-27 10:55   ` Sergio Lopez
@ 2021-06-15 10:21   ` Peter Maydell
  2021-06-27 20:40     ` Alexander Graf
  1 sibling, 1 reply; 66+ messages in thread
From: Peter Maydell @ 2021-06-15 10:21 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Peter Collingbourne

On Wed, 19 May 2021 at 21:23, Alexander Graf <agraf@csgraf.de> wrote:
>
> With Apple Silicon available to the masses, it's a good time to add support
> for driving its virtualization extensions from QEMU.
>
> This patch adds all necessary architecture specific code to get basic VMs
> working. It's still pretty raw, but definitely functional.
>
> Known limitations:
>
>   - Vtimer acknowledgement is hacky
>   - Should implement more sysregs and fault on invalid ones then
>   - WFI handling is missing, need to marry it with vtimer
>
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
> @@ -446,11 +454,17 @@ static void hvf_start_vcpu_thread(CPUState *cpu)
>                         cpu, QEMU_THREAD_JOINABLE);
>  }
>
> +__attribute__((weak)) void hvf_kick_vcpu_thread(CPUState *cpu)
> +{
> +    cpus_kick_thread(cpu);
> +}

Why is this marked 'weak' ? If there's a reason for it then
it ought to have a comment describing the reason. If we can avoid
it then we should do so -- past experience is that 'weak' refs
are rather non-portable, though at least this one is in a
host-OS-specific file.
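
For reference, the pattern in question looks roughly like this; the weak
fallback is from the patch above, while the ARM-side override is a sketch of
what the arch file provides, using the hv_vcpus_exit() vCPU kick primitive:

    /* accel/hvf/hvf-accel-ops.c: weak fallback, used unless an arch
     * file provides its own definition. */
    __attribute__((weak)) void hvf_kick_vcpu_thread(CPUState *cpu)
    {
        cpus_kick_thread(cpu);
    }

    /* target/arm/hvf/hvf.c: the strong definition wins at link time and
     * additionally forces the vCPU out of hv_vcpu_run(). */
    void hvf_kick_vcpu_thread(CPUState *cpu)
    {
        cpus_kick_thread(cpu);
        hv_vcpus_exit(&cpu->hvf->fd, 1);
    }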

> +static void hvf_raise_exception(CPUARMState *env, uint32_t excp,
> +                                uint32_t syndrome)
> +{
> +    unsigned int new_el = 1;
> +    unsigned int old_mode = pstate_read(env);
> +    unsigned int new_mode = aarch64_pstate_mode(new_el, true);
> +    target_ulong addr = env->cp15.vbar_el[new_el];
> +
> +    env->cp15.esr_el[new_el] = syndrome;
> +    aarch64_save_sp(env, arm_current_el(env));
> +    env->elr_el[new_el] = env->pc;
> +    env->banked_spsr[aarch64_banked_spsr_index(new_el)] = old_mode;
> +    pstate_write(env, PSTATE_DAIF | new_mode);
> +    aarch64_restore_sp(env, new_el);
> +    env->pc = addr;
> +}

KVM does "raise an exception" by calling arm_cpu_do_interrupt()
to do the "set ESR_ELx, save SPSR, etc etc" work (see eg
kvm_arm_handle_debug()". Does that not work here ?
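
For comparison, the KVM-style path fills in the exception description and
lets the common ARM code perform the entry; a minimal sketch modelled on
kvm_arm_handle_debug() (exact field usage assumed from target/arm):

    static void hvf_inject_undef(CPUState *cs)
    {
        CPUARMState *env = &ARM_CPU(cs)->env;

        env->exception.syndrome = syn_uncategorized();
        env->exception.target_el = 1;
        cs->exception_index = EXCP_UDEF;
        /* Common code updates ESR_EL1, SPSR_EL1, ELR_EL1, PSTATE and PC. */
        arm_cpu_do_interrupt(cs);
    }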

> +
> +static uint64_t hvf_sysreg_read(CPUState *cpu, uint32_t reg)
> +{
> +    ARMCPU *arm_cpu = ARM_CPU(cpu);
> +    uint64_t val = 0;
> +
> +    switch (reg) {
> +    case SYSREG_CNTPCT_EL0:
> +        val = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) /
> +              gt_cntfrq_period_ns(arm_cpu);
> +        break;

Does hvf handle the "EL0 access which should be denied because
CNTKCTL_EL1.EL0PCTEN is 0" case for us, or should we have
an access-check here ?
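
If hvf does not perform that check for us, a guard along these lines would be
needed before emulating the read (a sketch; EL0PCTEN is bit 0 of CNTKCTL_EL1,
and the precise trap syndrome is simplified to syn_uncategorized() here):

    if (arm_current_el(env) == 0 && !(env->cp15.c14_cntkctl & 1)) {
        /* EL0 access to the physical counter is disabled: trap to EL1. */
        hvf_raise_exception(env, EXCP_UDEF, syn_uncategorized());
        return 0;
    }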

> +    case SYSREG_PMCCNTR_EL0:
> +        val = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);

This is supposed to be a cycle counter, not a timestamp...

> +        break;
> +    default:
> +        trace_hvf_unhandled_sysreg_read(reg,
> +                                        (reg >> 20) & 0x3,
> +                                        (reg >> 14) & 0x7,
> +                                        (reg >> 10) & 0xf,
> +                                        (reg >> 1) & 0xf,
> +                                        (reg >> 17) & 0x7);
> +        break;
> +    }
> +
> +    return val;
> +}
> +
> +static void hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
> +{
> +    switch (reg) {
> +    case SYSREG_CNTPCT_EL0:
> +        break;

CNTPCT_EL0 is read-only (ie writes should fault) but this
makes it writes-ignored, doesn't it ?

> +    default:
> +        trace_hvf_unhandled_sysreg_write(reg,
> +                                         (reg >> 20) & 0x3,
> +                                         (reg >> 14) & 0x7,
> +                                         (reg >> 10) & 0xf,
> +                                         (reg >> 1) & 0xf,
> +                                         (reg >> 17) & 0x7);
> +        break;
> +    }
> +}

> +    switch (ec) {
> +    case EC_DATAABORT: {
> +        bool isv = syndrome & ARM_EL_ISV;
> +        bool iswrite = (syndrome >> 6) & 1;
> +        bool s1ptw = (syndrome >> 7) & 1;
> +        uint32_t sas = (syndrome >> 22) & 3;
> +        uint32_t len = 1 << sas;
> +        uint32_t srt = (syndrome >> 16) & 0x1f;
> +        uint64_t val = 0;
> +
> +        trace_hvf_data_abort(env->pc, hvf_exit->exception.virtual_address,
> +                             hvf_exit->exception.physical_address, isv,
> +                             iswrite, s1ptw, len, srt);
> +
> +        assert(isv);

This seems dubious -- won't we just crash if the guest does
a data access to a device or to unmapped memory with an insn that
doesn't set ISV ? With KVM we feed this back to the guest as an
external data abort (see the KVM_EXIT_ARM_NISV handling).
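
A sketch of that feedback path, in place of the assert (the
syn_data_abort_no_iss() argument list is assumed from target/arm/internals.h;
FSC 0x10 encodes a synchronous external abort):

    if (!isv) {
        /* No valid ISS to emulate from: report an external data abort
         * to the guest rather than aborting QEMU. */
        env->exception.vaddress = hvf_exit->exception.virtual_address;
        hvf_raise_exception(env, EXCP_DATA_ABORT,
                            syn_data_abort_no_iss(arm_current_el(env) != 0,
                                                  0, 0, s1ptw, iswrite, 0x10));
        break;
    }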

> +
> +        if (iswrite) {
> +            val = hvf_get_reg(cpu, srt);
> +            address_space_write(&address_space_memory,
> +                                hvf_exit->exception.physical_address,
> +                                MEMTXATTRS_UNSPECIFIED, &val, len);
> +        } else {
> +            address_space_read(&address_space_memory,
> +                               hvf_exit->exception.physical_address,
> +                               MEMTXATTRS_UNSPECIFIED, &val, len);
> +            hvf_set_reg(cpu, srt, val);
> +        }
> +
> +        advance_pc = true;
> +        break;
> +    }
> +    case EC_SYSTEMREGISTERTRAP: {
> +        bool isread = (syndrome >> 0) & 1;
> +        uint32_t rt = (syndrome >> 5) & 0x1f;
> +        uint32_t reg = syndrome & SYSREG_MASK;
> +        uint64_t val = 0;
> +
> +        if (isread) {
> +            val = hvf_sysreg_read(cpu, reg);
> +            trace_hvf_sysreg_read(reg,
> +                                  (reg >> 20) & 0x3,
> +                                  (reg >> 14) & 0x7,
> +                                  (reg >> 10) & 0xf,
> +                                  (reg >> 1) & 0xf,
> +                                  (reg >> 17) & 0x7,
> +                                  val);
> +            hvf_set_reg(cpu, rt, val);
> +        } else {
> +            val = hvf_get_reg(cpu, rt);
> +            trace_hvf_sysreg_write(reg,
> +                                   (reg >> 20) & 0x3,
> +                                   (reg >> 14) & 0x7,
> +                                   (reg >> 10) & 0xf,
> +                                   (reg >> 1) & 0xf,
> +                                   (reg >> 17) & 0x7,
> +                                   val);
> +            hvf_sysreg_write(cpu, reg, val);
> +        }

This needs support for "this really was a bogus system register
access, feed the guest an exception".
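
One way to provide that, sketched below: have the accessors return a status so
this trap path can reflect bad accesses (including writes to read-only
registers such as CNTPCT_EL0) back to the guest instead of ignoring them. The
int-return convention is hypothetical:

    int ret;

    if (isread) {
        ret = hvf_sysreg_read(cpu, reg, &val);      /* 0 on success */
        if (!ret) {
            hvf_set_reg(cpu, rt, val);
        }
    } else {
        ret = hvf_sysreg_write(cpu, reg, hvf_get_reg(cpu, rt));
    }
    if (ret) {
        /* Unknown register or illegal access: inject an UNDEF. */
        hvf_raise_exception(env, EXCP_UDEF, syn_uncategorized());
    } else {
        advance_pc = true;
    }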

> +
> +        advance_pc = true;
> +        break;
> +    }
> +    case EC_WFX_TRAP:
> +        advance_pc = true;
> +        break;
> +    case EC_AA64_HVC:
> +        cpu_synchronize_state(cpu);
> +        trace_hvf_unknown_hvf(env->xregs[0]);
> +        hvf_raise_exception(env, EXCP_UDEF, syn_uncategorized());
> +        break;
> +    case EC_AA64_SMC:
> +        cpu_synchronize_state(cpu);
> +        trace_hvf_unknown_smc(env->xregs[0]);
> +        hvf_raise_exception(env, EXCP_UDEF, syn_uncategorized());
> +        break;
> +    default:
> +        cpu_synchronize_state(cpu);
> +        trace_hvf_exit(syndrome, ec, env->pc);
> +        error_report("0x%llx: unhandled exit 0x%llx", env->pc, exit_reason);
> +    }
> +
> +    if (advance_pc) {
> +        uint64_t pc;
> +
> +        flush_cpu_state(cpu);
> +
> +        r = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_PC, &pc);
> +        assert_hvf_ok(r);
> +        pc += 4;
> +        r = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_PC, pc);
> +        assert_hvf_ok(r);
> +    }
> +
> +    return 0;
> +}

thanks
-- PMM



* Re: [PATCH v8 14/19] arm/hvf: Add a WFI handler
  2021-05-19 20:22 ` [PATCH v8 14/19] arm/hvf: Add a WFI handler Alexander Graf
  2021-05-27 10:53   ` Sergio Lopez
@ 2021-06-15 10:38   ` Peter Maydell
  1 sibling, 0 replies; 66+ messages in thread
From: Peter Maydell @ 2021-06-15 10:38 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Peter Collingbourne

On Wed, 19 May 2021 at 21:23, Alexander Graf <agraf@csgraf.de> wrote:
>
> From: Peter Collingbourne <pcc@google.com>
>
> Sleep on WFI until the VTIMER is due but allow ourselves to be woken
> up on IPI.
>
> In this implementation IPI is blocked on the CPU thread at startup and
> pselect() is used to atomically unblock the signal and begin sleeping.
> The signal is sent unconditionally so there's no need to worry about
> races between actually sleeping and the "we think we're sleeping"
> state. It may lead to an extra wakeup but that's better than missing
> it entirely.
>
> Signed-off-by: Peter Collingbourne <pcc@google.com>
> [agraf: Remove unused 'set' variable, always advance PC on WFX trap]
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
>
> ---
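
(The pselect() idiom the commit message describes boils down to roughly
the following; SIG_IPI is QEMU's IPI signal, and the variable name is
ours, so treat this as a sketch of the technique rather than the patch
code:)

    /* Inside a hvf_wait_for_ipi(CPUState *cpu, struct timespec *ts)-style
     * helper: SIG_IPI stays blocked everywhere else; pselect() atomically
     * installs a mask with it unblocked for the duration of the sleep. */
    sigset_t ipimask;
    sigprocmask(SIG_BLOCK, NULL, &ipimask);      /* fetch the current mask... */
    sigdelset(&ipimask, SIG_IPI);                /* ...and allow SIG_IPI in it */
    pselect(0, NULL, NULL, NULL, ts, &ipimask);  /* ts == NULL: wait forever */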

> +static void hvf_wfi(CPUState *cpu)
> +{
> +    ARMCPU *arm_cpu = ARM_CPU(cpu);
> +    hv_return_t r;
> +    uint64_t ctl;
> +
> +    if (cpu->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_FIQ)) {
> +        /* Interrupt pending, no need to wait */
> +        return;
> +    }
> +
> +    r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CTL_EL0,
> +                            &ctl);
> +    assert_hvf_ok(r);
> +
> +    if (!(ctl & 1) || (ctl & 2)) {
> +        /* Timer disabled or masked, just wait for an IPI. */
> +        hvf_wait_for_ipi(cpu, NULL);
> +        return;
> +    }
> +
> +    uint64_t cval;
> +    r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CVAL_EL0,
> +                            &cval);
> +    assert_hvf_ok(r);
> +
> +    int64_t ticks_to_sleep = cval - mach_absolute_time();

This looks odd. The CNTV_CVAL is the compare value against
the CNTVCT (virtual count), which should start at 0 when the
VM starts, pause when the VM is paused, and so on. But here
we are comparing it against what looks like a host absolute
timecount...

> +    if (ticks_to_sleep < 0) {
> +        return;
> +    }
> +
> +    uint64_t seconds = ticks_to_sleep / arm_cpu->gt_cntfrq_hz;
> +    uint64_t nanos =
> +        (ticks_to_sleep - arm_cpu->gt_cntfrq_hz * seconds) *
> +        1000000000 / arm_cpu->gt_cntfrq_hz;

Should this be calling gt_cntfrq_period_ns() ?
(If not, please use the NANOSECONDS_PER_SECOND constant instead of
a raw 1000000000.)

> +
> +    /*
> +     * Don't sleep for less than the time a context switch would take,
> +     * so that we can satisfy fast timer requests on the same CPU.
> +     * Measurements on M1 show the sweet spot to be ~2ms.
> +     */
> +    if (!seconds && nanos < 2000000) {

"2 * SCALE_MS" is a bit easier to read I think.

> +        return;
> +    }
> +
> +    struct timespec ts = { seconds, nanos };
> +    hvf_wait_for_ipi(cpu, &ts);
> +}

thanks
-- PMM



* Re: [PATCH v8 15/19] hvf: arm: Implement -cpu host
  2021-05-19 20:22 ` [PATCH v8 15/19] hvf: arm: Implement -cpu host Alexander Graf
  2021-05-27 11:18   ` Sergio Lopez
@ 2021-06-15 10:56   ` Peter Maydell
  2021-09-12 20:23     ` Alexander Graf
  1 sibling, 1 reply; 66+ messages in thread
From: Peter Maydell @ 2021-06-15 10:56 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Peter Collingbourne

On Wed, 19 May 2021 at 21:23, Alexander Graf <agraf@csgraf.de> wrote:
>
> Now that we have working system register sync, we push more target CPU
> properties into the virtual machine. That might be useful in some
> situations, but is not the typical case that users want.
>
> So let's add a -cpu host option that allows them to explicitly pass all
> CPU capabilities of their host CPU into the guest.
>
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
>
> ---
>
> v6 -> v7:
>
>   - Move function define to own header
>   - Do not propagate SVE features for HVF
>   - Remove stray whitespace change
>   - Verify that EL0 and EL1 do not allow AArch32 mode
>   - Only probe host CPU features once

> +static void hvf_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
> +{
> +    ARMISARegisters host_isar;

Can you zero-initialize this (with "= { }"), please? That way we
know we have zeroes in the aarch32 ID fields rather than random junk later...

> +    const struct isar_regs {
> +        int reg;
> +        uint64_t *val;
> +    } regs[] = {
> +        { HV_SYS_REG_ID_AA64PFR0_EL1, &host_isar.id_aa64pfr0 },
> +        { HV_SYS_REG_ID_AA64PFR1_EL1, &host_isar.id_aa64pfr1 },
> +        { HV_SYS_REG_ID_AA64DFR0_EL1, &host_isar.id_aa64dfr0 },
> +        { HV_SYS_REG_ID_AA64DFR1_EL1, &host_isar.id_aa64dfr1 },
> +        { HV_SYS_REG_ID_AA64ISAR0_EL1, &host_isar.id_aa64isar0 },
> +        { HV_SYS_REG_ID_AA64ISAR1_EL1, &host_isar.id_aa64isar1 },
> +        { HV_SYS_REG_ID_AA64MMFR0_EL1, &host_isar.id_aa64mmfr0 },
> +        { HV_SYS_REG_ID_AA64MMFR1_EL1, &host_isar.id_aa64mmfr1 },
> +        { HV_SYS_REG_ID_AA64MMFR2_EL1, &host_isar.id_aa64mmfr2 },
> +    };
> +    hv_vcpu_t fd;
> +    hv_vcpu_exit_t *exit;
> +    int i;
> +
> +    ahcf->dtb_compatible = "arm,arm-v8";
> +    ahcf->features = (1ULL << ARM_FEATURE_V8) |
> +                     (1ULL << ARM_FEATURE_NEON) |
> +                     (1ULL << ARM_FEATURE_AARCH64) |
> +                     (1ULL << ARM_FEATURE_PMU) |
> +                     (1ULL << ARM_FEATURE_GENERIC_TIMER);
> +
> +    /* We set up a small vcpu to extract host registers */
> +
> +    assert_hvf_ok(hv_vcpu_create(&fd, &exit, NULL));
> +    for (i = 0; i < ARRAY_SIZE(regs); i++) {
> +        assert_hvf_ok(hv_vcpu_get_sys_reg(fd, regs[i].reg, regs[i].val));
> +    }
> +    assert_hvf_ok(hv_vcpu_get_sys_reg(fd, HV_SYS_REG_MIDR_EL1, &ahcf->midr));
> +    assert_hvf_ok(hv_vcpu_destroy(fd));
> +
> +    ahcf->isar = host_isar;
> +    ahcf->reset_sctlr = 0x00c50078;

Why this value in particular? Could we just ask the scratch HVF CPU
for the value of SCTLR_EL1 rather than hardcoding something?

> +
> +    /* Make sure we don't advertise AArch32 support for EL0/EL1 */
> +    g_assert((host_isar.id_aa64pfr0 & 0xff) == 0x11);

This shouldn't really be an assert, I think. error_report() something
and return false, and then arm_cpu_realizefn() will fail, which should
cause us to exit.
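
(A sketch of that shape; error_report() comes from qemu/error-report.h,
and the bool return on the probe function is the change being suggested,
not the code in this version:)

    /* Make sure we don't advertise AArch32 support for EL0/EL1 */
    if ((host_isar.id_aa64pfr0 & 0xff) != 0x11) {
        error_report("Host CPU advertises AArch32 support, which "
                     "hvf does not handle");
        return false;
    }

    return true;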

> +}
> +
> +void hvf_arm_set_cpu_features_from_host(ARMCPU *cpu)
> +{
> +    if (!arm_host_cpu_features.dtb_compatible) {
> +        if (!hvf_enabled()) {
> +            cpu->host_cpu_probe_failed = true;
> +            return;
> +        }
> +        hvf_arm_get_host_cpu_features(&arm_host_cpu_features);
> +    }
> +
> +    cpu->dtb_compatible = arm_host_cpu_features.dtb_compatible;
> +    cpu->isar = arm_host_cpu_features.isar;
> +    cpu->env.features = arm_host_cpu_features.features;
> +    cpu->midr = arm_host_cpu_features.midr;
> +    cpu->reset_sctlr = arm_host_cpu_features.reset_sctlr;
> +}
> +
>  void hvf_arch_vcpu_destroy(CPUState *cpu)
>  {
>  }

thanks
-- PMM



* Re: [PATCH v8 17/19] arm: Add Hypervisor.framework build target
  2021-05-19 20:22 ` [PATCH v8 17/19] arm: Add Hypervisor.framework build target Alexander Graf
  2021-05-27 11:20   ` Sergio Lopez
@ 2021-06-15 10:59   ` Peter Maydell
  1 sibling, 0 replies; 66+ messages in thread
From: Peter Maydell @ 2021-06-15 10:59 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Peter Collingbourne

On Wed, 19 May 2021 at 21:23, Alexander Graf <agraf@csgraf.de> wrote:
>
> Now that we have all logic in place that we need to handle Hypervisor.framework
> on Apple Silicon systems, let's add CONFIG_HVF for aarch64 as well so that we
> can build it.
>
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
> Tested-by: Roman Bolshakov <r.bolshakov@yadro.com> (x86 only)



Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM



* Re: [PATCH v8 18/19] arm: Enable Windows 10 trusted SMCCC boot call
  2021-05-19 20:22 ` [PATCH v8 18/19] arm: Enable Windows 10 trusted SMCCC boot call Alexander Graf
  2021-05-27 11:21   ` Sergio Lopez
@ 2021-06-15 11:02   ` Peter Maydell
  2021-06-27 21:12     ` Alexander Graf
  1 sibling, 1 reply; 66+ messages in thread
From: Peter Maydell @ 2021-06-15 11:02 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Peter Collingbourne

On Wed, 19 May 2021 at 21:23, Alexander Graf <agraf@csgraf.de> wrote:
>
> Windows 10 calls an SMCCC call via SMC unconditionally on boot. It lives
> in the trusted application call number space, but its purpose is unknown.
>
> In our current SMC implementation, we inject a UDEF for unknown SMC calls,
> including this one. However, Windows breaks on boot when we do this. Instead,
> let's return an error code.
>
> With this and -M virt,virtualization=on I can successfully boot the current
> Windows 10 Insider Preview in TCG.


Same comments apply here and for patch 19.

Either we can:
 * find a spec for whatever this SMC ABI is and implement it
   consistently across TCG, KVM and HVF
 * find whether we're misimplementing whatever the SMCCC spec says
   should happen for unknown SMC calls, and fix that bug

But we're not adding random hacky workarounds for specific guest OSes.

-- PMM



* Re: [PATCH v8 16/19] hvf: arm: Implement PSCI handling
  2021-05-19 20:22 ` [PATCH v8 16/19] hvf: arm: Implement PSCI handling Alexander Graf
  2021-05-27 11:20   ` Sergio Lopez
@ 2021-06-15 12:54   ` Peter Maydell
  2021-09-12 20:36     ` Alexander Graf
  1 sibling, 1 reply; 66+ messages in thread
From: Peter Maydell @ 2021-06-15 12:54 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Peter Collingbourne

On Wed, 19 May 2021 at 21:23, Alexander Graf <agraf@csgraf.de> wrote:
>
> We need to handle PSCI calls. Most of the TCG code works for us,
> but we can simplify it to only handle aa64 mode and we need to
> handle SUSPEND differently.
>
> This patch takes the TCG code as template and duplicates it in HVF.
>
> To tell the guest that we support PSCI 0.2 now, update the check in
> arm_cpu_initfn() as well.
>
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
>
> ---
>
> v6 -> v7:
>
>   - This patch integrates "arm: Set PSCI to 0.2 for HVF"
>
> v7 -> v8:
>
>   - Do not advance for HVC, PC is already updated by hvf
>   - Fix checkpatch error


> +static int hvf_psci_cpu_off(ARMCPU *arm_cpu)
> +{
> +    int32_t ret = 0;
> +    ret = arm_set_cpu_off(arm_cpu->mp_affinity);
> +    assert(ret == QEMU_ARM_POWERCTL_RET_SUCCESS);
> +
> +    return 0;

If you're always returning 0 you might as well just make
it return void.

> +}
> +
> +static int hvf_handle_psci_call(CPUState *cpu)

I think the callsites would be clearer if you made the function
return true for "PSCI call handled", false for "not recognised,
give the guest an UNDEF". Code like
         if (hvf_handle_psci_call(cpu)) {
             stuff;
         }

looks like the 'stuff' is for the "psci call handled" case,
which at the moment it isn't.

Either way, a comment for this function describing what its
return value semantics are would be useful.
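
(With a bool "handled" return, the call site would then read naturally;
a sketch using names already in this series:)

    if (hvf_handle_psci_call(cpu)) {
        /* PSCI call recognised and handled */
        advance_pc = true;
    } else {
        trace_hvf_unknown_smc(env->xregs[0]);
        hvf_raise_exception(env, EXCP_UDEF, syn_uncategorized());
    }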

> +    ARMCPU *arm_cpu = ARM_CPU(cpu);
> +    CPUARMState *env = &arm_cpu->env;
> +    uint64_t param[4] = {
> +        env->xregs[0],
> +        env->xregs[1],
> +        env->xregs[2],
> +        env->xregs[3]
> +    };
> +    uint64_t context_id, mpidr;
> +    bool target_aarch64 = true;
> +    CPUState *target_cpu_state;
> +    ARMCPU *target_cpu;
> +    target_ulong entry;
> +    int target_el = 1;
> +    int32_t ret = 0;
> +
> +    trace_hvf_psci_call(param[0], param[1], param[2], param[3],
> +                        arm_cpu->mp_affinity);
> +
> +    switch (param[0]) {
> +    case QEMU_PSCI_0_2_FN_PSCI_VERSION:
> +        ret = QEMU_PSCI_0_2_RET_VERSION_0_2;
> +        break;
> +    case QEMU_PSCI_0_2_FN_MIGRATE_INFO_TYPE:
> +        ret = QEMU_PSCI_0_2_RET_TOS_MIGRATION_NOT_REQUIRED; /* No trusted OS */
> +        break;
> +    case QEMU_PSCI_0_2_FN_AFFINITY_INFO:
> +    case QEMU_PSCI_0_2_FN64_AFFINITY_INFO:
> +        mpidr = param[1];
> +
> +        switch (param[2]) {
> +        case 0:
> +            target_cpu_state = arm_get_cpu_by_id(mpidr);
> +            if (!target_cpu_state) {
> +                ret = QEMU_PSCI_RET_INVALID_PARAMS;
> +                break;
> +            }
> +            target_cpu = ARM_CPU(target_cpu_state);
> +
> +            ret = target_cpu->power_state;
> +            break;
> +        default:
> +            /* Everything above affinity level 0 is always on. */
> +            ret = 0;
> +        }
> +        break;
> +    case QEMU_PSCI_0_2_FN_SYSTEM_RESET:
> +        qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET);
> +        /* QEMU reset and shutdown are async requests, but PSCI
> +         * mandates that we never return from the reset/shutdown
> +         * call, so power the CPU off now so it doesn't execute
> +         * anything further.
> +         */
> +        return hvf_psci_cpu_off(arm_cpu);
> +    case QEMU_PSCI_0_2_FN_SYSTEM_OFF:
> +        qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
> +        return hvf_psci_cpu_off(arm_cpu);
> +    case QEMU_PSCI_0_1_FN_CPU_ON:
> +    case QEMU_PSCI_0_2_FN_CPU_ON:
> +    case QEMU_PSCI_0_2_FN64_CPU_ON:
> +        mpidr = param[1];
> +        entry = param[2];
> +        context_id = param[3];
> +        ret = arm_set_cpu_on(mpidr, entry, context_id,
> +                             target_el, target_aarch64);
> +        break;
> +    case QEMU_PSCI_0_1_FN_CPU_OFF:
> +    case QEMU_PSCI_0_2_FN_CPU_OFF:
> +        return hvf_psci_cpu_off(arm_cpu);
> +    case QEMU_PSCI_0_1_FN_CPU_SUSPEND:
> +    case QEMU_PSCI_0_2_FN_CPU_SUSPEND:
> +    case QEMU_PSCI_0_2_FN64_CPU_SUSPEND:
> +        /* Affinity levels are not supported in QEMU */
> +        if (param[1] & 0xfffe0000) {
> +            ret = QEMU_PSCI_RET_INVALID_PARAMS;
> +            break;
> +        }
> +        /* Powerdown is not supported, we always go into WFI */
> +        env->xregs[0] = 0;
> +        hvf_wfi(cpu);
> +        break;
> +    case QEMU_PSCI_0_1_FN_MIGRATE:
> +    case QEMU_PSCI_0_2_FN_MIGRATE:
> +        ret = QEMU_PSCI_RET_NOT_SUPPORTED;
> +        break;
> +    default:
> +        return 1;
> +    }
> +
> +    env->xregs[0] = ret;
> +    return 0;
> +}
> +
>  static uint64_t hvf_sysreg_read(CPUState *cpu, uint32_t reg)
>  {
>      ARMCPU *arm_cpu = ARM_CPU(cpu);
> @@ -716,6 +823,8 @@ int hvf_vcpu_exec(CPUState *cpu)
>      }
>
>      if (cpu->halted) {
> +        /* On unhalt, we usually have CPU state changes. Prepare for them. */
> +        cpu_synchronize_state(cpu);
>          return EXCP_HLT;
>      }

This seems odd. If I understand the control flow correctly, this
is neither:
 (a) just before we're about to emulate a PSCI call so we need
the current values of the registers from hvf
 (b) when we've just changed the CPU registers because we made
a PSCI call and we want to push them back to hvf
 (c) when we're about to resume the CPU because it's gone from
halted to not-halted, which might also be a good time to push
register state back to hvf

So what's it for ?

My expectation would be that we would ensure the state is in sync
(ie that hvf has any local changes) before we call hv_cpu_run(),
and that we would go the other way before we access any local
register CPU struct state.
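
(That is, the run loop would be bracketed roughly like this;
hv_vcpu_run() is the Hypervisor.framework entry point, the rest are
names from this series:)

    flush_cpu_state(cpu);               /* push any dirty QEMU state to hvf */
    r = hv_vcpu_run(cpu->hvf->fd);      /* enter the guest */
    assert_hvf_ok(r);
    /* ...and cpu_synchronize_state(cpu) before reading env->xregs etc. */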

thanks
-- PMM



* Re: [PATCH v8 00/19] hvf: Implement Apple Silicon Support
  2021-06-03 13:43 ` Peter Maydell
  2021-06-03 13:53   ` Alexander Graf
@ 2021-06-15 12:54   ` Peter Maydell
  1 sibling, 0 replies; 66+ messages in thread
From: Peter Maydell @ 2021-06-15 12:54 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Peter Collingbourne

On Thu, 3 Jun 2021 at 14:43, Peter Maydell <peter.maydell@linaro.org> wrote:
> I haven't had time to review the tail-end of this series yet,
> I'm afraid, but these first 12 patches are clearly all OK, so
> I'm going to put them into target-arm.next so that at least
> that refactoring part is in master and won't go stale.
>
> The last 7 patches are still on my todo list to review.

...and now I've gone through those last 7. Sorry for the delay.

-- PMM



* Re: [PATCH v8 13/19] hvf: Add Apple Silicon support
  2021-06-15 10:21   ` Peter Maydell
@ 2021-06-27 20:40     ` Alexander Graf
  0 siblings, 0 replies; 66+ messages in thread
From: Alexander Graf @ 2021-06-27 20:40 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Peter Collingbourne

Hi Peter,

On 15.06.21 12:21, Peter Maydell wrote:
> On Wed, 19 May 2021 at 21:23, Alexander Graf <agraf@csgraf.de> wrote:
>> With Apple Silicon available to the masses, it's a good time to add support
>> for driving its virtualization extensions from QEMU.
>>
>> This patch adds all necessary architecture specific code to get basic VMs
>> working. It's still pretty raw, but definitely functional.
>>
>> Known limitations:
>>
>>   - Vtimer acknowledgement is hacky
>>   - Should implement more sysregs and fault on invalid ones then
>>   - WFI handling is missing, need to marry it with vtimer
>>
>> Signed-off-by: Alexander Graf <agraf@csgraf.de>
>> Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
>> @@ -446,11 +454,17 @@ static void hvf_start_vcpu_thread(CPUState *cpu)
>>                         cpu, QEMU_THREAD_JOINABLE);
>>  }
>>
>> +__attribute__((weak)) void hvf_kick_vcpu_thread(CPUState *cpu)
>> +{
>> +    cpus_kick_thread(cpu);
>> +}
> Why is this marked 'weak' ? If there's a reason for it then
> it ought to have a comment describing the reason. If we can avoid
> it then we should do so -- past experience is that 'weak' refs
> are rather non-portable, though at least this one is in a
> host-OS-specific file.


Mostly because I wanted to keep the kick function in the generic file
for the generic case. ARM is special in that it requires different kick
mechanisms depending on which context we're in (in-vcpu or in-QEMU).

However, I agree that with 2 architectures, there's not really a
"default". I'm happy to move it into the x86 specific file.


>
>> +static void hvf_raise_exception(CPUARMState *env, uint32_t excp,
>> +                                uint32_t syndrome)
>> +{
>> +    unsigned int new_el = 1;
>> +    unsigned int old_mode = pstate_read(env);
>> +    unsigned int new_mode = aarch64_pstate_mode(new_el, true);
>> +    target_ulong addr = env->cp15.vbar_el[new_el];
>> +
>> +    env->cp15.esr_el[new_el] = syndrome;
>> +    aarch64_save_sp(env, arm_current_el(env));
>> +    env->elr_el[new_el] = env->pc;
>> +    env->banked_spsr[aarch64_banked_spsr_index(new_el)] = old_mode;
>> +    pstate_write(env, PSTATE_DAIF | new_mode);
>> +    aarch64_restore_sp(env, new_el);
>> +    env->pc = addr;
>> +}
> KVM does "raise an exception" by calling arm_cpu_do_interrupt()
> to do the "set ESR_ELx, save SPSR, etc etc" work (see eg
> kvm_arm_handle_debug()". Does that not work here ?


It works like a charm. I mostly did the dance because I was under the
impression you wanted to avoid me calling into any TCG code. And to me
arm_cpu_do_interrupt() seemed like TCG code. I'm absolutely happy to
change it though. Leaving things to generic code is good IMHO :).


>
>> +
>> +static uint64_t hvf_sysreg_read(CPUState *cpu, uint32_t reg)
>> +{
>> +    ARMCPU *arm_cpu = ARM_CPU(cpu);
>> +    uint64_t val = 0;
>> +
>> +    switch (reg) {
>> +    case SYSREG_CNTPCT_EL0:
>> +        val = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) /
>> +              gt_cntfrq_period_ns(arm_cpu);
>> +        break;
> Does hvf handle the "EL0 access which should be denied because
> CNTKCTL_EL1.EL0PCTEN is 0" case for us, or should we have
> an access-check here ?


A quick test where I tried to access it in a VM in EL0 shows that either
the CPU or HVF already generates the trap. So no check needed.


>
>> +    case SYSREG_PMCCNTR_EL0:
>> +        val = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
> This is supposed to be a cycle counter, not a timestamp...


At 1GHz cycle frequency, what's the difference? Or are you concerned
about the lack of overflow and PMCR logic?


>
>> +        break;
>> +    default:
>> +        trace_hvf_unhandled_sysreg_read(reg,
>> +                                        (reg >> 20) & 0x3,
>> +                                        (reg >> 14) & 0x7,
>> +                                        (reg >> 10) & 0xf,
>> +                                        (reg >> 1) & 0xf,
>> +                                        (reg >> 17) & 0x7);
>> +        break;
>> +    }
>> +
>> +    return val;
>> +}
>> +
>> +static void hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
>> +{
>> +    switch (reg) {
>> +    case SYSREG_CNTPCT_EL0:
>> +        break;
> CNTPCT_EL0 is read-only (ie writes should fault) but this
> makes it writes-ignored, doesn't it ?


It does indeed, let me fix that.


>
>> +    default:
>> +        trace_hvf_unhandled_sysreg_write(reg,
>> +                                         (reg >> 20) & 0x3,
>> +                                         (reg >> 14) & 0x7,
>> +                                         (reg >> 10) & 0xf,
>> +                                         (reg >> 1) & 0xf,
>> +                                         (reg >> 17) & 0x7);
>> +        break;
>> +    }
>> +}
>> +    switch (ec) {
>> +    case EC_DATAABORT: {
>> +        bool isv = syndrome & ARM_EL_ISV;
>> +        bool iswrite = (syndrome >> 6) & 1;
>> +        bool s1ptw = (syndrome >> 7) & 1;
>> +        uint32_t sas = (syndrome >> 22) & 3;
>> +        uint32_t len = 1 << sas;
>> +        uint32_t srt = (syndrome >> 16) & 0x1f;
>> +        uint64_t val = 0;
>> +
>> +        trace_hvf_data_abort(env->pc, hvf_exit->exception.virtual_address,
>> +                             hvf_exit->exception.physical_address, isv,
>> +                             iswrite, s1ptw, len, srt);
>> +
>> +        assert(isv);
> This seems dubious -- won't we just crash if the guest does
> a data access to a device or to unmapped memory with an insn that
> doesn't set ISV ? With KVM we feed this back to the guest as an
> external data abort (see the KVM_EXIT_ARM_NISV handling).


Hm, I can't say I'm a big fan of that behavior: It makes enabling OSs
that are not properly adapted for non-ISV behavior harder, because
understanding in-guest exception flows is usually harder than looking at
undesired code paths from the outside. And I highly doubt you'll find a
properly functional guest environment that gets external data aborts on
non-enlightened MMIO access.

But if you feel strongly about it, I'm happy to imitate the behavior of
today's KVM_EXIT_ARM_NISV handler.


>
>> +
>> +        if (iswrite) {
>> +            val = hvf_get_reg(cpu, srt);
>> +            address_space_write(&address_space_memory,
>> +                                hvf_exit->exception.physical_address,
>> +                                MEMTXATTRS_UNSPECIFIED, &val, len);
>> +        } else {
>> +            address_space_read(&address_space_memory,
>> +                               hvf_exit->exception.physical_address,
>> +                               MEMTXATTRS_UNSPECIFIED, &val, len);
>> +            hvf_set_reg(cpu, srt, val);
>> +        }
>> +
>> +        advance_pc = true;
>> +        break;
>> +    }
>> +    case EC_SYSTEMREGISTERTRAP: {
>> +        bool isread = (syndrome >> 0) & 1;
>> +        uint32_t rt = (syndrome >> 5) & 0x1f;
>> +        uint32_t reg = syndrome & SYSREG_MASK;
>> +        uint64_t val = 0;
>> +
>> +        if (isread) {
>> +            val = hvf_sysreg_read(cpu, reg);
>> +            trace_hvf_sysreg_read(reg,
>> +                                  (reg >> 20) & 0x3,
>> +                                  (reg >> 14) & 0x7,
>> +                                  (reg >> 10) & 0xf,
>> +                                  (reg >> 1) & 0xf,
>> +                                  (reg >> 17) & 0x7,
>> +                                  val);
>> +            hvf_set_reg(cpu, rt, val);
>> +        } else {
>> +            val = hvf_get_reg(cpu, rt);
>> +            trace_hvf_sysreg_write(reg,
>> +                                   (reg >> 20) & 0x3,
>> +                                   (reg >> 14) & 0x7,
>> +                                   (reg >> 10) & 0xf,
>> +                                   (reg >> 1) & 0xf,
>> +                                   (reg >> 17) & 0x7,
>> +                                   val);
>> +            hvf_sysreg_write(cpu, reg, val);
>> +        }
> This needs support for "this really was a bogus system register
> access, feed the guest an exception".


Yup. Added.


Thanks!

Alex





* Re: [PATCH v8 19/19] hvf: arm: Handle Windows 10 SMC call
  2021-06-15  9:31   ` Peter Maydell
@ 2021-06-27 21:07     ` Alexander Graf
  0 siblings, 0 replies; 66+ messages in thread
From: Alexander Graf @ 2021-06-27 21:07 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Peter Collingbourne


On 15.06.21 11:31, Peter Maydell wrote:
> On Wed, 19 May 2021 at 21:23, Alexander Graf <agraf@csgraf.de> wrote:
>> Windows 10 calls an SMCCC call via SMC unconditionally on boot. It lives
>> in the trusted application call number space, but its purpose is unknown.
>>
>> In our current SMC implementation, we inject a UDEF for unknown SMC calls,
>> including this one. However, Windows breaks on boot when we do this. Instead,
>> let's return an error code.
>>
>> With this patch applied I can successfully boot the current Windows 10
>> Insider Preview in HVF.
>>
>> Signed-off-by: Alexander Graf <agraf@csgraf.de>
>>
>> ---
>>
>> v7 -> v8:
>>
>>   - fix checkpatch
>> ---
>>  target/arm/hvf/hvf.c | 4 ++++
>>  1 file changed, 4 insertions(+)
>>
>> diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
>> index 65c33e2a14..be670af578 100644
>> --- a/target/arm/hvf/hvf.c
>> +++ b/target/arm/hvf/hvf.c
>> @@ -931,6 +931,10 @@ int hvf_vcpu_exec(CPUState *cpu)
>>          cpu_synchronize_state(cpu);
>>          if (!hvf_handle_psci_call(cpu)) {
>>              advance_pc = true;
>> +        } else if (env->xregs[0] == QEMU_SMCCC_TC_WINDOWS10_BOOT) {
>> +            /* This special SMC is called by Windows 10 on boot. Return error */
>> +            env->xregs[0] = -1;
>> +            advance_pc = true;
>>          } else {
>>              trace_hvf_unknown_smc(env->xregs[0]);
>>              hvf_raise_exception(env, EXCP_UDEF, syn_uncategorized());
> Where can I find documentation on what this SMC call is and what
> it's supposed to do ?


It's 0xc3000001 which according to the SMCCC spec [1] means OR'ed values
of the following:

0x80000000 = Fast Call
0x40000000 = SMC64
0x03000000 = OEM Service Calls
0x00000001 = Function number 1

So, uh. I'm not sure how to answer the question above. I don't have
source level access to Windows to read what the call is supposed to do
:). But it's definitely calling something OEM specific that it really
shouldn't be calling.

Reading the SMCCC spec section 5.2, unknown SMCCC calls should return
-1. It advises against probing by just calling them, but does not
specify any other fault behavior than the -1 return (such as the #UDEF
we inject in TCG).
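
(So section 5.2-compliant handling for any unknown SMC would look
roughly like the following; our sketch, not the code from this series:)

    if (!hvf_handle_psci_call(cpu)) {
        trace_hvf_unknown_smc(env->xregs[0]);
        env->xregs[0] = -1;    /* SMCCC: unknown function identifier */
    }
    advance_pc = true;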


Alex

[1] https://developer.arm.com/documentation/den0028/latest




* Re: [PATCH v8 18/19] arm: Enable Windows 10 trusted SMCCC boot call
  2021-06-15 11:02   ` Peter Maydell
@ 2021-06-27 21:12     ` Alexander Graf
  0 siblings, 0 replies; 66+ messages in thread
From: Alexander Graf @ 2021-06-27 21:12 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Peter Collingbourne


On 15.06.21 13:02, Peter Maydell wrote:
> On Wed, 19 May 2021 at 21:23, Alexander Graf <agraf@csgraf.de> wrote:
>> Windows 10 calls an SMCCC call via SMC unconditionally on boot. It lives
>> in the trusted application call number space, but its purpose is unknown.
>>
>> In our current SMC implementation, we inject a UDEF for unknown SMC calls,
>> including this one. However, Windows breaks on boot when we do this. Instead,
>> let's return an error code.
>>
>> With this and -M virt,virtualization=on I can successfully boot the current
>> Windows 10 Insider Preview in TCG.
>
> Same comments apply here and for patch 19.
>
> Either we can:
>  * find a spec for whatever this SMC ABI is and implement it
>    consistently across TCG, KVM and HVF
>  * find whether we're misimplementing whatever the SMCCC spec says
>    should happen for unknown SMC calls, and fix that bug
>
> But we're not adding random hacky workarounds for specific guest OSes.


Let's move the conversation to 19/19 then. My take on this is that TCG
is misinterpreting the SMCCC spec.


Alex





* Re: [PATCH v8 15/19] hvf: arm: Implement -cpu host
  2021-06-15 10:56   ` Peter Maydell
@ 2021-09-12 20:23     ` Alexander Graf
  2021-09-13  8:28       ` Peter Maydell
  0 siblings, 1 reply; 66+ messages in thread
From: Alexander Graf @ 2021-09-12 20:23 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Peter Collingbourne


On 15.06.21 12:56, Peter Maydell wrote:
> On Wed, 19 May 2021 at 21:23, Alexander Graf <agraf@csgraf.de> wrote:
>> Now that we have working system register sync, we push more target CPU
>> properties into the virtual machine. That might be useful in some
>> situations, but is not the typical case that users want.
>>
>> So let's add a -cpu host option that allows them to explicitly pass all
>> CPU capabilities of their host CPU into the guest.
>>
>> Signed-off-by: Alexander Graf <agraf@csgraf.de>
>> Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
>>
>> ---
>>
>> v6 -> v7:
>>
>>   - Move function define to own header
>>   - Do not propagate SVE features for HVF
>>   - Remove stray whitespace change
>>   - Verify that EL0 and EL1 do not allow AArch32 mode
>>   - Only probe host CPU features once
>> +static void hvf_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
>> +{
>> +    ARMISARegisters host_isar;
> Can you zero-initialize this (with "= { }"), please? That way we
> know we have zeroes in the aarch32 ID fields rather than random junk later...
>
>> +    const struct isar_regs {
>> +        int reg;
>> +        uint64_t *val;
>> +    } regs[] = {
>> +        { HV_SYS_REG_ID_AA64PFR0_EL1, &host_isar.id_aa64pfr0 },
>> +        { HV_SYS_REG_ID_AA64PFR1_EL1, &host_isar.id_aa64pfr1 },
>> +        { HV_SYS_REG_ID_AA64DFR0_EL1, &host_isar.id_aa64dfr0 },
>> +        { HV_SYS_REG_ID_AA64DFR1_EL1, &host_isar.id_aa64dfr1 },
>> +        { HV_SYS_REG_ID_AA64ISAR0_EL1, &host_isar.id_aa64isar0 },
>> +        { HV_SYS_REG_ID_AA64ISAR1_EL1, &host_isar.id_aa64isar1 },
>> +        { HV_SYS_REG_ID_AA64MMFR0_EL1, &host_isar.id_aa64mmfr0 },
>> +        { HV_SYS_REG_ID_AA64MMFR1_EL1, &host_isar.id_aa64mmfr1 },
>> +        { HV_SYS_REG_ID_AA64MMFR2_EL1, &host_isar.id_aa64mmfr2 },
>> +    };
>> +    hv_vcpu_t fd;
>> +    hv_vcpu_exit_t *exit;
>> +    int i;
>> +
>> +    ahcf->dtb_compatible = "arm,arm-v8";
>> +    ahcf->features = (1ULL << ARM_FEATURE_V8) |
>> +                     (1ULL << ARM_FEATURE_NEON) |
>> +                     (1ULL << ARM_FEATURE_AARCH64) |
>> +                     (1ULL << ARM_FEATURE_PMU) |
>> +                     (1ULL << ARM_FEATURE_GENERIC_TIMER);
>> +
>> +    /* We set up a small vcpu to extract host registers */
>> +
>> +    assert_hvf_ok(hv_vcpu_create(&fd, &exit, NULL));
>> +    for (i = 0; i < ARRAY_SIZE(regs); i++) {
>> +        assert_hvf_ok(hv_vcpu_get_sys_reg(fd, regs[i].reg, regs[i].val));
>> +    }
>> +    assert_hvf_ok(hv_vcpu_get_sys_reg(fd, HV_SYS_REG_MIDR_EL1, &ahcf->midr));
>> +    assert_hvf_ok(hv_vcpu_destroy(fd));
>> +
>> +    ahcf->isar = host_isar;
>> +    ahcf->reset_sctlr = 0x00c50078;
> Why this value in particular? Could we just ask the scratch HVF CPU
> for the value of SCTLR_EL1 rather than hardcoding something?


The fresh scratch hvf CPU has 0 as SCTLR. But I'm happy to put an actual
M1 copy of it here.


>
>> +
>> +    /* Make sure we don't advertise AArch32 support for EL0/EL1 */
>> +    g_assert((host_isar.id_aa64pfr0 & 0xff) == 0x11);
> This shouldn't really be an assert, I think. error_report() something
> and return false, and then arm_cpu_realizefn() will fail, which should
> cause us to exit.


I don't follow. We're filling in the -cpu host CPU template here. There
is no error path anywhere we could take. Or are you suggesting we only
error on realize? I don't see any obvious way we could tell the
realize function that we don't want to expose AArch32 support for -cpu host.

This is a case that on today's systems can't happen - M1 does not
support AArch32 anywhere. So that assert could only ever hit if you run
macOS on non-Apple hardware (in which case I doubt hvf works as
intended) or if a new Apple CPU starts supporting AArch32 (again, very
unlikely).

So overall, I think the assert here is not too bad :)


Alex





* Re: [PATCH v8 16/19] hvf: arm: Implement PSCI handling
  2021-06-15 12:54   ` Peter Maydell
@ 2021-09-12 20:36     ` Alexander Graf
  2021-09-12 21:20       ` Richard Henderson
  0 siblings, 1 reply; 66+ messages in thread
From: Alexander Graf @ 2021-09-12 20:36 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Peter Collingbourne


On 15.06.21 14:54, Peter Maydell wrote:
> On Wed, 19 May 2021 at 21:23, Alexander Graf <agraf@csgraf.de> wrote:
>> We need to handle PSCI calls. Most of the TCG code works for us,
>> but we can simplify it to only handle aa64 mode and we need to
>> handle SUSPEND differently.
>>
>> This patch takes the TCG code as template and duplicates it in HVF.
>>
>> To tell the guest that we support PSCI 0.2 now, update the check in
>> arm_cpu_initfn() as well.
>>
>> Signed-off-by: Alexander Graf <agraf@csgraf.de>
>>
>> ---
>>
>> v6 -> v7:
>>
>>   - This patch integrates "arm: Set PSCI to 0.2 for HVF"
>>
>> v7 -> v8:
>>
>>   - Do not advance for HVC, PC is already updated by hvf
>>   - Fix checkpatch error
>
>> +static int hvf_psci_cpu_off(ARMCPU *arm_cpu)
>> +{
>> +    int32_t ret = 0;
>> +    ret = arm_set_cpu_off(arm_cpu->mp_affinity);
>> +    assert(ret == QEMU_ARM_POWERCTL_RET_SUCCESS);
>> +
>> +    return 0;
> If you're always returning 0 you might as well just make
> it return void.
>
>> +}
>> +
>> +static int hvf_handle_psci_call(CPUState *cpu)
> I think the callsites would be clearer if you made the function
> return true for "PSCI call handled", false for "not recognised,
> give the guest an UNDEF". Code like
>          if (hvf_handle_psci_call(cpu)) {
>              stuff;
>          }
>
> looks like the 'stuff' is for the "psci call handled" case,
> which at the moment it isn't.


This function merely follows standard C semantics along the lines of "0
means success, !0 is error". Isn't that what you would usually expect?


>
> Either way, a comment for this function describing what its
> return value semantics are would be useful.


Sure, I can add one :)


>
>> +    ARMCPU *arm_cpu = ARM_CPU(cpu);
>> +    CPUARMState *env = &arm_cpu->env;
>> +    uint64_t param[4] = {
>> +        env->xregs[0],
>> +        env->xregs[1],
>> +        env->xregs[2],
>> +        env->xregs[3]
>> +    };
>> +    uint64_t context_id, mpidr;
>> +    bool target_aarch64 = true;
>> +    CPUState *target_cpu_state;
>> +    ARMCPU *target_cpu;
>> +    target_ulong entry;
>> +    int target_el = 1;
>> +    int32_t ret = 0;
>> +
>> +    trace_hvf_psci_call(param[0], param[1], param[2], param[3],
>> +                        arm_cpu->mp_affinity);
>> +
>> +    switch (param[0]) {
>> +    case QEMU_PSCI_0_2_FN_PSCI_VERSION:
>> +        ret = QEMU_PSCI_0_2_RET_VERSION_0_2;
>> +        break;
>> +    case QEMU_PSCI_0_2_FN_MIGRATE_INFO_TYPE:
>> +        ret = QEMU_PSCI_0_2_RET_TOS_MIGRATION_NOT_REQUIRED; /* No trusted OS */
>> +        break;
>> +    case QEMU_PSCI_0_2_FN_AFFINITY_INFO:
>> +    case QEMU_PSCI_0_2_FN64_AFFINITY_INFO:
>> +        mpidr = param[1];
>> +
>> +        switch (param[2]) {
>> +        case 0:
>> +            target_cpu_state = arm_get_cpu_by_id(mpidr);
>> +            if (!target_cpu_state) {
>> +                ret = QEMU_PSCI_RET_INVALID_PARAMS;
>> +                break;
>> +            }
>> +            target_cpu = ARM_CPU(target_cpu_state);
>> +
>> +            ret = target_cpu->power_state;
>> +            break;
>> +        default:
>> +            /* Everything above affinity level 0 is always on. */
>> +            ret = 0;
>> +        }
>> +        break;
>> +    case QEMU_PSCI_0_2_FN_SYSTEM_RESET:
>> +        qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET);
>> +        /* QEMU reset and shutdown are async requests, but PSCI
>> +         * mandates that we never return from the reset/shutdown
>> +         * call, so power the CPU off now so it doesn't execute
>> +         * anything further.
>> +         */
>> +        return hvf_psci_cpu_off(arm_cpu);
>> +    case QEMU_PSCI_0_2_FN_SYSTEM_OFF:
>> +        qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
>> +        return hvf_psci_cpu_off(arm_cpu);
>> +    case QEMU_PSCI_0_1_FN_CPU_ON:
>> +    case QEMU_PSCI_0_2_FN_CPU_ON:
>> +    case QEMU_PSCI_0_2_FN64_CPU_ON:
>> +        mpidr = param[1];
>> +        entry = param[2];
>> +        context_id = param[3];
>> +        ret = arm_set_cpu_on(mpidr, entry, context_id,
>> +                             target_el, target_aarch64);
>> +        break;
>> +    case QEMU_PSCI_0_1_FN_CPU_OFF:
>> +    case QEMU_PSCI_0_2_FN_CPU_OFF:
>> +        return hvf_psci_cpu_off(arm_cpu);
>> +    case QEMU_PSCI_0_1_FN_CPU_SUSPEND:
>> +    case QEMU_PSCI_0_2_FN_CPU_SUSPEND:
>> +    case QEMU_PSCI_0_2_FN64_CPU_SUSPEND:
>> +        /* Affinity levels are not supported in QEMU */
>> +        if (param[1] & 0xfffe0000) {
>> +            ret = QEMU_PSCI_RET_INVALID_PARAMS;
>> +            break;
>> +        }
>> +        /* Powerdown is not supported, we always go into WFI */
>> +        env->xregs[0] = 0;
>> +        hvf_wfi(cpu);
>> +        break;
>> +    case QEMU_PSCI_0_1_FN_MIGRATE:
>> +    case QEMU_PSCI_0_2_FN_MIGRATE:
>> +        ret = QEMU_PSCI_RET_NOT_SUPPORTED;
>> +        break;
>> +    default:
>> +        return 1;
>> +    }
>> +
>> +    env->xregs[0] = ret;
>> +    return 0;
>> +}
>> +
>>  static uint64_t hvf_sysreg_read(CPUState *cpu, uint32_t reg)
>>  {
>>      ARMCPU *arm_cpu = ARM_CPU(cpu);
>> @@ -716,6 +823,8 @@ int hvf_vcpu_exec(CPUState *cpu)
>>      }
>>
>>      if (cpu->halted) {
>> +        /* On unhalt, we usually have CPU state changes. Prepare for them. */
>> +        cpu_synchronize_state(cpu);
>>          return EXCP_HLT;
>>      }
> This seems odd. If I understand the control flow correctly, this
> is neither:
>  (a) just before we're about to emulate a PSCI call so we need
> the current values of the registers from hvf
>  (b) when we've just changed the CPU registers because we made
> a PSCI call and we want to push them back to hvf
>  (c) when we're about to resume the CPU because it's gone from
> halted to not-halted, which might also be a good time to push
> register state back to hvf
>
> So what's it for ?


I agree, it's pretty superfluous. I'll remove it :)

Alex


>
> My expectation would be that we would ensure the state is in sync
> (ie that hvf has any local changes) before we call hv_cpu_run(),
> and that we would go the other way before we access any local
> register CPU struct state.
>
> thanks
> -- PMM



* Re: [PATCH v8 16/19] hvf: arm: Implement PSCI handling
  2021-09-12 20:36     ` Alexander Graf
@ 2021-09-12 21:20       ` Richard Henderson
  2021-09-12 21:37         ` Alexander Graf
  0 siblings, 1 reply; 66+ messages in thread
From: Richard Henderson @ 2021-09-12 21:20 UTC (permalink / raw)
  To: Alexander Graf, Peter Maydell
  Cc: Eduardo Habkost, Philippe Mathieu-Daudé,
	QEMU Developers, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Peter Collingbourne

On 9/12/21 1:36 PM, Alexander Graf wrote:
>> I think the callsites would be clearer if you made the function
>> return true for "PSCI call handled", false for "not recognised,
>> give the guest an UNDEF". Code like
>>           if (hvf_handle_psci_call(cpu)) {
>>               stuff;
>>           }
>>
>> looks like the 'stuff' is for the "psci call handled" case,
>> which at the moment it isn't.
> 
> 
> This function merely follows standard C semantics along the lines of "0
> means success, !0 is error". Isn't that what you would usually expect?

No, not really.  I expect stuff that returns error codes to return negative integers on 
failure.  I expect stuff that returns a boolean success/failure to return true on success.


r~



* Re: [PATCH v8 16/19] hvf: arm: Implement PSCI handling
  2021-09-12 21:20       ` Richard Henderson
@ 2021-09-12 21:37         ` Alexander Graf
  2021-09-12 21:40           ` Richard Henderson
  0 siblings, 1 reply; 66+ messages in thread
From: Alexander Graf @ 2021-09-12 21:37 UTC (permalink / raw)
  To: Richard Henderson, Peter Maydell
  Cc: Eduardo Habkost, Philippe Mathieu-Daudé,
	QEMU Developers, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Peter Collingbourne


On 12.09.21 23:20, Richard Henderson wrote:
> On 9/12/21 1:36 PM, Alexander Graf wrote:
>>> I think the callsites would be clearer if you made the function
>>> return true for "PSCI call handled", false for "not recognised,
>>> give the guest an UNDEF". Code like
>>>           if (hvf_handle_psci_call(cpu)) {
>>>               stuff;
>>>           }
>>>
>>> looks like the 'stuff' is for the "psci call handled" case,
>>> which at the moment it isn't.
>>
>>
>> This function merely follows standard C semantics along the lines of "0
>> means success, !0 is error". Isn't that what you would usually expect?
>
> No, not really.  I expect stuff that returns error codes to return
> negative integers on failure.  I expect stuff that returns a boolean
> success/failure to return true on success.


Fair, I'll change it to return -1 then. Thanks!


Alex





* Re: [PATCH v8 16/19] hvf: arm: Implement PSCI handling
  2021-09-12 21:37         ` Alexander Graf
@ 2021-09-12 21:40           ` Richard Henderson
  2021-09-13 10:06             ` Alexander Graf
  0 siblings, 1 reply; 66+ messages in thread
From: Richard Henderson @ 2021-09-12 21:40 UTC (permalink / raw)
  To: Alexander Graf, Peter Maydell
  Cc: Eduardo Habkost, Philippe Mathieu-Daudé,
	QEMU Developers, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Peter Collingbourne

On 9/12/21 2:37 PM, Alexander Graf wrote:
> 
> On 12.09.21 23:20, Richard Henderson wrote:
>> On 9/12/21 1:36 PM, Alexander Graf wrote:
>>>> I think the callsites would be clearer if you made the function
>>>> return true for "PSCI call handled", false for "not recognised,
>>>> give the guest an UNDEF". Code like
>>>>            if (hvf_handle_psci_call(cpu)) {
>>>>                stuff;
>>>>            }
>>>>
>>>> looks like the 'stuff' is for the "psci call handled" case,
>>>> which at the moment it isn't.
>>>
>>>
>>> This function merely follows standard C semantics along the lines of "0
>>> means success, !0 is error". Isn't that what you would usually expect?
>>
>> No, not really.  I expect stuff that returns error codes to return
>> negative integers on failure.  I expect stuff that returns a boolean
>> success/failure to return true on success.
> 
> 
> Fair, I'll change it to return -1 then. Thanks!

Not quite the point I was making.  If the only two return values are -1/0, then bool 
false/true is in fact more appropriate.


r~



* Re: [PATCH v8 15/19] hvf: arm: Implement -cpu host
  2021-09-12 20:23     ` Alexander Graf
@ 2021-09-13  8:28       ` Peter Maydell
  2021-09-13 10:46         ` Alexander Graf
  0 siblings, 1 reply; 66+ messages in thread
From: Peter Maydell @ 2021-09-13  8:28 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Peter Collingbourne

On Sun, 12 Sept 2021 at 21:23, Alexander Graf <agraf@csgraf.de> wrote:
>
>
> On 15.06.21 12:56, Peter Maydell wrote:
> > On Wed, 19 May 2021 at 21:23, Alexander Graf <agraf@csgraf.de> wrote:
> >> Now that we have working system register sync, we push more target CPU
> >> properties into the virtual machine. That might be useful in some
> >> situations, but is not the typical case that users want.
> >>
> >> So let's add a -cpu host option that allows them to explicitly pass all
> >> CPU capabilities of their host CPU into the guest.
> >>
> >> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> >> Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
> >>
> >> ---
> >>
> >> v6 -> v7:
> >>
> >>   - Move function define to own header
> >>   - Do not propagate SVE features for HVF
> >>   - Remove stray whitespace change
> >>   - Verify that EL0 and EL1 do not allow AArch32 mode
> >>   - Only probe host CPU features once
> >> +static void hvf_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
> >> +{
> >> +    ARMISARegisters host_isar;
> > Can you zero-initialize this (with "= { }"), please? That way we
> > know we have zeroes in the aarch32 ID fields rather than random junk later...
> >
> >> +    const struct isar_regs {
> >> +        int reg;
> >> +        uint64_t *val;
> >> +    } regs[] = {
> >> +        { HV_SYS_REG_ID_AA64PFR0_EL1, &host_isar.id_aa64pfr0 },
> >> +        { HV_SYS_REG_ID_AA64PFR1_EL1, &host_isar.id_aa64pfr1 },
> >> +        { HV_SYS_REG_ID_AA64DFR0_EL1, &host_isar.id_aa64dfr0 },
> >> +        { HV_SYS_REG_ID_AA64DFR1_EL1, &host_isar.id_aa64dfr1 },
> >> +        { HV_SYS_REG_ID_AA64ISAR0_EL1, &host_isar.id_aa64isar0 },
> >> +        { HV_SYS_REG_ID_AA64ISAR1_EL1, &host_isar.id_aa64isar1 },
> >> +        { HV_SYS_REG_ID_AA64MMFR0_EL1, &host_isar.id_aa64mmfr0 },
> >> +        { HV_SYS_REG_ID_AA64MMFR1_EL1, &host_isar.id_aa64mmfr1 },
> >> +        { HV_SYS_REG_ID_AA64MMFR2_EL1, &host_isar.id_aa64mmfr2 },
> >> +    };
> >> +    hv_vcpu_t fd;
> >> +    hv_vcpu_exit_t *exit;
> >> +    int i;
> >> +
> >> +    ahcf->dtb_compatible = "arm,arm-v8";
> >> +    ahcf->features = (1ULL << ARM_FEATURE_V8) |
> >> +                     (1ULL << ARM_FEATURE_NEON) |
> >> +                     (1ULL << ARM_FEATURE_AARCH64) |
> >> +                     (1ULL << ARM_FEATURE_PMU) |
> >> +                     (1ULL << ARM_FEATURE_GENERIC_TIMER);
> >> +
> >> +    /* We set up a small vcpu to extract host registers */
> >> +
> >> +    assert_hvf_ok(hv_vcpu_create(&fd, &exit, NULL));
> >> +    for (i = 0; i < ARRAY_SIZE(regs); i++) {
> >> +        assert_hvf_ok(hv_vcpu_get_sys_reg(fd, regs[i].reg, regs[i].val));
> >> +    }
> >> +    assert_hvf_ok(hv_vcpu_get_sys_reg(fd, HV_SYS_REG_MIDR_EL1, &ahcf->midr));
> >> +    assert_hvf_ok(hv_vcpu_destroy(fd));
> >> +
> >> +    ahcf->isar = host_isar;
> >> +    ahcf->reset_sctlr = 0x00c50078;
> > Why this value in particular? Could we just ask the scratch HVF CPU
> > for the value of SCTLR_EL1 rather than hardcoding something?
>
>
> The fresh scratch hvf CPU has 0 as SCTLR.

Yuck. That's pretty unhelpful of hvf, since it's not an
architecturally valid thing for a freshly reset EL1-only
CPU to have as its SCTLR (some bits are supposed to be RES1
or reset-to-1). In that case we do need to set this to a known
value here, I guess -- but we should have a comment explaining
why we do it and what bits we're setting.

> >> +    /* Make sure we don't advertise AArch32 support for EL0/EL1 */
> >> +    g_assert((host_isar.id_aa64pfr0 & 0xff) == 0x11);
> > This shouldn't really be an assert, I think. error_report() something
> > and return false, and then arm_cpu_realizefn() will fail, which should
> > cause us to exit.
>
>
> I don't follow. We're filling in the -cpu host CPU template here. There
> is no error path anywhere we could take.

Look at how the kvm_arm_get_host_cpu_features() error handling works:
it returns a bool. The caller (kvm_arm_set_cpu_features_from_host())
checks the return value, and if the function failed it sets
the cpu->host_cpu_probe_failed flag, which then results in realize
failing. (You'll want to update the arm_cpu_realizefn to allow hvf
as well as kvm for that error message.)
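
(Mapped onto hvf, that pattern would look roughly like this sketch; the
bool return on the probe function is the change being suggested:)

void hvf_arm_set_cpu_features_from_host(ARMCPU *cpu)
{
    if (!arm_host_cpu_features.dtb_compatible) {
        if (!hvf_enabled() ||
            !hvf_arm_get_host_cpu_features(&arm_host_cpu_features)) {
            /* Let arm_cpu_realizefn() report the failure */
            cpu->host_cpu_probe_failed = true;
            return;
        }
    }

    /* ...then copy the probed features into the CPU as before */
}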

> This is a case that on today's systems can't happen - M1 does not
> support AArch32 anywhere. So that assert could only ever hit if you run
> macOS on non-Apple hardware (in which case I doubt hvf works as
> intended) or if a new Apple CPU starts supporting AArch32 (again, very
> unlikely).
>
> So overall, I think the assert here is not too bad :)

Well, probably not, but you need the error handling path
anyway for the boring case of "oops, this syscall failed".

-- PMM



* Re: [PATCH v8 16/19] hvf: arm: Implement PSCI handling
  2021-09-12 21:40           ` Richard Henderson
@ 2021-09-13 10:06             ` Alexander Graf
  2021-09-13 10:30               ` Philippe Mathieu-Daudé
  0 siblings, 1 reply; 66+ messages in thread
From: Alexander Graf @ 2021-09-13 10:06 UTC (permalink / raw)
  To: Richard Henderson, Peter Maydell
  Cc: Eduardo Habkost, Philippe Mathieu-Daudé,
	QEMU Developers, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Peter Collingbourne


On 12.09.21 23:40, Richard Henderson wrote:
> On 9/12/21 2:37 PM, Alexander Graf wrote:
>>
>> On 12.09.21 23:20, Richard Henderson wrote:
>>> On 9/12/21 1:36 PM, Alexander Graf wrote:
>>>>> I think the callsites would be clearer if you made the function
>>>>> return true for "PSCI call handled", false for "not recognised,
>>>>> give the guest an UNDEF". Code like
>>>>>            if (hvf_handle_psci_call(cpu)) {
>>>>>                stuff;
>>>>>            }
>>>>>
>>>>> looks like the 'stuff' is for the "psci call handled" case,
>>>>> which at the moment it isn't.
>>>>
>>>>
>>>> This function merely follows standard C semantics along the lines
>>>> of "0
>>>> means success, !0 is error". Isn't that what you would usually expect?
>>>
>>> No, not really.  I expect stuff that returns error codes to return
>>> negative integers on failure.  I expect stuff that returns a boolean
>>> success/failure to return true on success.
>>
>>
>> Fair, I'll change it to return -1 then. Thanks!
>
> Not quite the point I was making.  If the only two return values are
> -1/0, then bool false/true is in fact more appropriate.


If the whole code base adheres to it, maybe. The big problem is using
true/false as return values for functions whose names don't make it
explicit (by containing "is" or using a gerund, for example). QEMU uses
a lot of 0 == success internally.

If you now add bools to a code base that already uses int returns a
lot, you always need to look up what a function actually returns when the
caller treats it as a bool (if, ?, assert, etc), making code review and
reasoning about code flows extremely hard. I've had one too many
occasions where I called an innocent-looking API and put the assert()
the wrong way around, for example.

I don't think we can solve this problem here, but IMHO the only sensible
way to get to good readability would be to have functions that return
success/failure return a libc-defined struct that indicates the error.
Then check for success explicitly at every call site. Overloading bool
or int for success/failure return is just bad :).


Alex




* Re: [PATCH v8 16/19] hvf: arm: Implement PSCI handling
  2021-09-13 10:06             ` Alexander Graf
@ 2021-09-13 10:30               ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 66+ messages in thread
From: Philippe Mathieu-Daudé @ 2021-09-13 10:30 UTC (permalink / raw)
  To: Alexander Graf, Richard Henderson, Peter Maydell, Markus Armbruster
  Cc: Eduardo Habkost, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Peter Collingbourne

+Markus

On 9/13/21 12:06 PM, Alexander Graf wrote:
> On 12.09.21 23:40, Richard Henderson wrote:
>> On 9/12/21 2:37 PM, Alexander Graf wrote:
>>>
>>> On 12.09.21 23:20, Richard Henderson wrote:
>>>> On 9/12/21 1:36 PM, Alexander Graf wrote:
>>>>>> I think the callsites would be clearer if you made the function
>>>>>> return true for "PSCI call handled", false for "not recognised,
>>>>>> give the guest an UNDEF". Code like
>>>>>>            if (hvf_handle_psci_call(cpu)) {
>>>>>>                stuff;
>>>>>>            }
>>>>>>
>>>>>> looks like the 'stuff' is for the "psci call handled" case,
>>>>>> which at the moment it isn't.
>>>>>
>>>>>
>>>>> This function merely follows standard C semantics along the lines
>>>>> of "0
>>>>> means success, !0 is error". Isn't that what you would usually expect?
>>>>
>>>> No, not really.  I expect stuff that returns error codes to return
>>>> negative integers on failure.  I expect stuff that returns a boolean
>>>> success/failure to return true on success.
>>>
>>>
>>> Fair, I'll change it to return -1 then. Thanks!
>>
>> Not quite the point I was making.  If the only two return values are
>> -1/0, then bool false/true is in fact more appropriate.
> 
> 
> If the whole code base adheres to it, maybe. The big problem with using
> true/false as return values for functions that don't make it very
> explicit (have an is in their name or use gerund for example). QEMU uses
> a lot of 0 == success internally.
> 
> If you now add bools to the a code base that already uses int returns a
> lot, you always need to look up what a function actually returns if the
> caller treats it as bool (if, ?, assert, etc), making code review and
> reasoning about code flows extremely hard. I've had one too many
> occasions where I called an innocently looking API and put the assert()
> the wrong way around for example.
> 
> I don't think we can solve this problem here, but IMHO the only sensible
> way to get to good readability would be to have functions that return
> success/failure return a libc defined struct that indicates the error.
> Then check for success explicitly on every caller site. Overloading bool
> or int for success/failure return is just bad :).

IIUC, what we are trying to do lately is: if the caller only expects
success/failure, return a boolean, with the error information in an
Error* for eventual propagation. Only return a libc errno if the
caller has to handle different failures differently.
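
A minimal sketch of that convention (the probe function is a made-up
example, not code from this series; error_setg() and
error_report_err() are the usual QEMU Error helpers):

    static bool hvf_vcpu_probe(Error **errp)
    {
        hv_vcpu_t fd;
        hv_vcpu_exit_t *exit;

        if (hv_vcpu_create(&fd, &exit, NULL) != HV_SUCCESS) {
            error_setg(errp, "hvf: hv_vcpu_create() failed");
            return false;   /* caller only needs success/failure */
        }
        hv_vcpu_destroy(fd);
        return true;        /* details travel in the Error*, not an int */
    }

    /* At the call site the intent then reads unambiguously: */
    Error *err = NULL;
    if (!hvf_vcpu_probe(&err)) {
        error_report_err(err);
    }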



^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v8 15/19] hvf: arm: Implement -cpu host
  2021-09-13  8:28       ` Peter Maydell
@ 2021-09-13 10:46         ` Alexander Graf
  2021-09-13 10:52           ` Peter Maydell
  0 siblings, 1 reply; 66+ messages in thread
From: Alexander Graf @ 2021-09-13 10:46 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Eduardo Habkost, Peter Collingbourne, Richard Henderson,
	QEMU Developers, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Philippe Mathieu-Daudé


On 13.09.21 10:28, Peter Maydell wrote:
> On Sun, 12 Sept 2021 at 21:23, Alexander Graf <agraf@csgraf.de> wrote:
>>
>> On 15.06.21 12:56, Peter Maydell wrote:
>>> On Wed, 19 May 2021 at 21:23, Alexander Graf <agraf@csgraf.de> wrote:
>>>> Now that we have working system register sync, we push more target CPU
>>>> properties into the virtual machine. That might be useful in some
>>>> situations, but is not the typical case that users want.
>>>>
>>>> So let's add a -cpu host option that allows them to explicitly pass all
>>>> CPU capabilities of their host CPU into the guest.
>>>>
>>>> Signed-off-by: Alexander Graf <agraf@csgraf.de>
>>>> Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
>>>>
>>>> ---
>>>>
>>>> v6 -> v7:
>>>>
>>>>   - Move function define to own header
>>>>   - Do not propagate SVE features for HVF
>>>>   - Remove stray whitespace change
>>>>   - Verify that EL0 and EL1 do not allow AArch32 mode
>>>>   - Only probe host CPU features once
>>>> +static void hvf_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
>>>> +{
>>>> +    ARMISARegisters host_isar;
>>> Can you zero-initialize this (with "= { }"), please? That way we
>>> know we have zeroes in the aarch32 ID fields rather than random junk later...
>>>
>>>> +    const struct isar_regs {
>>>> +        int reg;
>>>> +        uint64_t *val;
>>>> +    } regs[] = {
>>>> +        { HV_SYS_REG_ID_AA64PFR0_EL1, &host_isar.id_aa64pfr0 },
>>>> +        { HV_SYS_REG_ID_AA64PFR1_EL1, &host_isar.id_aa64pfr1 },
>>>> +        { HV_SYS_REG_ID_AA64DFR0_EL1, &host_isar.id_aa64dfr0 },
>>>> +        { HV_SYS_REG_ID_AA64DFR1_EL1, &host_isar.id_aa64dfr1 },
>>>> +        { HV_SYS_REG_ID_AA64ISAR0_EL1, &host_isar.id_aa64isar0 },
>>>> +        { HV_SYS_REG_ID_AA64ISAR1_EL1, &host_isar.id_aa64isar1 },
>>>> +        { HV_SYS_REG_ID_AA64MMFR0_EL1, &host_isar.id_aa64mmfr0 },
>>>> +        { HV_SYS_REG_ID_AA64MMFR1_EL1, &host_isar.id_aa64mmfr1 },
>>>> +        { HV_SYS_REG_ID_AA64MMFR2_EL1, &host_isar.id_aa64mmfr2 },
>>>> +    };
>>>> +    hv_vcpu_t fd;
>>>> +    hv_vcpu_exit_t *exit;
>>>> +    int i;
>>>> +
>>>> +    ahcf->dtb_compatible = "arm,arm-v8";
>>>> +    ahcf->features = (1ULL << ARM_FEATURE_V8) |
>>>> +                     (1ULL << ARM_FEATURE_NEON) |
>>>> +                     (1ULL << ARM_FEATURE_AARCH64) |
>>>> +                     (1ULL << ARM_FEATURE_PMU) |
>>>> +                     (1ULL << ARM_FEATURE_GENERIC_TIMER);
>>>> +
>>>> +    /* We set up a small vcpu to extract host registers */
>>>> +
>>>> +    assert_hvf_ok(hv_vcpu_create(&fd, &exit, NULL));
>>>> +    for (i = 0; i < ARRAY_SIZE(regs); i++) {
>>>> +        assert_hvf_ok(hv_vcpu_get_sys_reg(fd, regs[i].reg, regs[i].val));
>>>> +    }
>>>> +    assert_hvf_ok(hv_vcpu_get_sys_reg(fd, HV_SYS_REG_MIDR_EL1, &ahcf->midr));
>>>> +    assert_hvf_ok(hv_vcpu_destroy(fd));
>>>> +
>>>> +    ahcf->isar = host_isar;
>>>> +    ahcf->reset_sctlr = 0x00c50078;
>>> Why this value in particular? Could we just ask the scratch HVF CPU
>>> for the value of SCTLR_EL1 rather than hardcoding something?
>>
>> The fresh scratch hvf CPU has 0 as SCTLR.
> Yuck. That's pretty unhelpful of hvf, since it's not an
> architecturally valid thing for a freshly reset EL1-only
> CPU to have as its SCTLR (some bits are supposed to be RES1
> or reset-to-1). In that case we do need to set this to a known
> value here, I guess -- but we should have a comment explaining
> why we do it and what bits we're setting.


Ok :)


>
>>>> +    /* Make sure we don't advertise AArch32 support for EL0/EL1 */
>>>> +    g_assert((host_isar.id_aa64pfr0 & 0xff) == 0x11);
>>> This shouldn't really be an assert, I think. error_report() something
>>> and return false, and then arm_cpu_realizefn() will fail, which should
>>> cause us to exit.
>>
>> I don't follow. We're filling in the -cpu host CPU template here. There
>> is no error path anywhere we could take.
> Look at how the kvm_arm_get_host_cpu_features() error handling works:
> it returns a bool. The caller (kvm_arm_set_cpu_features_from_host())
> checks the return value, and if the function failed it sets
> the cpu->host_cpu_probe_failed flag, which then results in realize
> failing. (You'll want to update the arm_cpu_realizefn to allow hvf
> as well as kvm for that error message.)


Sure, happy to adjust accordingly :)


>
>> This is a case that on today's systems can't happen - M1 does not
>> support AArch32 anywhere. So that assert could only ever hit if you run
>> macOS on non-Apple hardware (in which case I doubt hvf works as
>> intended) or if a new Apple CPU starts supporting AArch32 (again, very
>> unlikely).
>>
>> So overall, I think the assert here is not too bad :)
> Well, probably not, but you need the error handling path
> anyway for the boring case of "oops, this syscall failed".


Why? You only get to this code path if you already selected -accel hvf.
If even a simple "create scratch vcpu" syscall fails, then a pretty
failure message when you define -cpu host is the last thing you care
about: any CPU creation would fail.

Alex
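
As a reference for how that would look here, a hedged sketch condensed
from the patch hunk quoted above (the ID register loop is elided; the
bool return and the host_cpu_probe_failed caller follow the
kvm_arm_get_host_cpu_features() model Peter describes):

    static bool hvf_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
    {
        ARMISARegisters host_isar = { };  /* zeroes aarch32 ID fields */
        hv_vcpu_t fd;
        hv_vcpu_exit_t *exit;

        if (hv_vcpu_create(&fd, &exit, NULL) != HV_SUCCESS) {
            return false;  /* report upwards instead of asserting */
        }
        /* ... read the ID registers as in the quoted patch, checking
         * each hv_vcpu_get_sys_reg() return value the same way ... */
        hv_vcpu_destroy(fd);

        ahcf->isar = host_isar;
        /* A fresh hvf scratch vCPU reports SCTLR_EL1 == 0, which is
         * not an architecturally valid reset value, so fall back to a
         * known-good default instead of reading it from the vCPU. */
        ahcf->reset_sctlr = 0x00c50078;

        /* No AArch32 at EL0/EL1: fail the probe, don't assert. */
        return (host_isar.id_aa64pfr0 & 0xff) == 0x11;
    }

    /* Caller, modeled on kvm_arm_set_cpu_features_from_host(): */
    if (!hvf_arm_get_host_cpu_features(&arm_host_cpu_features)) {
        cpu->host_cpu_probe_failed = true;  /* realize then fails */
        return;
    }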




^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v8 15/19] hvf: arm: Implement -cpu host
  2021-09-13 10:46         ` Alexander Graf
@ 2021-09-13 10:52           ` Peter Maydell
  2021-09-13 11:46             ` Alexander Graf
  0 siblings, 1 reply; 66+ messages in thread
From: Peter Maydell @ 2021-09-13 10:52 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Eduardo Habkost, Peter Collingbourne, Richard Henderson,
	QEMU Developers, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Philippe Mathieu-Daudé

On Mon, 13 Sept 2021 at 11:46, Alexander Graf <agraf@csgraf.de> wrote:
> Why? You only get to this code path if you already selected -accel hvf.
> If even a simple "create scratch vcpu" syscall fails, then a pretty
> failure message when you define -cpu host is the last thing you care
> about: any CPU creation would fail.

General design principle -- low level functions should report
errors upwards, not barf and exit.

-- PMM


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v8 15/19] hvf: arm: Implement -cpu host
  2021-09-13 10:52           ` Peter Maydell
@ 2021-09-13 11:46             ` Alexander Graf
  2021-09-13 11:48               ` Peter Maydell
  0 siblings, 1 reply; 66+ messages in thread
From: Alexander Graf @ 2021-09-13 11:46 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Peter Collingbourne


On 13.09.21 12:52, Peter Maydell wrote:
> On Mon, 13 Sept 2021 at 11:46, Alexander Graf <agraf@csgraf.de> wrote:
>> Why? You only get to this code path if you already selected -accel hvf.
>> If even a simple "create scratch vcpu" syscall fails, then a pretty
>> failure message when you define -cpu host is the last thing you care
>> about: any CPU creation would fail.
> General design principle -- low level functions should report
> errors upwards, not barf and exit.


Usually I would agree with you, but here it really does not make sense
and would make the code significantly harder to read.


Alex




^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v8 15/19] hvf: arm: Implement -cpu host
  2021-09-13 11:46             ` Alexander Graf
@ 2021-09-13 11:48               ` Peter Maydell
  2021-09-13 11:55                 ` Alexander Graf
  0 siblings, 1 reply; 66+ messages in thread
From: Peter Maydell @ 2021-09-13 11:48 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Eduardo Habkost, Philippe Mathieu-Daudé,
	Richard Henderson, QEMU Developers, Cameron Esfahani,
	Roman Bolshakov, qemu-arm, Frank Yang, Paolo Bonzini,
	Peter Collingbourne

On Mon, 13 Sept 2021 at 12:46, Alexander Graf <agraf@csgraf.de> wrote:
>
>
> On 13.09.21 12:52, Peter Maydell wrote:
> > On Mon, 13 Sept 2021 at 11:46, Alexander Graf <agraf@csgraf.de> wrote:
> >> Why? You only get to this code path if you already selected -accel hvf.
> >> If even a simple "create scratch vcpu" syscall fails, then a pretty
> >> failure message when you define -cpu host is the last thing you care
> >> about: any CPU creation would fail.
> > General design principle -- low level functions should report
> > errors upwards, not barf and exit.
>
>
> Usually I would agree with you, but here it really does not make sense
> and would make the code significantly harder to read.

It's an unnecessary difference from how we've structured the KVM code.
I don't like those. Every time you put one into the code you write, you
can be fairly sure I'm going to question it during review... I want to
be able to look at the hvf code and say "ah, yes, this is just the hvf
version of the kvm code we already have".

-- PMM


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v8 15/19] hvf: arm: Implement -cpu host
  2021-09-13 11:48               ` Peter Maydell
@ 2021-09-13 11:55                 ` Alexander Graf
  0 siblings, 0 replies; 66+ messages in thread
From: Alexander Graf @ 2021-09-13 11:55 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Eduardo Habkost, Peter Collingbourne, Richard Henderson,
	QEMU Developers, Cameron Esfahani, Roman Bolshakov, qemu-arm,
	Frank Yang, Paolo Bonzini, Philippe Mathieu-Daudé


On 13.09.21 13:48, Peter Maydell wrote:
> On Mon, 13 Sept 2021 at 12:46, Alexander Graf <agraf@csgraf.de> wrote:
>>
>> On 13.09.21 12:52, Peter Maydell wrote:
>>> On Mon, 13 Sept 2021 at 11:46, Alexander Graf <agraf@csgraf.de> wrote:
>>>> Why? You only get to this code path if you already selected -accel hvf.
>>>> If even a simple "create scratch vcpu" syscall fails, then a pretty
>>>> failure message when you define -cpu host is the last thing you care
>>>> about: any CPU creation would fail.
>>> General design principle -- low level functions should report
>>> errors upwards, not barf and exit.
>>
>> Usually I would agree with you, but here it really does not make sense
>> and would make the code significantly harder to read.
> It's an unnecessary difference from how we've structured the KVM code.
> I don't like those. Every time you put one into the code you write, you
> can be fairly sure I'm going to question it during review... I want to
> be able to look at the hvf code and say "ah, yes, this is just the hvf
> version of the kvm code we already have".


I'll follow the KVM pattern then ...


Alex




^ permalink raw reply	[flat|nested] 66+ messages in thread

end of thread, other threads:[~2021-09-13 11:56 UTC | newest]

Thread overview: 66+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-05-19 20:22 [PATCH v8 00/19] hvf: Implement Apple Silicon Support Alexander Graf
2021-05-19 20:22 ` [PATCH v8 01/19] hvf: Move assert_hvf_ok() into common directory Alexander Graf
2021-05-27 10:00   ` Sergio Lopez
2021-05-19 20:22 ` [PATCH v8 02/19] hvf: Move vcpu thread functions " Alexander Graf
2021-05-27 10:01   ` Sergio Lopez
2021-05-19 20:22 ` [PATCH v8 03/19] hvf: Move cpu " Alexander Graf
2021-05-27 10:02   ` Sergio Lopez
2021-05-19 20:22 ` [PATCH v8 04/19] hvf: Move hvf internal definitions into common header Alexander Graf
2021-05-27 10:04   ` Sergio Lopez
2021-05-19 20:22 ` [PATCH v8 05/19] hvf: Make hvf_set_phys_mem() static Alexander Graf
2021-05-27 10:06   ` Sergio Lopez
2021-05-19 20:22 ` [PATCH v8 06/19] hvf: Remove use of hv_uvaddr_t and hv_gpaddr_t Alexander Graf
2021-05-27 10:07   ` Sergio Lopez
2021-05-19 20:22 ` [PATCH v8 07/19] hvf: Split out common code on vcpu init and destroy Alexander Graf
2021-05-27 10:09   ` Sergio Lopez
2021-05-19 20:22 ` [PATCH v8 08/19] hvf: Use cpu_synchronize_state() Alexander Graf
2021-05-27 10:15   ` Sergio Lopez
2021-05-19 20:22 ` [PATCH v8 09/19] hvf: Make synchronize functions static Alexander Graf
2021-05-27 10:15   ` Sergio Lopez
2021-05-19 20:22 ` [PATCH v8 10/19] hvf: Remove hvf-accel-ops.h Alexander Graf
2021-05-27 10:16   ` Sergio Lopez
2021-05-19 20:22 ` [PATCH v8 11/19] hvf: Introduce hvf vcpu struct Alexander Graf
2021-05-27 10:18   ` Sergio Lopez
2021-05-19 20:22 ` [PATCH v8 12/19] hvf: Simplify post reset/init/loadvm hooks Alexander Graf
2021-05-27 10:20   ` Sergio Lopez
2021-05-19 20:22 ` [PATCH v8 13/19] hvf: Add Apple Silicon support Alexander Graf
2021-05-27 10:55   ` Sergio Lopez
2021-06-15 10:21   ` Peter Maydell
2021-06-27 20:40     ` Alexander Graf
2021-05-19 20:22 ` [PATCH v8 14/19] arm/hvf: Add a WFI handler Alexander Graf
2021-05-27 10:53   ` Sergio Lopez
2021-06-15 10:38   ` Peter Maydell
2021-05-19 20:22 ` [PATCH v8 15/19] hvf: arm: Implement -cpu host Alexander Graf
2021-05-27 11:18   ` Sergio Lopez
2021-06-15 10:56   ` Peter Maydell
2021-09-12 20:23     ` Alexander Graf
2021-09-13  8:28       ` Peter Maydell
2021-09-13 10:46         ` Alexander Graf
2021-09-13 10:52           ` Peter Maydell
2021-09-13 11:46             ` Alexander Graf
2021-09-13 11:48               ` Peter Maydell
2021-09-13 11:55                 ` Alexander Graf
2021-05-19 20:22 ` [PATCH v8 16/19] hvf: arm: Implement PSCI handling Alexander Graf
2021-05-27 11:20   ` Sergio Lopez
2021-06-15 12:54   ` Peter Maydell
2021-09-12 20:36     ` Alexander Graf
2021-09-12 21:20       ` Richard Henderson
2021-09-12 21:37         ` Alexander Graf
2021-09-12 21:40           ` Richard Henderson
2021-09-13 10:06             ` Alexander Graf
2021-09-13 10:30               ` Philippe Mathieu-Daudé
2021-05-19 20:22 ` [PATCH v8 17/19] arm: Add Hypervisor.framework build target Alexander Graf
2021-05-27 11:20   ` Sergio Lopez
2021-06-15 10:59   ` Peter Maydell
2021-05-19 20:22 ` [PATCH v8 18/19] arm: Enable Windows 10 trusted SMCCC boot call Alexander Graf
2021-05-27 11:21   ` Sergio Lopez
2021-06-15 11:02   ` Peter Maydell
2021-06-27 21:12     ` Alexander Graf
2021-05-19 20:22 ` [PATCH v8 19/19] hvf: arm: Handle Windows 10 SMC call Alexander Graf
2021-05-27 11:22   ` Sergio Lopez
2021-06-15  9:31   ` Peter Maydell
2021-06-27 21:07     ` Alexander Graf
2021-05-19 20:45 ` [PATCH v8 00/19] hvf: Implement Apple Silicon Support no-reply
2021-06-03 13:43 ` Peter Maydell
2021-06-03 13:53   ` Alexander Graf
2021-06-15 12:54   ` Peter Maydell
