From: David Woodhouse <dwmw2@infradead.org>
To: qemu-devel@nongnu.org
Cc: "Paolo Bonzini" <pbonzini@redhat.com>,
	"Paul Durrant" <paul@xen.org>,
	"Joao Martins" <joao.m.martins@oracle.com>,
	"Ankur Arora" <ankur.a.arora@oracle.com>,
	"Philippe Mathieu-Daudé" <philmd@linaro.org>,
	"Thomas Huth" <thuth@redhat.com>,
	"Alex Bennée" <alex.bennee@linaro.org>,
	"Juan Quintela" <quintela@redhat.com>,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
	"Claudio Fontana" <cfontana@suse.de>,
	"Julien Grall" <julien@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Marcel Apfelbaum" <marcel.apfelbaum@gmail.com>,
	armbru@redhat.com
Subject: [PATCH v7 39/51] hw/xen: Support HVM_PARAM_CALLBACK_TYPE_GSI callback
Date: Mon, 16 Jan 2023 21:57:53 +0000
Message-ID: <20230116215805.1123514-40-dwmw2@infradead.org>
In-Reply-To: <20230116215805.1123514-1-dwmw2@infradead.org>

From: David Woodhouse <dwmw@amazon.co.uk>

The GSI callback (and later PCI_INTX) is a level-triggered interrupt. It
is asserted when an event channel is delivered to vCPU0, and is supposed
to be cleared when the vcpu_info->evtchn_upcall_pending field for vCPU0
is cleared again.

Thankfully, Xen does *not* assert the GSI if the guest sets its own
evtchn_upcall_pending field; we only need to assert the GSI when we
have delivered an event for ourselves. So that's the easy part, kind of.
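
In essence, the level of the callback GSI just mirrors vCPU0's shared
flag; a sketch of the rule that gsi_assert_bh() in the patch below
evaluates:

    /* Assert the GSI iff an event we delivered is still pending. */
    struct vcpu_info *vi = kvm_xen_get_vcpu_info_hva(0);
    int level = vi && vi->evtchn_upcall_pending;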

There's a slight complexity in that we need to hold the BQL before we
can call qemu_set_irq(), and we definitely can't do that while holding
our own port_lock (because we'll need to take that from the qemu-side
functions that the PV backend drivers will call). So if we end up
wanting to set the IRQ in a context where we *don't* already hold the
BQL, defer to a BH.
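
The resulting shape of xen_evtchn_set_callback_level() below is roughly:

    if (!qemu_mutex_iothread_locked()) {
        /* We may already hold s->port_lock, so we can't take the BQL. */
        qemu_bh_schedule(s->gsi_bh);
        return;
    }
    /* BQL is held; safe to drive the IRQ line directly. */
    qemu_set_irq(s->gsis[param], level);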

However, we *do* need to poll for the evtchn_upcall_pending flag being
cleared. In an ideal world we would poll that when the EOI happens on
the PIC/IOAPIC. That's how it works in the kernel with the VFIO eventfd
pairs — one is used to trigger the interrupt, and the other works in the
other direction to 'resample' on EOI, and trigger the first eventfd
again if the line is still active.

However, QEMU doesn't seem to do that. Even VFIO level interrupts seem
to be supported by temporarily unmapping the device's BARs from the
guest when an interrupt happens, then trapping *all* MMIO to the device
and sending the 'resample' event on *every* MMIO access until the IRQ
is cleared! Maybe in future we'll plumb the 'resample' concept through
QEMU's irq framework, but for now we'll do what Xen itself does: just
check the flag on every vmexit if the upcall GSI is known to be
asserted.
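
Concretely, the check added to kvm_arch_handle_exit() below amounts to:

    /* Deassert the GSI once the guest clears evtchn_upcall_pending. */
    if (cpu->env.xen_callback_asserted) {
        kvm_xen_maybe_deassert_callback(cs);
    }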

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 hw/i386/kvm/xen_evtchn.c  | 97 +++++++++++++++++++++++++++++++++++++++
 hw/i386/kvm/xen_evtchn.h  |  4 ++
 hw/i386/pc.c              |  6 +++
 include/sysemu/kvm_xen.h  |  1 +
 target/i386/cpu.h         |  1 +
 target/i386/kvm/kvm.c     | 13 ++++++
 target/i386/kvm/xen-emu.c | 32 +++++++++++++
 target/i386/kvm/xen-emu.h |  1 +
 8 files changed, 155 insertions(+)

diff --git a/hw/i386/kvm/xen_evtchn.c b/hw/i386/kvm/xen_evtchn.c
index a73db5d2bc..e2ecee9a6f 100644
--- a/hw/i386/kvm/xen_evtchn.c
+++ b/hw/i386/kvm/xen_evtchn.c
@@ -26,6 +26,8 @@
 
 #include "hw/sysbus.h"
 #include "hw/xen/xen.h"
+#include "hw/i386/x86.h"
+#include "hw/irq.h"
 
 #include "xen_evtchn.h"
 #include "xen_overlay.h"
@@ -99,9 +101,12 @@ struct XenEvtchnState {
     uint64_t callback_param;
     bool evtchn_in_kernel;
 
+    QEMUBH *gsi_bh;
+
     QemuMutex port_lock;
     uint32_t nr_ports;
     XenEvtchnPort port_table[EVTCHN_2L_NR_CHANNELS];
+    qemu_irq gsis[GSI_NUM_PINS];
 };
 
 struct XenEvtchnState *xen_evtchn_singleton;
@@ -166,13 +171,42 @@ static const TypeInfo xen_evtchn_info = {
     .class_init    = xen_evtchn_class_init,
 };
 
+static void gsi_assert_bh(void *opaque)
+{
+    struct vcpu_info *vi = kvm_xen_get_vcpu_info_hva(0);
+    if (vi) {
+        xen_evtchn_set_callback_level(!!vi->evtchn_upcall_pending);
+    }
+}
+
 void xen_evtchn_create(void)
 {
     XenEvtchnState *s = XEN_EVTCHN(sysbus_create_simple(TYPE_XEN_EVTCHN,
                                                         -1, NULL));
+    int i;
+
     xen_evtchn_singleton = s;
 
     qemu_mutex_init(&s->port_lock);
+    s->gsi_bh = aio_bh_new(qemu_get_aio_context(), gsi_assert_bh, s);
+
+    for (i = 0; i < GSI_NUM_PINS; i++) {
+        sysbus_init_irq(SYS_BUS_DEVICE(s), &s->gsis[i]);
+    }
+}
+
+void xen_evtchn_connect_gsis(qemu_irq *system_gsis)
+{
+    XenEvtchnState *s = xen_evtchn_singleton;
+    int i;
+
+    if (!s) {
+        return;
+    }
+
+    for (i = 0; i < GSI_NUM_PINS; i++) {
+        sysbus_connect_irq(SYS_BUS_DEVICE(s), i, system_gsis[i]);
+    }
 }
 
 static void xen_evtchn_register_types(void)
@@ -182,6 +216,64 @@ static void xen_evtchn_register_types(void)
 
 type_init(xen_evtchn_register_types)
 
+void xen_evtchn_set_callback_level(int level)
+{
+    XenEvtchnState *s = xen_evtchn_singleton;
+    uint32_t param;
+
+    if (!s) {
+        return;
+    }
+
+    /*
+     * We get to this function in a number of ways:
+     *
+     *  • From I/O context, via PV backend drivers sending a notification to
+     *    the guest.
+     *
+     *  • From guest vCPU context, via loopback interdomain event channels
+     *    (or theoretically even IPIs, but guests don't use those with GSI
+     *    delivery because that's pointless. We don't want a malicious guest
+     *    to be able to trigger a deadlock, though, so we can't rule it out.)
+     *
+     *  • From guest vCPU context when the HVM_PARAM_CALLBACK_IRQ is being
+     *    configured.
+     *
+     *  • From guest vCPU context in the KVM exit handler, if the upcall
+     *    pending flag has been cleared and the GSI needs to be deasserted.
+     *
+     *  • Maybe in future, in an interrupt ack/eoi notifier when the GSI has
+     *    been acked in the irqchip.
+     *
+     * Whichever context we come from, if we aren't already holding the BQL
+     * then we can't take it now, as we may already hold s->port_lock. So
+     * trigger the BH to set the IRQ for us instead of doing it immediately.
+     *
+     * In the HVM_PARAM_CALLBACK_IRQ and KVM exit handler cases, the caller
+     * will deliberately take the BQL because they want the change to take
+     * effect immediately. That just leaves interdomain loopback as the case
+     * which uses the BH.
+     */
+    if (!qemu_mutex_iothread_locked()) {
+        qemu_bh_schedule(s->gsi_bh);
+        return;
+    }
+
+    param = (uint32_t)s->callback_param;
+
+    switch (s->callback_param >> CALLBACK_VIA_TYPE_SHIFT) {
+    case HVM_PARAM_CALLBACK_TYPE_GSI:
+        if (param < GSI_NUM_PINS) {
+            qemu_set_irq(s->gsis[param], level);
+            if (level) {
+                /* Ensure the vCPU polls for deassertion */
+                kvm_xen_set_callback_asserted();
+            }
+        }
+        break;
+    }
+}
+
 int xen_evtchn_set_callback_param(uint64_t param)
 {
     XenEvtchnState *s = xen_evtchn_singleton;
@@ -207,6 +299,11 @@ int xen_evtchn_set_callback_param(uint64_t param)
         }
         break;
     }
+
+    case HVM_PARAM_CALLBACK_TYPE_GSI:
+        ret = 0;
+        break;
+
     default:
         ret = -ENOSYS;
         break;
diff --git a/hw/i386/kvm/xen_evtchn.h b/hw/i386/kvm/xen_evtchn.h
index 670f8b3f7d..1f9ffc3f94 100644
--- a/hw/i386/kvm/xen_evtchn.h
+++ b/hw/i386/kvm/xen_evtchn.h
@@ -12,9 +12,13 @@
 #ifndef QEMU_XEN_EVTCHN_H
 #define QEMU_XEN_EVTCHN_H
 
+#include "hw/sysbus.h"
+
 void xen_evtchn_create(void);
 int xen_evtchn_soft_reset(void);
 int xen_evtchn_set_callback_param(uint64_t param);
+void xen_evtchn_connect_gsis(qemu_irq *system_gsis);
+void xen_evtchn_set_callback_level(int level);
 
 void hmp_xen_event_inject(Monitor *mon, const QDict *qdict);
 void hmp_xen_event_list(Monitor *mon, const QDict *qdict);
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 8f668a5138..61a90c9e5b 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -1308,6 +1308,12 @@ void pc_basic_device_init(struct PCMachineState *pcms,
     }
     *rtc_state = mc146818_rtc_init(isa_bus, 2000, rtc_irq);
 
+#ifdef CONFIG_XEN_EMU
+    if (xen_mode == XEN_EMULATE) {
+        xen_evtchn_connect_gsis(gsi);
+    }
+#endif
+
     qemu_register_boot_set(pc_boot_set, *rtc_state);
 
     if (!xen_enabled() &&
diff --git a/include/sysemu/kvm_xen.h b/include/sysemu/kvm_xen.h
index b2bcacd761..a32ee58852 100644
--- a/include/sysemu/kvm_xen.h
+++ b/include/sysemu/kvm_xen.h
@@ -22,6 +22,7 @@
 uint32_t kvm_xen_get_caps(void);
 void *kvm_xen_get_vcpu_info_hva(uint32_t vcpu_id);
 void kvm_xen_inject_vcpu_callback_vector(uint32_t vcpu_id, int type);
+void kvm_xen_set_callback_asserted(void);
 int kvm_xen_set_vcpu_virq(uint32_t vcpu_id, uint16_t virq, uint16_t port);
 
 #define kvm_xen_has_cap(cap) (!!(kvm_xen_get_caps() &           \
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index dba8732fc6..e8718c31e5 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1797,6 +1797,7 @@ typedef struct CPUArchState {
     uint64_t xen_vcpu_time_info_gpa;
     uint64_t xen_vcpu_runstate_gpa;
     uint8_t xen_vcpu_callback_vector;
+    bool xen_callback_asserted;
     uint16_t xen_virq[XEN_NR_VIRQS];
     uint64_t xen_singleshot_timer_ns;
 #endif
diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index fa08cb6574..51ddf4bfa2 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -5415,6 +5415,19 @@ int kvm_arch_handle_exit(CPUState *cs, struct kvm_run *run)
     char str[256];
     KVMState *state;
 
+#ifdef CONFIG_XEN_EMU
+    /*
+     * If the callback is asserted as a GSI (or PCI INTx) then check if
+     * vcpu_info->evtchn_upcall_pending has been cleared, and deassert
+     * the callback IRQ if so. Ideally we could hook into the PIC/IOAPIC
+     * EOI and only resample then, exactly how the VFIO eventfd pairs
+     * are designed to work for level-triggered interrupts.
+     */
+    if (cpu->env.xen_callback_asserted) {
+        kvm_xen_maybe_deassert_callback(cs);
+    }
+#endif
+
     switch (run->exit_reason) {
     case KVM_EXIT_HLT:
         DPRINTF("handle_hlt\n");
diff --git a/target/i386/kvm/xen-emu.c b/target/i386/kvm/xen-emu.c
index bc7426b90f..27e0555baf 100644
--- a/target/i386/kvm/xen-emu.c
+++ b/target/i386/kvm/xen-emu.c
@@ -311,6 +311,31 @@ void *kvm_xen_get_vcpu_info_hva(uint32_t vcpu_id)
     return X86_CPU(cs)->env.xen_vcpu_info_hva;
 }
 
+void kvm_xen_maybe_deassert_callback(CPUState *cs)
+{
+    CPUX86State *env = &X86_CPU(cs)->env;
+    struct vcpu_info *vi = env->xen_vcpu_info_hva;
+    if (!vi) {
+        return;
+    }
+
+    /* If the evtchn_upcall_pending flag is cleared, turn the GSI off. */
+    if (!vi->evtchn_upcall_pending) {
+        qemu_mutex_lock_iothread();
+        xen_evtchn_set_callback_level(0);
+        qemu_mutex_unlock_iothread();
+    }
+}
+
+void kvm_xen_set_callback_asserted(void)
+{
+    CPUState *cs = qemu_get_cpu(0);
+
+    if (cs) {
+        X86_CPU(cs)->env.xen_callback_asserted = true;
+    }
+}
+
 void kvm_xen_inject_vcpu_callback_vector(uint32_t vcpu_id, int type)
 {
     CPUState *cs = qemu_get_cpu(vcpu_id);
@@ -343,6 +368,13 @@ void kvm_xen_inject_vcpu_callback_vector(uint32_t vcpu_id, int type)
          */
         qemu_cpu_kick(cs);
         break;
+
+    case HVM_PARAM_CALLBACK_TYPE_GSI:
+    case HVM_PARAM_CALLBACK_TYPE_PCI_INTX:
+        if (vcpu_id == 0) {
+            xen_evtchn_set_callback_level(1);
+        }
+        break;
     }
 }
 
diff --git a/target/i386/kvm/xen-emu.h b/target/i386/kvm/xen-emu.h
index 452605699a..fe85e0b195 100644
--- a/target/i386/kvm/xen-emu.h
+++ b/target/i386/kvm/xen-emu.h
@@ -28,5 +28,6 @@ int kvm_xen_init_vcpu(CPUState *cs);
 int kvm_xen_handle_exit(X86CPU *cpu, struct kvm_xen_exit *exit);
 int kvm_put_xen_state(CPUState *cs);
 int kvm_get_xen_state(CPUState *cs);
+void kvm_xen_maybe_deassert_callback(CPUState *cs);
 
 #endif /* QEMU_I386_KVM_XEN_EMU_H */
-- 
2.39.0


