* [PATCH 0/5] Xen INTX/GSI event channel delivery fixes
From: David Woodhouse @ 2020-12-24 11:53 UTC (permalink / raw)
  To: x86
  Cc: Stefano Stabellini, Boris Ostrovsky, Juergen Gross, Paul Durrant,
	jgrall, karahmed, xen-devel

Fix various breakages with INTX/GSI event channel interrupt delivery,
and add a command line option to test it so that it hopefully doesn't
get so broken again.

Karim attempted to rip the INTX/GSI fallback out in 2017 but put it back
because it's still necessary with older versions of Xen. This series fixes
it properly, and makes it easier to test. cf. https://lkml.org/lkml/2017/4/10/266

David Woodhouse (5):
      xen: Fix event channel callback via INTX/GSI
      xen: Set platform PCI device INTX affinity to CPU0
      x86/xen: Add a no_vector_callback option to test PCI INTX delivery
      x86/xen: Don't register Xen IPIs when they aren't going to be used
      x86/xen: Don't register PV spinlock IPI when it isn't going to be used

 arch/arm/xen/enlighten.c          |  2 +-
 arch/x86/xen/enlighten_hvm.c      | 15 +++++++--
 arch/x86/xen/spinlock.c           |  6 ++--
 drivers/xen/events/events_base.c  | 10 ------
 drivers/xen/platform-pci.c        |  8 ++++-
 drivers/xen/xenbus/xenbus.h       |  1 +
 drivers/xen/xenbus/xenbus_comms.c |  8 -----
 drivers/xen/xenbus/xenbus_probe.c | 68 +++++++++++++++++++++++++++++++--------
 include/xen/xenbus.h              |  2 +-
 9 files changed, 79 insertions(+), 41 deletions(-)





* [PATCH 1/5] xen: Fix event channel callback via INTX/GSI
From: David Woodhouse @ 2020-12-24 11:53 UTC (permalink / raw)
  To: x86
  Cc: Stefano Stabellini, Boris Ostrovsky, Juergen Gross, Paul Durrant,
	jgrall, karahmed, xen-devel

From: David Woodhouse <dwmw@amazon.co.uk>

For a while, event channel notification via the PCI platform device
has been broken, because we attempt to communicate with xenstore before
we even have notifications working, with the xs_reset_watches() call
in xs_init().

We tend to get away with this on Xen versions below 4.0 because we avoid
calling xs_reset_watches() anyway, because xenstore might not cope with
reading a non-existent key. And newer Xen *does* have the vector
callback support, so we rarely fall back to INTX/GSI delivery.

To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
case, deferring it to be called from xenbus_probe() in the XS_HVM case
instead.

Then fix up the invocation of xenbus_probe() to happen either from its
device_initcall if the callback is available early enough, or when the
callback is finally set up. This means that the hack of calling
xenbus_probe() from a workqueue after the first interrupt, or directly
from the PCI platform device setup, is no longer needed.
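
To summarize, the flow after this patch (a sketch derived from the diff
below) is:

    xenbus_init()             calls xs_init() directly (!XS_HVM only)
    xenbus_probe_initcall()   calls xenbus_probe() for XS_PV, or for
                              XS_HVM with a working vector callback
    xen_set_callback_via()    calls xenbus_probe() for XS_HVM once the
                              INTX/GSI callback is finally set up
    xenbus_probe()            calls the deferred xs_init() (XS_HVM only)

(ARM's xen_guest_init() also still calls xenbus_probe() directly.)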

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 arch/arm/xen/enlighten.c          |  2 +-
 drivers/xen/events/events_base.c  | 10 -----
 drivers/xen/platform-pci.c        |  1 -
 drivers/xen/xenbus/xenbus.h       |  1 +
 drivers/xen/xenbus/xenbus_comms.c |  8 ----
 drivers/xen/xenbus/xenbus_probe.c | 68 ++++++++++++++++++++++++-------
 include/xen/xenbus.h              |  2 +-
 7 files changed, 57 insertions(+), 35 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 60e901cd0de6..5a957a9a0984 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -371,7 +371,7 @@ static int __init xen_guest_init(void)
 	}
 	gnttab_init();
 	if (!xen_initial_domain())
-		xenbus_probe(NULL);
+		xenbus_probe();
 
 	/*
 	 * Making sure board specific code will not set up ops for
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 6038c4c35db5..bbebe248b726 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -2010,16 +2010,6 @@ static struct irq_chip xen_percpu_chip __read_mostly = {
 	.irq_ack		= ack_dynirq,
 };
 
-int xen_set_callback_via(uint64_t via)
-{
-	struct xen_hvm_param a;
-	a.domid = DOMID_SELF;
-	a.index = HVM_PARAM_CALLBACK_IRQ;
-	a.value = via;
-	return HYPERVISOR_hvm_op(HVMOP_set_param, &a);
-}
-EXPORT_SYMBOL_GPL(xen_set_callback_via);
-
 #ifdef CONFIG_XEN_PVHVM
 /* Vector callbacks are better than PCI interrupts to receive event
  * channel notifications because we can receive vector callbacks on any
diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
index dd911e1ff782..9db557b76511 100644
--- a/drivers/xen/platform-pci.c
+++ b/drivers/xen/platform-pci.c
@@ -149,7 +149,6 @@ static int platform_pci_probe(struct pci_dev *pdev,
 	ret = gnttab_init();
 	if (ret)
 		goto grant_out;
-	xenbus_probe(NULL);
 	return 0;
 grant_out:
 	gnttab_free_auto_xlat_frames();
diff --git a/drivers/xen/xenbus/xenbus.h b/drivers/xen/xenbus/xenbus.h
index 5f5b8a7d5b80..05bbda51103f 100644
--- a/drivers/xen/xenbus/xenbus.h
+++ b/drivers/xen/xenbus/xenbus.h
@@ -113,6 +113,7 @@ int xenbus_probe_node(struct xen_bus_type *bus,
 		      const char *type,
 		      const char *nodename);
 int xenbus_probe_devices(struct xen_bus_type *bus);
+void xenbus_probe(void);
 
 void xenbus_dev_changed(const char *node, struct xen_bus_type *bus);
 
diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
index eb5151fc8efa..e5fda0256feb 100644
--- a/drivers/xen/xenbus/xenbus_comms.c
+++ b/drivers/xen/xenbus/xenbus_comms.c
@@ -57,16 +57,8 @@ DEFINE_MUTEX(xs_response_mutex);
 static int xenbus_irq;
 static struct task_struct *xenbus_task;
 
-static DECLARE_WORK(probe_work, xenbus_probe);
-
-
 static irqreturn_t wake_waiting(int irq, void *unused)
 {
-	if (unlikely(xenstored_ready == 0)) {
-		xenstored_ready = 1;
-		schedule_work(&probe_work);
-	}
-
 	wake_up(&xb_waitq);
 	return IRQ_HANDLED;
 }
diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
index 38725d97d909..876f381b100a 100644
--- a/drivers/xen/xenbus/xenbus_probe.c
+++ b/drivers/xen/xenbus/xenbus_probe.c
@@ -682,29 +682,63 @@ void unregister_xenstore_notifier(struct notifier_block *nb)
 }
 EXPORT_SYMBOL_GPL(unregister_xenstore_notifier);
 
-void xenbus_probe(struct work_struct *unused)
+void xenbus_probe(void)
 {
 	xenstored_ready = 1;
 
+	/*
+	 * In the HVM case, xenbus_init() deferred its call to
+	 * xs_init() in case callbacks were not operational yet.
+	 * So do it now.
+	 */
+	if (xen_store_domain_type == XS_HVM)
+		xs_init();
+
 	/* Notify others that xenstore is up */
 	blocking_notifier_call_chain(&xenstore_chain, 0, NULL);
 }
-EXPORT_SYMBOL_GPL(xenbus_probe);
 
 static int __init xenbus_probe_initcall(void)
 {
-	if (!xen_domain())
-		return -ENODEV;
-
-	if (xen_initial_domain() || xen_hvm_domain())
-		return 0;
+	/*
+	 * Probe XenBus here in the XS_PV case, and also XS_HVM unless we
+	 * need to wait for the platform PCI device to come up, which is
+	 * the (XEN_PVHVM && !xen_have_vector_callback) case.
+	 */
+	if (xen_store_domain_type == XS_PV ||
+	    (xen_store_domain_type == XS_HVM &&
+	     (!IS_ENABLED(CONFIG_XEN_PVHVM) || xen_have_vector_callback)))
+		xenbus_probe();
 
-	xenbus_probe(NULL);
 	return 0;
 }
-
 device_initcall(xenbus_probe_initcall);
 
+int xen_set_callback_via(uint64_t via)
+{
+	struct xen_hvm_param a;
+	int ret;
+
+	a.domid = DOMID_SELF;
+	a.index = HVM_PARAM_CALLBACK_IRQ;
+	a.value = via;
+
+	ret = HYPERVISOR_hvm_op(HVMOP_set_param, &a);
+	if (ret)
+		return ret;
+
+	/*
+	 * If xenbus_probe_initcall() deferred the xenbus_probe()
+	 * due to the callback not functioning yet, we can do it now.
+	 */
+	if (!xenstored_ready && xen_store_domain_type == XS_HVM &&
+	    IS_ENABLED(CONFIG_XEN_PVHVM) && !xen_have_vector_callback)
+		xenbus_probe();
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(xen_set_callback_via);
+
 /* Set up event channel for xenstored which is run as a local process
  * (this is normally used only in dom0)
  */
@@ -817,11 +851,17 @@ static int __init xenbus_init(void)
 		break;
 	}
 
-	/* Initialize the interface to xenstore. */
-	err = xs_init();
-	if (err) {
-		pr_warn("Error initializing xenstore comms: %i\n", err);
-		goto out_error;
+	/*
+	 * HVM domains may not have a functional callback yet. In that
+	 * case let xs_init() be called from xenbus_probe(), which will
+	 * get invoked at an appropriate time.
+	 */
+	if (xen_store_domain_type != XS_HVM) {
+		err = xs_init();
+		if (err) {
+			pr_warn("Error initializing xenstore comms: %i\n", err);
+			goto out_error;
+		}
 	}
 
 	if ((xen_store_domain_type != XS_LOCAL) &&
diff --git a/include/xen/xenbus.h b/include/xen/xenbus.h
index 5a8315e6d8a6..61202c83d560 100644
--- a/include/xen/xenbus.h
+++ b/include/xen/xenbus.h
@@ -183,7 +183,7 @@ void xs_suspend_cancel(void);
 
 struct work_struct;
 
-void xenbus_probe(struct work_struct *);
+void xenbus_probe(void);
 
 #define XENBUS_IS_ERR_READ(str) ({			\
 	if (!IS_ERR(str) && strlen(str) == 0) {		\
-- 
2.26.2




* [PATCH 2/5] xen: Set platform PCI device INTX affinity to CPU0
From: David Woodhouse @ 2020-12-24 11:53 UTC (permalink / raw)
  To: x86
  Cc: Stefano Stabellini, Boris Ostrovsky, Juergen Gross, Paul Durrant,
	jgrall, karahmed, xen-devel

From: David Woodhouse <dwmw@amazon.co.uk>

With INTX or GSI delivery, Xen uses the event channel structures of CPU0.

If the interrupt gets handled by Linux on a different CPU, then no events
are seen as pending. Rather than introducing locking to allow other CPUs
to process CPU0's events, just ensure that the PCI interrupt happens
only on CPU0.
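
(With the affinity pinned, this can be sanity-checked from within the
guest, e.g. something like:

    # grep xen-platform-pci /proc/interrupts
    # cat /proc/irq/<that IRQ>/smp_affinity_list
    0

assuming the IRQ is still requested under the name "xen-platform-pci"
in platform_pci_probe().)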

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 drivers/xen/platform-pci.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
index 9db557b76511..18f0ed8b1f93 100644
--- a/drivers/xen/platform-pci.c
+++ b/drivers/xen/platform-pci.c
@@ -132,6 +132,13 @@ static int platform_pci_probe(struct pci_dev *pdev,
 			dev_warn(&pdev->dev, "request_irq failed err=%d\n", ret);
 			goto out;
 		}
+		/*
+		 * It doesn't strictly *have* to run on CPU0 but it sure
+		 * as hell better process the event channel ports delivered
+		 * to CPU0.
+		 */
+		irq_set_affinity(pdev->irq, cpumask_of(0));
+
 		callback_via = get_callback_via(pdev);
 		ret = xen_set_callback_via(callback_via);
 		if (ret) {
-- 
2.26.2




* [PATCH 3/5] x86/xen: Add a no_vector_callback option to test PCI INTX delivery
From: David Woodhouse @ 2020-12-24 11:53 UTC (permalink / raw)
  To: x86
  Cc: Stefano Stabellini, Boris Ostrovsky, Juergen Gross, Paul Durrant,
	jgrall, karahmed, xen-devel

From: David Woodhouse <dwmw@amazon.co.uk>

It's useful to be able to test non-vector event channel delivery, to make
sure Linux will work properly on older Xen which doesn't have it.

It's also useful for those working on Xen and Xen-compatible hypervisors,
because there are guest kernels still in active use which use PCI INTX
even when vector delivery is available.
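
For example, to exercise the INTX/GSI path on a guest whose Xen does
offer vector delivery, boot with something like this on the kernel
command line (everything other than the new option is illustrative):

    root=/dev/xvda1 console=hvc0 no_vector_callback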

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 arch/x86/xen/enlighten_hvm.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
index 9e87ab010c82..a1c07e0c888e 100644
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -188,6 +188,8 @@ static int xen_cpu_dead_hvm(unsigned int cpu)
        return 0;
 }
 
+static bool no_vector_callback __initdata;
+
 static void __init xen_hvm_guest_init(void)
 {
 	if (xen_pv_domain())
@@ -207,7 +209,7 @@ static void __init xen_hvm_guest_init(void)
 
 	xen_panic_handler_init();
 
-	if (xen_feature(XENFEAT_hvm_callback_vector))
+	if (!no_vector_callback && xen_feature(XENFEAT_hvm_callback_vector))
 		xen_have_vector_callback = 1;
 
 	xen_hvm_smp_init();
@@ -233,6 +235,13 @@ static __init int xen_parse_nopv(char *arg)
 }
 early_param("xen_nopv", xen_parse_nopv);
 
+static __init int xen_parse_no_vector_callback(char *arg)
+{
+	no_vector_callback = true;
+	return 0;
+}
+early_param("no_vector_callback", xen_parse_no_vector_callback);
+
 bool __init xen_hvm_need_lapic(void)
 {
 	if (xen_pv_domain())
-- 
2.26.2




* [PATCH 4/5] x86/xen: Don't register Xen IPIs when they aren't going to be used
From: David Woodhouse @ 2020-12-24 11:53 UTC (permalink / raw)
  To: x86
  Cc: Stefano Stabellini, Boris Ostrovsky, Juergen Gross, Paul Durrant,
	jgrall, karahmed, xen-devel

From: David Woodhouse <dwmw@amazon.co.uk>

In the case where xen_have_vector_callback is false, we still register
the IPI vectors in xen_smp_intr_init() for the secondary CPUs even
though they aren't going to be used. Stop doing that.
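
(For reference, the registrations being skipped are the
bind_ipi_to_irqhandler() calls in xen_smp_intr_init() in
arch/x86/xen/smp.c, along the lines of:

	rc = bind_ipi_to_irqhandler(XEN_RESCHEDULE_VECTOR, cpu,
				    xen_reschedule_interrupt,
				    IRQF_PERCPU | IRQF_NOBALANCING,
				    resched_name, NULL);

and likewise for the call-function vectors.)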

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 arch/x86/xen/enlighten_hvm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
index a1c07e0c888e..7a6ef517e81a 100644
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -164,10 +164,10 @@ static int xen_cpu_up_prepare_hvm(unsigned int cpu)
 	else
 		per_cpu(xen_vcpu_id, cpu) = cpu;
 	rc = xen_vcpu_setup(cpu);
-	if (rc)
+	if (rc || !xen_have_vector_callback)
 		return rc;
 
-	if (xen_have_vector_callback && xen_feature(XENFEAT_hvm_safe_pvclock))
+	if (xen_feature(XENFEAT_hvm_safe_pvclock))
 		xen_setup_timer(cpu);
 
 	rc = xen_smp_intr_init(cpu);
-- 
2.26.2




* [PATCH 5/5] x86/xen: Don't register PV spinlock IPI when it isn't going to be used
From: David Woodhouse @ 2020-12-24 11:53 UTC (permalink / raw)
  To: x86
  Cc: Stefano Stabellini, Boris Ostrovsky, Juergen Gross, Paul Durrant,
	jgrall, karahmed, xen-devel

From: David Woodhouse <dwmw@amazon.co.uk>

When xen_have_vector_callback is false, we still register the PV spinlock
kicker IPI on the secondary CPUs. Stop doing that.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 arch/x86/xen/spinlock.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 799f4eba0a62..b240ea483e63 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -68,7 +68,7 @@ void xen_init_lock_cpu(int cpu)
 	int irq;
 	char *name;
 
-	if (!xen_pvspin)
+	if (!xen_pvspin || !xen_have_vector_callback)
 		return;
 
 	WARN(per_cpu(lock_kicker_irq, cpu) >= 0, "spinlock on CPU%d exists on IRQ%d!\n",
@@ -93,7 +93,7 @@ void xen_init_lock_cpu(int cpu)
 
 void xen_uninit_lock_cpu(int cpu)
 {
-	if (!xen_pvspin)
+	if (!xen_pvspin || !xen_have_vector_callback)
 		return;
 
 	unbind_from_irqhandler(per_cpu(lock_kicker_irq, cpu), NULL);
@@ -115,7 +115,7 @@ PV_CALLEE_SAVE_REGS_THUNK(xen_vcpu_stolen);
 void __init xen_init_spinlocks(void)
 {
 	/*  Don't need to use pvqspinlock code if there is only 1 vCPU. */
-	if (num_possible_cpus() == 1 || nopvspin)
+	if (num_possible_cpus() == 1 || nopvspin || !xen_have_vector_callback)
 		xen_pvspin = false;
 
 	if (!xen_pvspin) {
-- 
2.26.2




* Re: [PATCH 1/5] xen: Fix event channel callback via INTX/GSI
From: boris.ostrovsky @ 2021-01-04 16:35 UTC (permalink / raw)
  To: David Woodhouse, x86
  Cc: Stefano Stabellini, Juergen Gross, Paul Durrant, jgrall,
	karahmed, xen-devel


On 12/24/20 6:53 AM, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
>
> For a while, event channel notification via the PCI platform device
> has been broken, because we attempt to communicate with xenstore before
> we even have notifications working, with the xs_reset_watches() call
> in xs_init().
>
> We tend to get away with this on Xen versions below 4.0 because we avoid
> calling xs_reset_watches() anyway, because xenstore might not cope with
> reading a non-existent key. And newer Xen *does* have the vector
> callback support, so we rarely fall back to INTX/GSI delivery.
>
> To fix it, clean up a bit of the mess of xs_init() and xenbus_probe()
> startup. Call xs_init() directly from xenbus_init() only in the !XS_HVM
> case, deferring it to be called from xenbus_probe() in the XS_HVM case
> instead.
>
> Then fix up the invocation of xenbus_probe() to happen either from its
> device_initcall if the callback is available early enough, or when the
> callback is finally set up. This means that the hack of calling
> xenbus_probe() from a workqueue after the first interrupt, or directly
> from the PCI platform device setup, is no longer needed.
>
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>





* Re: [PATCH 2/5] xen: Set platform PCI device INTX affinity to CPU0
From: boris.ostrovsky @ 2021-01-04 16:42 UTC (permalink / raw)
  To: David Woodhouse, x86
  Cc: Stefano Stabellini, Juergen Gross, Paul Durrant, jgrall,
	karahmed, xen-devel


On 12/24/20 6:53 AM, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
>
> With INTX or GSI delivery, Xen uses the event channel structures of CPU0.
>
> If the interrupt gets handled by Linux on a different CPU, then no events
> are seen as pending. Rather than introducing locking to allow other CPUs
> to process CPU0's events, just ensure that the PCI interrupt happens
> only on CPU0.
>
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>





* Re: [PATCH 3/5] x86/xen: Add a no_vector_callback option to test PCI INTX delivery
From: boris.ostrovsky @ 2021-01-04 16:44 UTC (permalink / raw)
  To: David Woodhouse, x86
  Cc: Stefano Stabellini, Juergen Gross, Paul Durrant, jgrall,
	karahmed, xen-devel


On 12/24/20 6:53 AM, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
>
> It's useful to be able to test non-vector event channel delivery, to make
> sure Linux will work properly on older Xen which doesn't have it.
>
> It's also useful for those working on Xen and Xen-compatible hypervisors,
> because there are guest kernels still in active use which use PCI INTX
> even when vector delivery is available.
>
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>



This needs an update to Documentation/admin-guide/kernel-parameters.txt
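
Something along these lines, following the format of the existing Xen
entries in that file (the wording is just a sketch):

	no_vector_callback	[KNL,X86,XEN]
			Disable the vector callback for Xen event channel
			interrupts, forcing delivery via the platform PCI
			device's INTX/GSI interrupt instead.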


-boris




* Re: [PATCH 4/5] x86/xen: Don't register Xen IPIs when they aren't going to be used
From: boris.ostrovsky @ 2021-01-04 16:50 UTC (permalink / raw)
  To: David Woodhouse, x86
  Cc: Stefano Stabellini, Juergen Gross, Paul Durrant, jgrall,
	karahmed, xen-devel


On 12/24/20 6:53 AM, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
>
> In the case where xen_have_vector_callback is false, we still register
> the IPI vectors in xen_smp_intr_init() for the secondary CPUs even
> though they aren't going to be used. Stop doing that.
>
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> ---
>  arch/x86/xen/enlighten_hvm.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
> index a1c07e0c888e..7a6ef517e81a 100644
> --- a/arch/x86/xen/enlighten_hvm.c
> +++ b/arch/x86/xen/enlighten_hvm.c
> @@ -164,10 +164,10 @@ static int xen_cpu_up_prepare_hvm(unsigned int cpu)
>  	else
>  		per_cpu(xen_vcpu_id, cpu) = cpu;
>  	rc = xen_vcpu_setup(cpu);


Without vector callback we will not be calling xen_vcpu_setup(0) so why should we still call it here for APs?


-boris


> -	if (rc)
> +	if (rc || !xen_have_vector_callback)
>  		return rc;
>  
> -	if (xen_have_vector_callback && xen_feature(XENFEAT_hvm_safe_pvclock))
> +	if (xen_feature(XENFEAT_hvm_safe_pvclock))
>  		xen_setup_timer(cpu);
>  
>  	rc = xen_smp_intr_init(cpu);



* Re: [PATCH 5/5] x86/xen: Don't register PV spinlock IPI when it isn't going to be used
From: boris.ostrovsky @ 2021-01-04 17:06 UTC (permalink / raw)
  To: David Woodhouse, x86
  Cc: Stefano Stabellini, Juergen Gross, Paul Durrant, jgrall,
	karahmed, xen-devel


On 12/24/20 6:53 AM, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
>
> When xen_have_vector_callback is false, we still register the PV spinlock
> kicker IPI on the secondary CPUs. Stop doing that.
>
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> ---
>  arch/x86/xen/spinlock.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
> index 799f4eba0a62..b240ea483e63 100644
> --- a/arch/x86/xen/spinlock.c
> +++ b/arch/x86/xen/spinlock.c
> @@ -68,7 +68,7 @@ void xen_init_lock_cpu(int cpu)
>  	int irq;
>  	char *name;
>  
> -	if (!xen_pvspin)
> +	if (!xen_pvspin || !xen_have_vector_callback)
>  		return;
>  
>  	WARN(per_cpu(lock_kicker_irq, cpu) >= 0, "spinlock on CPU%d exists on IRQ%d!\n",
> @@ -93,7 +93,7 @@ void xen_init_lock_cpu(int cpu)
>  
>  void xen_uninit_lock_cpu(int cpu)
>  {
> -	if (!xen_pvspin)
> +	if (!xen_pvspin || !xen_have_vector_callback)
>  		return;
>  
>  	unbind_from_irqhandler(per_cpu(lock_kicker_irq, cpu), NULL);
> @@ -115,7 +115,7 @@ PV_CALLEE_SAVE_REGS_THUNK(xen_vcpu_stolen);
>  void __init xen_init_spinlocks(void)
>  {
>  	/*  Don't need to use pvqspinlock code if there is only 1 vCPU. */
> -	if (num_possible_cpus() == 1 || nopvspin)
> +	if (num_possible_cpus() == 1 || nopvspin || !xen_have_vector_callback)


xen_init_spinlocks() will not be called without vector callbacks so this test is not really necessary.


OTOH this also implies that we will never update virt_spin_lock_key.
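
(For reference, the body of the if (!xen_pvspin) block in xen_init_spinlocks() is, roughly:

	if (!xen_pvspin) {
		printk(KERN_DEBUG "xen: PV spinlocks disabled\n");
		static_branch_disable(&virt_spin_lock_key);
		return;
	}

so if xen_init_spinlocks() is never called, virt_spin_lock_key never gets disabled.)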


-boris


>  		xen_pvspin = false;
>  
>  	if (!xen_pvspin) {



* Re: [PATCH 4/5] x86/xen: Don't register Xen IPIs when they aren't going to be used
From: David Woodhouse @ 2021-01-04 17:29 UTC (permalink / raw)
  To: boris.ostrovsky, x86
  Cc: Stefano Stabellini, Juergen Gross, Paul Durrant, jgrall,
	karahmed, xen-devel

On Mon, 2021-01-04 at 11:50 -0500, boris.ostrovsky@oracle.com wrote:
> > diff --git a/arch/x86/xen/enlighten_hvm.c
> > b/arch/x86/xen/enlighten_hvm.c
> > index a1c07e0c888e..7a6ef517e81a 100644
> > --- a/arch/x86/xen/enlighten_hvm.c
> > +++ b/arch/x86/xen/enlighten_hvm.c
> > @@ -164,10 +164,10 @@ static int xen_cpu_up_prepare_hvm(unsigned
> > int cpu)
> >        else
> >                per_cpu(xen_vcpu_id, cpu) = cpu;
> >        rc = xen_vcpu_setup(cpu);
> 
> 
> Without vector callback we will not be calling xen_vcpu_setup(0) so
> why should we still call it here for APs?

Good point. I didn't spot that because it wasn't one of the things that
actually crashed. I'll stop it doing so and retest.

 


* Re: [PATCH 5/5] x86/xen: Don't register PV spinlock IPI when it isn't going to be used
From: David Woodhouse @ 2021-01-04 17:32 UTC (permalink / raw)
  To: boris.ostrovsky, x86
  Cc: Stefano Stabellini, Juergen Gross, Paul Durrant, jgrall,
	karahmed, xen-devel

On Mon, 2021-01-04 at 12:06 -0500, boris.ostrovsky@oracle.com wrote:
> > @@ -115,7 +115,7 @@ PV_CALLEE_SAVE_REGS_THUNK(xen_vcpu_stolen);
> >   void __init xen_init_spinlocks(void)
> >   {
> >        /*  Don't need to use pvqspinlock code if there is only 1 vCPU. */
> > -     if (num_possible_cpus() == 1 || nopvspin)
> > +     if (num_possible_cpus() == 1 || nopvspin || !xen_have_vector_callback)
> 
> 
> > xen_init_spinlocks() will not be called without vector callbacks so
> this test is not really necessary.

Right, that's just paranoia to make the conditions consistent and safe.


* Re: [PATCH 5/5] x86/xen: Don't register PV spinlock IPI when it isn't going to be used
From: boris.ostrovsky @ 2021-01-04 19:06 UTC (permalink / raw)
  To: David Woodhouse, x86
  Cc: Stefano Stabellini, Juergen Gross, Paul Durrant, jgrall,
	karahmed, xen-devel


On 1/4/21 12:32 PM, David Woodhouse wrote:
> On Mon, 2021-01-04 at 12:06 -0500, boris.ostrovsky@oracle.com wrote:
>>> @@ -115,7 +115,7 @@ PV_CALLEE_SAVE_REGS_THUNK(xen_vcpu_stolen);
>>>   void __init xen_init_spinlocks(void)
>>>   {
>>>        /*  Don't need to use pvqspinlock code if there is only 1 vCPU. */
>>> -     if (num_possible_cpus() == 1 || nopvspin)
>>> +     if (num_possible_cpus() == 1 || nopvspin || !xen_have_vector_callback)
>>
>> xen_init_spinlocks() will not be called without vector callbacks so
>> this test is not really necessary.
> Right, that's just paranoia to make the conditions consistent and safe.


OK, but we still need to do something about virt_spin_lock_key.


-boris




* Re: [PATCH 5/5] x86/xen: Don't register PV spinlock IPI when it isn't going to be used
From: David Woodhouse @ 2021-01-04 20:51 UTC (permalink / raw)
  To: boris.ostrovsky, x86
  Cc: Stefano Stabellini, Juergen Gross, Paul Durrant, jgrall,
	karahmed, xen-devel

On Mon, 2021-01-04 at 14:06 -0500, boris.ostrovsky@oracle.com wrote:
> On 1/4/21 12:32 PM, David Woodhouse wrote:
> > On Mon, 2021-01-04 at 12:06 -0500, boris.ostrovsky@oracle.com wrote
> > :
> > > > @@ -115,7 +115,7 @@ PV_CALLEE_SAVE_REGS_THUNK(xen_vcpu_stolen);
> > > >    void __init xen_init_spinlocks(void)
> > > >    {
> > > >         /*  Don't need to use pvqspinlock code if there is only
> > > > 1 vCPU. */
> > > > -     if (num_possible_cpus() == 1 || nopvspin)
> > > > +     if (num_possible_cpus() == 1 || nopvspin ||
> > > > !xen_have_vector_callback)
> > > 
> > > xen_init_spinlocks() will not be called without vector callbacks
> > > so
> > > this test is not really necessary.
> > 
> > Right, that's just paranoia to make the conditions consistent and
> > safe.
> 
> 
> OK, but we still need to do something about virt_spin_lock_key.

Right, I suppose we should just call xen_init_spinlocks() and then my
defensive check stops being defensive and does what we need, including
fixing virt_spin_lock_key.

Normally it's xen_hvm_smp_prepare_boot_cpu() which calls
xen_init_spinlocks(), and *also* calls xen_vcpu_setup() for CPU0...
which brings me back to your other observation.

I think we *should* be calling xen_vcpu_setup() for all CPUs, even when
there's no vector callback. We can still have a per-vCPU vcpu_info page
if we want it. It was relatively harmless that we didn't do it for
CPU0, but it was wrong not to do so.

So I think this fixes both. Will test:

--- a/arch/x86/xen/smp_hvm.c
+++ b/arch/x86/xen/smp_hvm.c
@@ -64,6 +64,8 @@ static void xen_hvm_cpu_die(unsigned int cpu)
 
 void __init xen_hvm_smp_init(void)
 {
+       smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
+
        if (!xen_have_vector_callback)
                return;
 
@@ -72,6 +74,5 @@ void __init xen_hvm_smp_init(void)
        smp_ops.cpu_die = xen_hvm_cpu_die;
        smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
        smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi;
-       smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
        smp_ops.smp_cpus_done = xen_smp_cpus_done;
 }


* Re: [PATCH 5/5] x86/xen: Don't register PV spinlock IPI when it isn't going to be used
From: boris.ostrovsky @ 2021-01-04 22:09 UTC (permalink / raw)
  To: David Woodhouse, x86
  Cc: Stefano Stabellini, Juergen Gross, Paul Durrant, jgrall,
	karahmed, xen-devel


On 1/4/21 3:51 PM, David Woodhouse wrote:
> On Mon, 2021-01-04 at 14:06 -0500, boris.ostrovsky@oracle.com wrote:
>>
>> OK, but we still need to do something about virt_spin_lock_key.
> Right, I suppose we should just call xen_init_spinlocks() and then my
> defensive check stops being defensive and does what we need, including
> fixing virt_spin_lock_key.
>
> Normally it's xen_hvm_smp_prepare_boot_cpu() which calls
> xen_init_spinlocks(), and *also* calls xen_vcpu_setup() for CPU0...
> which brings me back to your other observation.
>
> I think we *should* be calling xen_vcpu_setup() for all CPUs, even when
> there's no vector callback. We can still have a per-vCPU vcpu_info page
> if we want it. It was relatively harmless that we didn't do it for
> CPU0, but it was wrong not to do so.
>
> So I think this fixes both. Will test:


I actually think this should go further in that only IPI-related ops should be conditioned on vector callback presence. The rest are generic VCPU routines that are not necessarily interrupt/event-related. And if they call something that *is* related then those specific routines should decide what to do based on xen_have_vector_callback.


Also, for the spinlock changes specifically --- I wonder whether it would be better to reverse the initial value of xen_pvspin and set it to 'true' only if initialization succeeds.


-boris


>
> --- a/arch/x86/xen/smp_hvm.c
> +++ b/arch/x86/xen/smp_hvm.c
> @@ -64,6 +64,8 @@ static void xen_hvm_cpu_die(unsigned int cpu)
>  
>  void __init xen_hvm_smp_init(void)
>  {
> +       smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
> +
>         if (!xen_have_vector_callback)
>                 return;
>  
> @@ -72,6 +74,5 @@ void __init xen_hvm_smp_init(void)
>         smp_ops.cpu_die = xen_hvm_cpu_die;
>         smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
>         smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi;
> -       smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
>         smp_ops.smp_cpus_done = xen_smp_cpus_done;
>  }



* Re: [PATCH 5/5] x86/xen: Don't register PV spinlock IPI when it isn't going to be used
From: David Woodhouse @ 2021-01-04 22:37 UTC (permalink / raw)
  To: boris.ostrovsky, x86
  Cc: Stefano Stabellini, Juergen Gross, Paul Durrant, jgrall,
	karahmed, xen-devel

On Mon, 2021-01-04 at 17:09 -0500, boris.ostrovsky@oracle.com wrote:
> 
> I actually think this should go further in that only IPI-related ops
> should be conditioned on vector callback presence. The rest are
> generic VCPU routines that are not necessarily interrupt/event-
> related. And if they call something that *is* related then those
> specific routines should decide what to do based on
> xen_have_vector_callback.

Right.

My current patch (that I'm about to test) now looks like this:

commit 8126bf76319257fca0cf0b87fdfaaa42d2c658b6
Author: David Woodhouse <dwmw@amazon.co.uk>
Date:   Mon Jan 4 20:54:05 2021 +0000

    x86/xen: Fix xen_hvm_smp_init() when vector callback not available
    
    Not all of the smp_ops should be conditional on the vector callback
    being available. Mostly just the IPI-related functions should.
    
    The xen_hvm_smp_prepare_boot_cpu() function does two things, both
    of which should still happen if there is no vector callback support.
    
    The call to xen_vcpu_setup() for vCPU0 should still happen as it just
    sets up the vcpu_info for CPU0. That does happen for the secondary
    vCPUs too, from xen_cpu_up_prepare_hvm().
    
    The second thing xen_hvm_smp_prepare_boot_cpu() does is to call
    xen_init_spinlocks(), which should *also* still be happening in the
    no-vector-callbacks case, so that it can clear its local xen_pvspin
    flag and disable the virt_spin_lock_key accordingly.
    
    Checking xen_have_vector_callback in xen_init_spinlocks() itself would
    affect PV guests, so set the global nopvspin flag in xen_hvm_smp_init()
    instead, when vector callbacks aren't available.
    
    The xen_hvm_smp_prepare_cpus() function does some IPI-related setup
    by calling xen_smp_intr_init() and xen_init_lock_cpu(), which can be
    made conditional. And it sets the xen_vcpu_id to XEN_VCPU_ID_INVALID
    for all possible CPUs, which does need to happen.
    
    Finally, xen_smp_cpus_done() offlines any vCPU which doesn't fit in
    the global shared_info page, if separate vcpu_info placement isn't
    available. That part also needs to happen unconditionally.
    
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>

diff --git a/arch/x86/xen/smp_hvm.c b/arch/x86/xen/smp_hvm.c
index f5e7db4f82ab..4c959369f855 100644
--- a/arch/x86/xen/smp_hvm.c
+++ b/arch/x86/xen/smp_hvm.c
@@ -33,9 +33,11 @@ static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
 	int cpu;
 
 	native_smp_prepare_cpus(max_cpus);
-	WARN_ON(xen_smp_intr_init(0));
 
-	xen_init_lock_cpu(0);
+	if (xen_have_vector_callback) {
+		WARN_ON(xen_smp_intr_init(0));
+		xen_init_lock_cpu(0);
+	}
 
 	for_each_possible_cpu(cpu) {
 		if (cpu == 0)
@@ -64,14 +66,17 @@ static void xen_hvm_cpu_die(unsigned int cpu)
 
 void __init xen_hvm_smp_init(void)
 {
-	if (!xen_have_vector_callback)
+	smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
+	smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
+	smp_ops.smp_cpus_done = xen_smp_cpus_done;
+
+	if (!xen_have_vector_callback) {
+		nopvspin = true;
 		return;
+	}
 
-	smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
 	smp_ops.smp_send_reschedule = xen_smp_send_reschedule;
 	smp_ops.cpu_die = xen_hvm_cpu_die;
 	smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
 	smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi;
-	smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
-	smp_ops.smp_cpus_done = xen_smp_cpus_done;
 }

> 
> Also, for the spinlock changes specifically --- I wonder whether it
> would be better to reverse initial value of xen_pvspin and set it to
> 'true' only if initialization succeeds.

I looked at that but it would need to be tristate, since the
'xen_nopvspin' command line option clears it from its default of being
enabled.



* Re: [PATCH 5/5] x86/xen: Don't register PV spinlock IPI when it isn't going to be used
From: boris.ostrovsky @ 2021-01-04 23:04 UTC (permalink / raw)
  To: David Woodhouse, x86
  Cc: Stefano Stabellini, Juergen Gross, Paul Durrant, jgrall,
	karahmed, xen-devel


On 1/4/21 5:37 PM, David Woodhouse wrote:
>
> @@ -33,9 +33,11 @@ static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
>  	int cpu;
>  
>  	native_smp_prepare_cpus(max_cpus);
> -	WARN_ON(xen_smp_intr_init(0));
>  
> -	xen_init_lock_cpu(0);
> +	if (xen_have_vector_callback) {
> +		WARN_ON(xen_smp_intr_init(0));
> +		xen_init_lock_cpu(0);


By now you have nopvspin set so you might as well leave xen_init_lock_cpu(0) as is. (and then move the check inside xen_smp_intr_init())


> +	}
>  
>  	for_each_possible_cpu(cpu) {
>  		if (cpu == 0)
> @@ -64,14 +66,17 @@ static void xen_hvm_cpu_die(unsigned int cpu)
>  
>  void __init xen_hvm_smp_init(void)
>  {
> -	if (!xen_have_vector_callback)
> +	smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
> +	smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
> +	smp_ops.smp_cpus_done = xen_smp_cpus_done;
> +
> +	if (!xen_have_vector_callback) {
> +		nopvspin = true;
>  		return;
> +	}
>  
> -	smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
>  	smp_ops.smp_send_reschedule = xen_smp_send_reschedule;
>  	smp_ops.cpu_die = xen_hvm_cpu_die;


Why not xen_hvm_cpu_die too? common_cpu_die() sounds like something we should do, and the other three we call there will be nops.


>  	smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
>  	smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi;
> -	smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
> -	smp_ops.smp_cpus_done = xen_smp_cpus_done;
>  }
>
>> Also, for the spinlock changes specifically --- I wonder whether it
>> would be better to reverse initial value of xen_pvspin and set it to
>> 'true' only if initialization succeeds.
> I looked at that but it would need to be tristate, since the
> 'xen_nopvspin' command line option clears it from its default of being
> enabled.


Ah, right. How about setting nopvspin in xen_parse_nopvspin()?


-boris




* Re: [PATCH 5/5] x86/xen: Don't register PV spinlock IPI when it isn't going to be used
From: David Woodhouse @ 2021-01-04 23:19 UTC (permalink / raw)
  To: boris.ostrovsky, x86
  Cc: Stefano Stabellini, Juergen Gross, Paul Durrant, jgrall,
	karahmed, xen-devel

On Mon, 2021-01-04 at 18:04 -0500, boris.ostrovsky@oracle.com wrote:
> On 1/4/21 5:37 PM, David Woodhouse wrote:
> > 
> > @@ -33,9 +33,11 @@ static void __init
> > xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
> >  	int cpu;
> >  
> >  	native_smp_prepare_cpus(max_cpus);
> > -	WARN_ON(xen_smp_intr_init(0));
> >  
> > -	xen_init_lock_cpu(0);
> > +	if (xen_have_vector_callback) {
> > +		WARN_ON(xen_smp_intr_init(0));
> > +		xen_init_lock_cpu(0);
> 
> 
> By now you have nopvspin set so you might as well leave
> xen_init_lock_cpu(0) as is. (and then move the check inside
> xen_smp_intr_init())

I originally started doing it that way but PV guests use
xen_smp_intr_init() too, and want it to work even if nopvspin is set.
And don't set xen_have_vector_callback.

So the condition would need to be xen_pv_domain() ||
xen_have_vector_callback... or something like that. Or I could keep it
simple by keeping the new condition purely in the HVM code path, as I
did.
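
(i.e. something like

	if (!xen_pv_domain() && !xen_have_vector_callback)
		return 0;

at the top of xen_smp_intr_init(); just a sketch of that alternative,
not what the patch actually does.)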

> 
> > +	}
> >  
> >  	for_each_possible_cpu(cpu) {
> >  		if (cpu == 0)
> > @@ -64,14 +66,17 @@ static void xen_hvm_cpu_die(unsigned int cpu)
> >  
> >  void __init xen_hvm_smp_init(void)
> >  {
> > -	if (!xen_have_vector_callback)
> > +	smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
> > +	smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
> > +	smp_ops.smp_cpus_done = xen_smp_cpus_done;
> > +
> > +	if (!xen_have_vector_callback) {
> > +		nopvspin = true;
> >  		return;
> > +	}
> >  
> > -	smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
> >  	smp_ops.smp_send_reschedule = xen_smp_send_reschedule;
> >  	smp_ops.cpu_die = xen_hvm_cpu_die;
> 
> 
> Why not xen_hvm_cpu_die too? common_cpu_die() sounds like something
> we should do, and the other three we call there will be nops.

native_cpu_die() calls that, and isn't that the function that gets
installed if we don't install our own?

> 
> >  	smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
> >  	smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi;
> > -	smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
> > -	smp_ops.smp_cpus_done = xen_smp_cpus_done;
> >  }
> > 
> > > Also, for the spinlock changes specifically --- I wonder whether it
> > > would be better to reverse initial value of xen_pvspin and set it to
> > > 'true' only if initialization succeeds.
> > 
> > I looked at that but it would need to be tristate, since the
> > 'xen_nopvspin' command line option clears it from its default of being
> > enabled.
> 
> 
> Ah, right. How about setting nopvspin in xen_parse_nopvspin()?

That would make the xen_nopvspin command line option disable PV
spinlocks even under KVM.



* Re: [PATCH 5/5] x86/xen: Don't register PV spinlock IPI when it isn't going to be used
From: boris.ostrovsky @ 2021-01-04 23:44 UTC (permalink / raw)
  To: David Woodhouse, x86
  Cc: Stefano Stabellini, Juergen Gross, Paul Durrant, jgrall,
	karahmed, xen-devel


On 1/4/21 6:19 PM, David Woodhouse wrote:
> On Mon, 2021-01-04 at 18:04 -0500, boris.ostrovsky@oracle.com wrote:
>> On 1/4/21 5:37 PM, David Woodhouse wrote:
>>> @@ -33,9 +33,11 @@ static void __init
>>> xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
>>>  	int cpu;
>>>  
>>>  	native_smp_prepare_cpus(max_cpus);
>>> -	WARN_ON(xen_smp_intr_init(0));
>>>  
>>> -	xen_init_lock_cpu(0);
>>> +	if (xen_have_vector_callback) {
>>> +		WARN_ON(xen_smp_intr_init(0));
>>> +		xen_init_lock_cpu(0);
>>
>> By now you have nopvspin set so you might as well leave
>> xen_init_lock_cpu(0) as is. (and then move the check inside
>> xen_smp_intr_init())
> I originally started doing it that way but PV guests use
> xen_smp_intr_init() too, and want it to work even if nopvspin is set.
> And don't set xen_have_vector_callback.
>
> So the condition would need to be xen_pv_domain() ||
> xen_have_vector_callback... or something like that. Or I could keep it
> simple by keeping the new condition purely in the HVM code path, as I
> did.


OK.


>
>>> +	}
>>>  
>>>  	for_each_possible_cpu(cpu) {
>>>  		if (cpu == 0)
>>> @@ -64,14 +66,17 @@ static void xen_hvm_cpu_die(unsigned int cpu)
>>>  
>>>  void __init xen_hvm_smp_init(void)
>>>  {
>>> -	if (!xen_have_vector_callback)
>>> +	smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
>>> +	smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
>>> +	smp_ops.smp_cpus_done = xen_smp_cpus_done;
>>> +
>>> +	if (!xen_have_vector_callback) {
>>> +		nopvspin = true;
>>>  		return;
>>> +	}
>>>  
>>> -	smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
>>>  	smp_ops.smp_send_reschedule = xen_smp_send_reschedule;
>>>  	smp_ops.cpu_die = xen_hvm_cpu_die;
>>
>> Why not xen_hvm_cpu_die too? common_cpu_die() sounds like something
>> we should do, and the other three we call there will be nops.
> native_cpu_die() calls that, and isn't that the function that gets
> installed if we don't install our own?


True.


Still, a Xen guest should call the Xen-specific cpu_die() routine if possible, especially since (now) the other CPU (i.e. non-IPI) ops will call the Xen versions.


>
>>>  	smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
>>>  	smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi;
>>> -	smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
>>> -	smp_ops.smp_cpus_done = xen_smp_cpus_done;
>>>  }
>>>
>>>> Also, for the spinlock changes specifically --- I wonder whether it
>>>> would be better to reverse initial value of xen_pvspin and set it to
>>>> 'true' only if initialization succeeds.
>>> I looked at that but it would need to be tristate, since the
>>> 'xen_nopvspin' command line option clears it from its default of being
>>> enabled.
>>
>> Ah, right. How about setting nopvspin in xen_parse_nopvspin()?
> That would make the xen_nopvspin command line option disable PV
> spinlocks even under KVM.


xen_nopvspin is deprecated and nopvspin is recommended anyway so I think it should be OK, no?



-boris




* Re: [PATCH 5/5] x86/xen: Don't register PV spinlock IPI when it isn't going to be used
From: David Woodhouse @ 2021-01-05  1:41 UTC (permalink / raw)
  To: boris.ostrovsky, x86
  Cc: Stefano Stabellini, Juergen Gross, Paul Durrant, jgrall,
	karahmed, xen-devel

On Mon, 2021-01-04 at 18:44 -0500, boris.ostrovsky@oracle.com wrote:
> On 1/4/21 6:19 PM, David Woodhouse wrote:
> > On Mon, 2021-01-04 at 18:04 -0500, boris.ostrovsky@oracle.com
> > wrote:
> > > Why not xen_hvm_cpu_die too? common_cpu_die() sounds like something
> > > we should do, and the other three we call there will be nops.
> > 
> > native_cpu_die() calls that, and isn't that the function that gets
> > installed if we don't install our own?
> 
> 
> True.
> 
> 
> Still, a Xen guest should call Xen-specific cpu_die() routine if
> possible. Especially since (now) other cpu (i.e. non-IPI) ops will
> call Xen versions.

But as you said, the other three things that xen_hvm_cpu_die() does are
all no-ops (or at least we'd have to make them no-ops; I haven't
checked). I don't see the point in calling it, only for it to do
nothing.

> 
> > 
> > > >  	smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
> > > >  	smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi;
> > > > -	smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
> > > > -	smp_ops.smp_cpus_done = xen_smp_cpus_done;
> > > >  }
> > > > 
> > > > > Also, for the spinlock changes specifically --- I wonder whether it
> > > > > would be better to reverse initial value of xen_pvspin and set it to
> > > > > 'true' only if initialization succeeds.
> > > > 
> > > > I looked at that but it would need to be tristate, since the
> > > > 'xen_nopvspin' command line option clears it from its default of being
> > > > enabled.
> > > 
> > > Ah, right. How about setting nopvspin in xen_parse_nopvspin()?
> > 
> > That would make the xen_nopvspin command line option disable PV
> > spinlocks even under KVM.
> 
> 
> xen_nopvspin is deprecated and nopvspin is recommended anyway so I think it should be OK, no?

They do different things. If someone has an image (which might run on
both Xen and KVM) which has xen_nopvspin, we don't suddenly want that
to start disabling PV spinlocks on KVM too.



* Re: [PATCH 5/5] x86/xen: Don't register PV spinlock IPI when it isn't going to be used
From: boris.ostrovsky @ 2021-01-05 14:45 UTC (permalink / raw)
  To: David Woodhouse, x86
  Cc: Stefano Stabellini, Juergen Gross, Paul Durrant, jgrall,
	karahmed, xen-devel


On 1/4/21 8:41 PM, David Woodhouse wrote:
> On Mon, 2021-01-04 at 18:44 -0500, boris.ostrovsky@oracle.com wrote:
>> On 1/4/21 6:19 PM, David Woodhouse wrote:
>>> On Mon, 2021-01-04 at 18:04 -0500, boris.ostrovsky@oracle.com
>>> wrote:
>>>> Why not xen_hvm_cpu_die too? common_cpu_die() sounds like something
>>>> we should do, and the other three we call there will be nops.
>>> native_cpu_die() calls that, and isn't that the function that gets
>>> installed if we don't install our own?
>>
>> True.
>>
>>
>> Still, a Xen guest should call Xen-specific cpu_die() routine if
>> possible. Especially since (now) other cpu (i.e. non-IPI) ops will
>> call Xen versions.
> But as you said, the other three things that xen_hvm_cpu_die() does are
> all no-ops (or at least we'd have to make them no-ops; I haven't
> checked). I don't see the point in calling it, only for it to do
> nothing.


For maintenance purposes. When something gets added in the initialization path (prepare_boot_cpu() and such) we likely want to tear it down in cpu_die(). Today both the native and Xen cpu_die() ops work the same, but things change. And chances are people will not test the no_vector_callback case, so we will then have to hunt down what broke and when.


no_vector_callback is a pretty esoteric case already, so making it take a different code path from the common case, when there is little or no penalty for doing the latter, seems undesirable to me.


>
>>>>>  	smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
>>>>>  	smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi;
>>>>> -	smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
>>>>> -	smp_ops.smp_cpus_done = xen_smp_cpus_done;
>>>>>  }
>>>>>
>>>>>> Also, for the spinlock changes specifically --- I wonder whether it
>>>>>> would be better to reverse initial value of xen_pvspin and set it to
>>>>>> 'true' only if initialization succeeds.
>>>>> I looked at that but it would need to be tristate, since the
>>>>> 'xen_nopvspin' command line option clears it from its default of being
>>>>> enabled.
>>>> Ah, right. How about setting nopvspin in xen_parse_nopvspin()?
>>> That would make the xen_nopvspin command line option disable PV
>>> spinlocks even under KVM.
>>
>> xen_nopvspin is deprecated and nopvspin is recommended anyway so I think it should be OK, no?
> They do different things. If someone has an image (which might run on
> both Xen and KVM) which has xen_nopvspin, we don't suddenly want that
> to start disabling PV spinlocks on KVM too.


True, but this option is deprecated and will be removed in the future.


-boris




* Re: [PATCH 5/5] x86/xen: Don't register PV spinlock IPI when it isn't going to be used
From: David Woodhouse @ 2021-01-05 15:29 UTC (permalink / raw)
  To: boris.ostrovsky, x86
  Cc: Stefano Stabellini, Juergen Gross, Paul Durrant, jgrall,
	karahmed, xen-devel

On Tue, 2021-01-05 at 09:45 -0500, boris.ostrovsky@oracle.com wrote:
> For maintenance purposes. When something gets added in initialization
> path (prepare_boot_cpu() and such) we likely want to tear it down in
> cpu_die(). Today both native and Xen cpu_die() ops work the same but
> things change. And chances are people will not test
> no_vector_callback case so we will then have to hunt down what broke
> when.
> 
> 
> no_vector_callback is a pretty esoteric case already so making it
> take different code path from common case, when there is little or no
> penalty for doing the latter, seems undesirable to me.


That makes sense.

I'll also not even bother to look at those three teardown functions and
work out if it's harmless to call them unconditionally. If the point is
to remind those who come after us, then it's actually far more
effective to have an explicit 'if (xen_have_vector_callback)'.

Looks like this. I suppose I should actually try booting it under real
Xen instead of just KVM-pretending-to-be-Xen, and then I'll repost the
series (which is the top five commits of  
https://git.infradead.org/users/dwmw2/linux.git/shortlog/refs/heads/xen-pci-irq )

From: David Woodhouse <dwmw@amazon.co.uk>
Subject: [PATCH] x86/xen: Fix xen_hvm_smp_init() when vector callback not available

Only the IPI-related functions in the smp_ops should be conditional
on the vector callback being available. The rest should still happen:

 • xen_hvm_smp_prepare_boot_cpu()

   This function does two things, both of which should still happen if
   there is no vector callback support.

   The call to xen_vcpu_setup() for vCPU0 should still happen as it just
   sets up the vcpu_info for CPU0. That does happen for the secondary
   vCPUs too, from xen_cpu_up_prepare_hvm().

   The second thing it does is call xen_init_spinlocks(), which perhaps
   counter-intuitively should *also* still be happening in the case
   without vector callbacks, so that it can clear its local xen_pvspin
   flag and disable the virt_spin_lock_key accordingly.

   Checking xen_have_vector_callback in xen_init_spinlocks() itself
   would affect PV guests, so set the global nopvspin flag in
   xen_hvm_smp_init() instead, when vector callbacks aren't available.

 • xen_hvm_smp_prepare_cpus()

   This does some IPI-related setup by calling xen_smp_intr_init() and
   xen_init_lock_cpu(), which can be made conditional. And it sets the
   xen_vcpu_id to XEN_VCPU_ID_INVALID for all possible CPUs, which does
   need to happen.

 • xen_smp_cpus_done()

   This offlines any vCPU which doesn't fit in the global shared_info
   page, if separate vcpu_info placement isn't available. That part also
   needs to happen regardless of vector callback support.

 • xen_hvm_cpu_die()

   This doesn't actually do anything other than common_cpu_die() right
   now in the !vector_callback case; all three teardown functions
   it calls should be no-ops. But to guard against future regressions
   it's useful to call it anyway, and for it to explicitly check for
   xen_have_vector_callback before calling those additional functions.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 arch/x86/xen/smp_hvm.c | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/arch/x86/xen/smp_hvm.c b/arch/x86/xen/smp_hvm.c
index f5e7db4f82ab..056430a1080b 100644
--- a/arch/x86/xen/smp_hvm.c
+++ b/arch/x86/xen/smp_hvm.c
@@ -33,9 +33,11 @@ static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
 	int cpu;
 
 	native_smp_prepare_cpus(max_cpus);
-	WARN_ON(xen_smp_intr_init(0));
 
-	xen_init_lock_cpu(0);
+	if (xen_have_vector_callback) {
+		WARN_ON(xen_smp_intr_init(0));
+		xen_init_lock_cpu(0);
+	}
 
 	for_each_possible_cpu(cpu) {
 		if (cpu == 0)
@@ -50,9 +52,11 @@ static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
 static void xen_hvm_cpu_die(unsigned int cpu)
 {
 	if (common_cpu_die(cpu) == 0) {
-		xen_smp_intr_free(cpu);
-		xen_uninit_lock_cpu(cpu);
-		xen_teardown_timer(cpu);
+		if (xen_have_vector_callback) {
+			xen_smp_intr_free(cpu);
+			xen_uninit_lock_cpu(cpu);
+			xen_teardown_timer(cpu);
+		}
 	}
 }
 #else
@@ -64,14 +68,17 @@ static void xen_hvm_cpu_die(unsigned int cpu)
 
 void __init xen_hvm_smp_init(void)
 {
-	if (!xen_have_vector_callback)
+	smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
+	smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
+	smp_ops.smp_cpus_done = xen_smp_cpus_done;
+	smp_ops.cpu_die = xen_hvm_cpu_die;
+
+	if (!xen_have_vector_callback) {
+		nopvspin = true;
 		return;
+	}
 
-	smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
 	smp_ops.smp_send_reschedule = xen_smp_send_reschedule;
-	smp_ops.cpu_die = xen_hvm_cpu_die;
 	smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
 	smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi;
-	smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
-	smp_ops.smp_cpus_done = xen_smp_cpus_done;
 }
-- 
2.29.2


