xen-devel.lists.xenproject.org archive mirror
* [PATCH v5 0/2] XSA-343 followup patches
@ 2020-11-09  6:41 Juergen Gross
  2020-11-09  6:41 ` [PATCH v5 1/2] xen/events: access last_priority and last_vcpu_id together Juergen Gross
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Juergen Gross @ 2020-11-09  6:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu,
	Roger Pau Monné

The patches for XSA-343 produced some fallout; in particular, the
event channel locking has proven to be problematic.

Patch 1 targets FIFO event channels, avoiding races in the case that
the FIFO queue has been changed for a specific event channel.

The second patch modifies the per event channel locking scheme in
order to avoid deadlocks and problems caused by the XSA-343 patches
having changed the event channel lock to require IRQs off.

Changes in V5:
- moved evtchn_write_[un]lock() to event_channel.c (Jan Beulich)
- used normal read_lock() in some cases (Jan Beulich)

Changes in V4:
- switched to real rwlock

Changes in V3:
- addressed comments

Juergen Gross (2):
  xen/events: access last_priority and last_vcpu_id together
  xen/evtchn: rework per event channel lock

 xen/arch/x86/irq.c         |   6 +-
 xen/arch/x86/pv/shim.c     |   9 +--
 xen/common/event_channel.c | 144 +++++++++++++++++++++----------------
 xen/common/event_fifo.c    |  25 +++++--
 xen/include/xen/event.h    |  32 ++++++---
 xen/include/xen/sched.h    |   8 ++-
 6 files changed, 136 insertions(+), 88 deletions(-)

-- 
2.26.2




* [PATCH v5 1/2] xen/events: access last_priority and last_vcpu_id together
  2020-11-09  6:41 [PATCH v5 0/2] XSA-343 followup patches Juergen Gross
@ 2020-11-09  6:41 ` Juergen Gross
  2020-11-09  6:41 ` [PATCH v5 2/2] xen/evtchn: rework per event channel lock Juergen Gross
  2020-11-09 12:00 ` [PATCH v5 0/2] XSA-343 followup patches Jan Beulich
  2 siblings, 0 replies; 8+ messages in thread
From: Juergen Gross @ 2020-11-09  6:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu,
	Julien Grall

The queue for a FIFO event depends on the vcpu_id and the priority of
the event. When sending an event, the event might need to change
queues, and the old queue needs to be accessed in order to keep the
links between queue elements intact. For this purpose the event
channel contains last_priority and last_vcpu_id values identifying
the old queue.

In order to avoid races, always access last_priority and last_vcpu_id
together via a single atomic operation, avoiding any inconsistencies.
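
A minimal sketch of the access pattern this patch introduces (the
union mirrors the one added to event_fifo.c below; read_atomic() and
write_atomic() are Xen's single-copy-atomic accessors):

    union evtchn_fifo_lastq {
        uint32_t raw;
        struct {
            uint8_t last_priority;
            uint16_t last_vcpu_id;
        };
    };

    /*
     * Reader side: one 32-bit load yields a consistent pair. Two
     * separate loads could observe a new last_vcpu_id together with
     * an old last_priority (or vice versa), i.e. a queue that never
     * existed.
     */
    union evtchn_fifo_lastq lastq;
    lastq.raw = read_atomic(&evtchn->fifo_lastq);

    /* Writer side: publish both values with one 32-bit store. */
    union evtchn_fifo_lastq newq = { };
    newq.last_vcpu_id = v->vcpu_id;
    newq.last_priority = q->priority;
    write_atomic(&evtchn->fifo_lastq, newq.raw);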

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 xen/common/event_fifo.c | 25 +++++++++++++++++++------
 xen/include/xen/sched.h |  3 +--
 2 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
index c6e58d2a1a..79090c04ca 100644
--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -42,6 +42,14 @@ struct evtchn_fifo_domain {
     unsigned int num_evtchns;
 };
 
+union evtchn_fifo_lastq {
+    uint32_t raw;
+    struct {
+        uint8_t last_priority;
+        uint16_t last_vcpu_id;
+    };
+};
+
 static inline event_word_t *evtchn_fifo_word_from_port(const struct domain *d,
                                                        unsigned int port)
 {
@@ -86,16 +94,18 @@ static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
     struct vcpu *v;
     struct evtchn_fifo_queue *q, *old_q;
     unsigned int try;
+    union evtchn_fifo_lastq lastq;
 
     for ( try = 0; try < 3; try++ )
     {
-        v = d->vcpu[evtchn->last_vcpu_id];
-        old_q = &v->evtchn_fifo->queue[evtchn->last_priority];
+        lastq.raw = read_atomic(&evtchn->fifo_lastq);
+        v = d->vcpu[lastq.last_vcpu_id];
+        old_q = &v->evtchn_fifo->queue[lastq.last_priority];
 
         spin_lock_irqsave(&old_q->lock, *flags);
 
-        v = d->vcpu[evtchn->last_vcpu_id];
-        q = &v->evtchn_fifo->queue[evtchn->last_priority];
+        v = d->vcpu[lastq.last_vcpu_id];
+        q = &v->evtchn_fifo->queue[lastq.last_priority];
 
         if ( old_q == q )
             return old_q;
@@ -246,8 +256,11 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
         /* Moved to a different queue? */
         if ( old_q != q )
         {
-            evtchn->last_vcpu_id = v->vcpu_id;
-            evtchn->last_priority = q->priority;
+            union evtchn_fifo_lastq lastq = { };
+
+            lastq.last_vcpu_id = v->vcpu_id;
+            lastq.last_priority = q->priority;
+            write_atomic(&evtchn->fifo_lastq, lastq.raw);
 
             spin_unlock_irqrestore(&old_q->lock, flags);
             spin_lock_irqsave(&q->lock, flags);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index d8ed83f869..a298ff4df8 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -114,8 +114,7 @@ struct evtchn
         u16 virq;      /* state == ECS_VIRQ */
     } u;
     u8 priority;
-    u8 last_priority;
-    u16 last_vcpu_id;
+    u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
 #ifdef CONFIG_XSM
     union {
 #ifdef XSM_NEED_GENERIC_EVTCHN_SSID
-- 
2.26.2




* [PATCH v5 2/2] xen/evtchn: rework per event channel lock
  2020-11-09  6:41 [PATCH v5 0/2] XSA-343 followup patches Juergen Gross
  2020-11-09  6:41 ` [PATCH v5 1/2] xen/events: access last_priority and last_vcpu_id together Juergen Gross
@ 2020-11-09  6:41 ` Juergen Gross
  2020-11-09 11:58   ` Jan Beulich
  2020-11-09 12:00 ` [PATCH v5 0/2] XSA-343 followup patches Jan Beulich
  2 siblings, 1 reply; 8+ messages in thread
From: Juergen Gross @ 2020-11-09  6:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Jan Beulich, Andrew Cooper, Roger Pau Monné,
	Wei Liu, George Dunlap, Ian Jackson, Julien Grall,
	Stefano Stabellini

Currently the lock for a single event channel needs to be taken with
interrupts off, which causes deadlocks in some cases.

Rework the per event channel lock to be non-blocking for the case of
sending an event, removing the need to disable interrupts for taking
the lock.

The lock is needed to avoid races between event channel state changes
(creation, closing, binding) and normal operations (set pending,
[un]masking, priority changes).

Use a rwlock, but with some restrictions (see the usage sketch below):

- normal operations use read_trylock(); if the lock cannot be obtained,
  the operation is omitted or a default state is returned

- closing an event channel uses write_lock(), ASSERT()ing that the
  lock is taken as writer only when the state of the event channel,
  either before or after the locked region, is appropriate (either
  free or unbound).
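
A minimal usage sketch of the resulting scheme (the helpers are the
ones introduced by this patch in event_channel.c and event.h):

    /* Normal operation, e.g. sending an event: non-blocking. */
    if ( evtchn_read_trylock(chn) )
    {
        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
        evtchn_read_unlock(chn);
    }
    /*
     * else: a concurrent state change is in progress; the channel is
     * treated as free/unbound and the event is simply dropped.
     */

    /* State change, e.g. binding an unbound channel: exclusive. */
    evtchn_write_lock(chn);    /* records chn->state for the ASSERT() */
    chn->state = ECS_VIRQ;
    /* ... further channel setup ... */
    evtchn_write_unlock(chn);  /* ASSERT()s free/unbound before or after */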

Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V4:
- switch to rwlock
- add ASSERT() to verify correct write_lock() usage

V3:
- corrected a copy-and-paste error (Jan Beulich)
- corrected unlocking in two cases (Jan Beulich)
- renamed evtchn_read_trylock() (Jan Beulich)
- added some comments and an ASSERT() for evtchn_write_lock()
- set EVENT_WRITE_LOCK_INC to INT_MIN

V2:
- added needed barriers

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/arch/x86/irq.c         |   6 +-
 xen/arch/x86/pv/shim.c     |   9 +--
 xen/common/event_channel.c | 144 +++++++++++++++++++++----------------
 xen/include/xen/event.h    |  32 ++++++---
 xen/include/xen/sched.h    |   5 +-
 5 files changed, 116 insertions(+), 80 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 93c4fb9a79..8d1f9a9fc6 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2495,14 +2495,12 @@ static void dump_irqs(unsigned char key)
                 pirq = domain_irq_to_pirq(d, irq);
                 info = pirq_info(d, pirq);
                 evtchn = evtchn_from_port(d, info->evtchn);
-                local_irq_disable();
-                if ( spin_trylock(&evtchn->lock) )
+                if ( evtchn_read_trylock(evtchn) )
                 {
                     pending = evtchn_is_pending(d, evtchn);
                     masked = evtchn_is_masked(d, evtchn);
-                    spin_unlock(&evtchn->lock);
+                    evtchn_read_unlock(evtchn);
                 }
-                local_irq_enable();
                 printk("d%d:%3d(%c%c%c)%c",
                        d->domain_id, pirq, "-P?"[pending],
                        "-M?"[masked], info->masked ? 'M' : '-',
diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index 9aef7a860a..b4e83e0778 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -660,11 +660,12 @@ void pv_shim_inject_evtchn(unsigned int port)
     if ( port_is_valid(guest, port) )
     {
         struct evtchn *chn = evtchn_from_port(guest, port);
-        unsigned long flags;
 
-        spin_lock_irqsave(&chn->lock, flags);
-        evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
-        spin_unlock_irqrestore(&chn->lock, flags);
+        if ( evtchn_read_trylock(chn) )
+        {
+            evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
+            evtchn_read_unlock(chn);
+        }
     }
 }
 
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index cd4a2c0501..4a6b6fc5ad 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -50,6 +50,34 @@
 
 #define consumer_is_xen(e) (!!(e)->xen_consumer)
 
+/*
+ * Lock an event channel exclusively. This is allowed only when the channel is
+ * free or unbound either when taking or when releasing the lock, as any
+ * concurrent operation on the event channel using evtchn_read_trylock() will
+ * just assume the event channel is free or unbound at the moment when the
+ * evtchn_read_trylock() returns false.
+ */
+static inline void evtchn_write_lock(struct evtchn *evtchn)
+{
+    write_lock(&evtchn->lock);
+
+    evtchn->old_state = evtchn->state;
+}
+
+static inline void evtchn_write_unlock(struct evtchn *evtchn)
+{
+    /* Enforce lock discipline. */
+    ASSERT(evtchn->old_state == ECS_FREE || evtchn->old_state == ECS_UNBOUND ||
+           evtchn->state == ECS_FREE || evtchn->state == ECS_UNBOUND);
+
+    write_unlock(&evtchn->lock);
+}
+
+static inline void evtchn_read_lock(struct evtchn *evtchn)
+{
+    read_lock(&evtchn->lock);
+}
+
 /*
  * The function alloc_unbound_xen_event_channel() allows an arbitrary
  * notifier function to be specified. However, very few unique functions
@@ -133,7 +161,7 @@ static struct evtchn *alloc_evtchn_bucket(struct domain *d, unsigned int port)
             return NULL;
         }
         chn[i].port = port + i;
-        spin_lock_init(&chn[i].lock);
+        rwlock_init(&chn[i].lock);
     }
     return chn;
 }
@@ -255,7 +283,6 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     int            port;
     domid_t        dom = alloc->dom;
     long           rc;
-    unsigned long  flags;
 
     d = rcu_lock_domain_by_any_id(dom);
     if ( d == NULL )
@@ -271,14 +298,14 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     if ( rc )
         goto out;
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state = ECS_UNBOUND;
     if ( (chn->u.unbound.remote_domid = alloc->remote_dom) == DOMID_SELF )
         chn->u.unbound.remote_domid = current->domain->domain_id;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     alloc->port = port;
 
@@ -291,32 +318,26 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
 }
 
 
-static unsigned long double_evtchn_lock(struct evtchn *lchn,
-                                        struct evtchn *rchn)
+static void double_evtchn_lock(struct evtchn *lchn, struct evtchn *rchn)
 {
-    unsigned long flags;
-
     if ( lchn <= rchn )
     {
-        spin_lock_irqsave(&lchn->lock, flags);
+        evtchn_write_lock(lchn);
         if ( lchn != rchn )
-            spin_lock(&rchn->lock);
+            evtchn_write_lock(rchn);
     }
     else
     {
-        spin_lock_irqsave(&rchn->lock, flags);
-        spin_lock(&lchn->lock);
+        evtchn_write_lock(rchn);
+        evtchn_write_lock(lchn);
     }
-
-    return flags;
 }
 
-static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn,
-                                 unsigned long flags)
+static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn)
 {
     if ( lchn != rchn )
-        spin_unlock(&lchn->lock);
-    spin_unlock_irqrestore(&rchn->lock, flags);
+        evtchn_write_unlock(lchn);
+    evtchn_write_unlock(rchn);
 }
 
 static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
@@ -326,7 +347,6 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
     int            lport, rport = bind->remote_port;
     domid_t        rdom = bind->remote_dom;
     long           rc;
-    unsigned long  flags;
 
     if ( rdom == DOMID_SELF )
         rdom = current->domain->domain_id;
@@ -362,7 +382,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
     if ( rc )
         goto out;
 
-    flags = double_evtchn_lock(lchn, rchn);
+    double_evtchn_lock(lchn, rchn);
 
     lchn->u.interdomain.remote_dom  = rd;
     lchn->u.interdomain.remote_port = rport;
@@ -379,7 +399,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
      */
     evtchn_port_set_pending(ld, lchn->notify_vcpu_id, lchn);
 
-    double_evtchn_unlock(lchn, rchn, flags);
+    double_evtchn_unlock(lchn, rchn);
 
     bind->local_port = lport;
 
@@ -402,7 +422,6 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
     struct domain *d = current->domain;
     int            virq = bind->virq, vcpu = bind->vcpu;
     int            rc = 0;
-    unsigned long  flags;
 
     if ( (virq < 0) || (virq >= ARRAY_SIZE(v->virq_to_evtchn)) )
         return -EINVAL;
@@ -440,14 +459,14 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
 
     chn = evtchn_from_port(d, port);
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state          = ECS_VIRQ;
     chn->notify_vcpu_id = vcpu;
     chn->u.virq         = virq;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     v->virq_to_evtchn[virq] = bind->port = port;
 
@@ -464,7 +483,6 @@ static long evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
     struct domain *d = current->domain;
     int            port, vcpu = bind->vcpu;
     long           rc = 0;
-    unsigned long  flags;
 
     if ( domain_vcpu(d, vcpu) == NULL )
         return -ENOENT;
@@ -476,13 +494,13 @@ static long evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
 
     chn = evtchn_from_port(d, port);
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state          = ECS_IPI;
     chn->notify_vcpu_id = vcpu;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     bind->port = port;
 
@@ -526,7 +544,6 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
     struct pirq   *info;
     int            port = 0, pirq = bind->pirq;
     long           rc;
-    unsigned long  flags;
 
     if ( (pirq < 0) || (pirq >= d->nr_pirqs) )
         return -EINVAL;
@@ -559,14 +576,14 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
         goto out;
     }
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state  = ECS_PIRQ;
     chn->u.pirq.irq = pirq;
     link_pirq_port(port, chn, v);
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     bind->port = port;
 
@@ -587,7 +604,6 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
     struct evtchn *chn1, *chn2;
     int            port2;
     long           rc = 0;
-    unsigned long  flags;
 
  again:
     spin_lock(&d1->event_lock);
@@ -688,14 +704,14 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
         BUG_ON(chn2->state != ECS_INTERDOMAIN);
         BUG_ON(chn2->u.interdomain.remote_dom != d1);
 
-        flags = double_evtchn_lock(chn1, chn2);
+        double_evtchn_lock(chn1, chn2);
 
         evtchn_free(d1, chn1);
 
         chn2->state = ECS_UNBOUND;
         chn2->u.unbound.remote_domid = d1->domain_id;
 
-        double_evtchn_unlock(chn1, chn2, flags);
+        double_evtchn_unlock(chn1, chn2);
 
         goto out;
 
@@ -703,9 +719,9 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
         BUG();
     }
 
-    spin_lock_irqsave(&chn1->lock, flags);
+    evtchn_write_lock(chn1);
     evtchn_free(d1, chn1);
-    spin_unlock_irqrestore(&chn1->lock, flags);
+    evtchn_write_unlock(chn1);
 
  out:
     if ( d2 != NULL )
@@ -725,7 +741,6 @@ int evtchn_send(struct domain *ld, unsigned int lport)
     struct evtchn *lchn, *rchn;
     struct domain *rd;
     int            rport, ret = 0;
-    unsigned long  flags;
 
     if ( !port_is_valid(ld, lport) )
         return -EINVAL;
@@ -738,7 +753,7 @@ int evtchn_send(struct domain *ld, unsigned int lport)
 
     lchn = evtchn_from_port(ld, lport);
 
-    spin_lock_irqsave(&lchn->lock, flags);
+    evtchn_read_lock(lchn);
 
     /* Guest cannot send via a Xen-attached event channel. */
     if ( unlikely(consumer_is_xen(lchn)) )
@@ -773,7 +788,7 @@ int evtchn_send(struct domain *ld, unsigned int lport)
     }
 
 out:
-    spin_unlock_irqrestore(&lchn->lock, flags);
+    evtchn_read_unlock(lchn);
 
     return ret;
 }
@@ -806,9 +821,11 @@ void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq)
 
     d = v->domain;
     chn = evtchn_from_port(d, port);
-    spin_lock(&chn->lock);
-    evtchn_port_set_pending(d, v->vcpu_id, chn);
-    spin_unlock(&chn->lock);
+    if ( evtchn_read_trylock(chn) )
+    {
+        evtchn_port_set_pending(d, v->vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 
  out:
     spin_unlock_irqrestore(&v->virq_lock, flags);
@@ -837,9 +854,11 @@ void send_guest_global_virq(struct domain *d, uint32_t virq)
         goto out;
 
     chn = evtchn_from_port(d, port);
-    spin_lock(&chn->lock);
-    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
-    spin_unlock(&chn->lock);
+    if ( evtchn_read_trylock(chn) )
+    {
+        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 
  out:
     spin_unlock_irqrestore(&v->virq_lock, flags);
@@ -849,7 +868,6 @@ void send_guest_pirq(struct domain *d, const struct pirq *pirq)
 {
     int port;
     struct evtchn *chn;
-    unsigned long flags;
 
     /*
      * PV guests: It should not be possible to race with __evtchn_close(). The
@@ -864,9 +882,11 @@ void send_guest_pirq(struct domain *d, const struct pirq *pirq)
     }
 
     chn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&chn->lock, flags);
-    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
-    spin_unlock_irqrestore(&chn->lock, flags);
+    if ( evtchn_read_trylock(chn) )
+    {
+        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 }
 
 static struct domain *global_virq_handlers[NR_VIRQS] __read_mostly;
@@ -1068,15 +1088,16 @@ int evtchn_unmask(unsigned int port)
 {
     struct domain *d = current->domain;
     struct evtchn *evtchn;
-    unsigned long flags;
 
     if ( unlikely(!port_is_valid(d, port)) )
         return -EINVAL;
 
     evtchn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&evtchn->lock, flags);
-    evtchn_port_unmask(d, evtchn);
-    spin_unlock_irqrestore(&evtchn->lock, flags);
+    if ( evtchn_read_trylock(evtchn) )
+    {
+        evtchn_port_unmask(d, evtchn);
+        evtchn_read_unlock(evtchn);
+    }
 
     return 0;
 }
@@ -1156,15 +1177,17 @@ static long evtchn_set_priority(const struct evtchn_set_priority *set_priority)
     unsigned int port = set_priority->port;
     struct evtchn *chn;
     long ret;
-    unsigned long flags;
 
     if ( !port_is_valid(d, port) )
         return -EINVAL;
 
     chn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&chn->lock, flags);
+
+    evtchn_read_lock(chn);
+
     ret = evtchn_port_set_priority(d, chn, set_priority->priority);
-    spin_unlock_irqrestore(&chn->lock, flags);
+
+    evtchn_read_unlock(chn);
 
     return ret;
 }
@@ -1332,7 +1355,6 @@ int alloc_unbound_xen_event_channel(
 {
     struct evtchn *chn;
     int            port, rc;
-    unsigned long  flags;
 
     spin_lock(&ld->event_lock);
 
@@ -1345,14 +1367,14 @@ int alloc_unbound_xen_event_channel(
     if ( rc )
         goto out;
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state = ECS_UNBOUND;
     chn->xen_consumer = get_xen_consumer(notification_fn);
     chn->notify_vcpu_id = lvcpu;
     chn->u.unbound.remote_domid = remote_domid;
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     /*
      * Increment ->xen_evtchns /after/ ->active_evtchns. No explicit
@@ -1388,7 +1410,6 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
 {
     struct evtchn *lchn, *rchn;
     struct domain *rd;
-    unsigned long flags;
 
     if ( !port_is_valid(ld, lport) )
     {
@@ -1403,7 +1424,8 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
 
     lchn = evtchn_from_port(ld, lport);
 
-    spin_lock_irqsave(&lchn->lock, flags);
+    if ( !evtchn_read_trylock(lchn) )
+        return;
 
     if ( likely(lchn->state == ECS_INTERDOMAIN) )
     {
@@ -1413,7 +1435,7 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
         evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
     }
 
-    spin_unlock_irqrestore(&lchn->lock, flags);
+    evtchn_read_unlock(lchn);
 }
 
 void evtchn_check_pollers(struct domain *d, unsigned int port)
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 2ed4be78f6..9c84265f53 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -105,6 +105,16 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
 #define bucket_from_port(d, p) \
     ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
 
+static inline bool evtchn_read_trylock(struct evtchn *evtchn)
+{
+    return read_trylock(&evtchn->lock);
+}
+
+static inline void evtchn_read_unlock(struct evtchn *evtchn)
+{
+    read_unlock(&evtchn->lock);
+}
+
 static inline bool_t port_is_valid(struct domain *d, unsigned int p)
 {
     if ( p >= read_atomic(&d->valid_evtchns) )
@@ -234,12 +244,13 @@ static inline bool evtchn_is_masked(const struct domain *d,
 static inline bool evtchn_port_is_masked(struct domain *d, evtchn_port_t port)
 {
     struct evtchn *evtchn = evtchn_from_port(d, port);
-    bool rc;
-    unsigned long flags;
+    bool rc = true;
 
-    spin_lock_irqsave(&evtchn->lock, flags);
-    rc = evtchn_is_masked(d, evtchn);
-    spin_unlock_irqrestore(&evtchn->lock, flags);
+    if ( evtchn_read_trylock(evtchn) )
+    {
+        rc = evtchn_is_masked(d, evtchn);
+        evtchn_read_unlock(evtchn);
+    }
 
     return rc;
 }
@@ -252,12 +263,13 @@ static inline int evtchn_port_poll(struct domain *d, evtchn_port_t port)
     if ( port_is_valid(d, port) )
     {
         struct evtchn *evtchn = evtchn_from_port(d, port);
-        unsigned long flags;
 
-        spin_lock_irqsave(&evtchn->lock, flags);
-        if ( evtchn_usable(evtchn) )
-            rc = evtchn_is_pending(d, evtchn);
-        spin_unlock_irqrestore(&evtchn->lock, flags);
+        if ( evtchn_read_trylock(evtchn) )
+        {
+            if ( evtchn_usable(evtchn) )
+                rc = evtchn_is_pending(d, evtchn);
+            evtchn_read_unlock(evtchn);
+        }
     }
 
     return rc;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index a298ff4df8..66d8f058be 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -85,7 +85,7 @@ extern domid_t hardware_domid;
 
 struct evtchn
 {
-    spinlock_t lock;
+    rwlock_t lock;
 #define ECS_FREE         0 /* Channel is available for use.                  */
 #define ECS_RESERVED     1 /* Channel is reserved.                           */
 #define ECS_UNBOUND      2 /* Channel is waiting to bind to a remote domain. */
@@ -114,6 +114,9 @@ struct evtchn
         u16 virq;      /* state == ECS_VIRQ */
     } u;
     u8 priority;
+#ifndef NDEBUG
+    u8 old_state;      /* State when taking lock in write mode. */
+#endif
     u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
 #ifdef CONFIG_XSM
     union {
-- 
2.26.2




* Re: [PATCH v5 2/2] xen/evtchn: rework per event channel lock
  2020-11-09  6:41 ` [PATCH v5 2/2] xen/evtchn: rework per event channel lock Juergen Gross
@ 2020-11-09 11:58   ` Jan Beulich
  2020-11-09 13:16     ` Jürgen Groß
  0 siblings, 1 reply; 8+ messages in thread
From: Jan Beulich @ 2020-11-09 11:58 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Andrew Cooper, Roger Pau Monné,
	Wei Liu, George Dunlap, Ian Jackson, Julien Grall,
	Stefano Stabellini, xen-devel

On 09.11.2020 07:41, Juergen Gross wrote:
> Currently the lock for a single event channel needs to be taken with
> interrupts off, which causes deadlocks in some cases.
> 
> Rework the per event channel lock to be non-blocking for the case of
> sending an event, removing the need to disable interrupts for taking
> the lock.
> 
> The lock is needed to avoid races between event channel state changes
> (creation, closing, binding) and normal operations (set pending,
> [un]masking, priority changes).
> 
> Use a rwlock, but with some restrictions (see the usage sketch below):
> 
> - normal operations use read_trylock(); if the lock cannot be obtained,
>   the operation is omitted or a default state is returned
> 
> - closing an event channel uses write_lock(), ASSERT()ing that the
>   lock is taken as writer only when the state of the event channel,
>   either before or after the locked region, is appropriate (either
>   free or unbound).

This has become stale, and may have been incomplete already before:
- Normal operations now may use two different approaches. Which one
is to be used when would be worth writing down here.
- write_lock() use goes beyond closing.

> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -2495,14 +2495,12 @@ static void dump_irqs(unsigned char key)
>                  pirq = domain_irq_to_pirq(d, irq);
>                  info = pirq_info(d, pirq);
>                  evtchn = evtchn_from_port(d, info->evtchn);
> -                local_irq_disable();
> -                if ( spin_trylock(&evtchn->lock) )
> +                if ( evtchn_read_trylock(evtchn) )
>                  {
>                      pending = evtchn_is_pending(d, evtchn);
>                      masked = evtchn_is_masked(d, evtchn);
> -                    spin_unlock(&evtchn->lock);
> +                    evtchn_read_unlock(evtchn);
>                  }
> -                local_irq_enable();
>                  printk("d%d:%3d(%c%c%c)%c",
>                         d->domain_id, pirq, "-P?"[pending],
>                         "-M?"[masked], info->masked ? 'M' : '-',

Using trylock here has a reason different from that in sending
functions, aiui. Please say so in the description, to justify
exposure of evtchn_read_lock().

> --- a/xen/arch/x86/pv/shim.c
> +++ b/xen/arch/x86/pv/shim.c
> @@ -660,11 +660,12 @@ void pv_shim_inject_evtchn(unsigned int port)
>      if ( port_is_valid(guest, port) )
>      {
>          struct evtchn *chn = evtchn_from_port(guest, port);
> -        unsigned long flags;
>  
> -        spin_lock_irqsave(&chn->lock, flags);
> -        evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
> -        spin_unlock_irqrestore(&chn->lock, flags);
> +        if ( evtchn_read_trylock(chn) )
> +        {
> +            evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
> +            evtchn_read_unlock(chn);
> +        }

Does this need trylock?

> @@ -1068,15 +1088,16 @@ int evtchn_unmask(unsigned int port)
>  {
>      struct domain *d = current->domain;
>      struct evtchn *evtchn;
> -    unsigned long flags;
>  
>      if ( unlikely(!port_is_valid(d, port)) )
>          return -EINVAL;
>  
>      evtchn = evtchn_from_port(d, port);
> -    spin_lock_irqsave(&evtchn->lock, flags);
> -    evtchn_port_unmask(d, evtchn);
> -    spin_unlock_irqrestore(&evtchn->lock, flags);
> +    if ( evtchn_read_trylock(evtchn) )
> +    {
> +        evtchn_port_unmask(d, evtchn);
> +        evtchn_read_unlock(evtchn);
> +    }

I think this one could as well use plain read_lock().

> @@ -234,12 +244,13 @@ static inline bool evtchn_is_masked(const struct domain *d,
>  static inline bool evtchn_port_is_masked(struct domain *d, evtchn_port_t port)
>  {
>      struct evtchn *evtchn = evtchn_from_port(d, port);
> -    bool rc;
> -    unsigned long flags;
> +    bool rc = true;
>  
> -    spin_lock_irqsave(&evtchn->lock, flags);
> -    rc = evtchn_is_masked(d, evtchn);
> -    spin_unlock_irqrestore(&evtchn->lock, flags);
> +    if ( evtchn_read_trylock(evtchn) )
> +    {
> +        rc = evtchn_is_masked(d, evtchn);
> +        evtchn_read_unlock(evtchn);
> +    }
>  
>      return rc;
>  }
> @@ -252,12 +263,13 @@ static inline int evtchn_port_poll(struct domain *d, evtchn_port_t port)
>      if ( port_is_valid(d, port) )
>      {
>          struct evtchn *evtchn = evtchn_from_port(d, port);
> -        unsigned long flags;
>  
> -        spin_lock_irqsave(&evtchn->lock, flags);
> -        if ( evtchn_usable(evtchn) )
> -            rc = evtchn_is_pending(d, evtchn);
> -        spin_unlock_irqrestore(&evtchn->lock, flags);
> +        if ( evtchn_read_trylock(evtchn) )
> +        {
> +            if ( evtchn_usable(evtchn) )
> +                rc = evtchn_is_pending(d, evtchn);
> +            evtchn_read_unlock(evtchn);
> +        }
>      }
>  
>      return rc;

At least for the latter I suppose it should also be plain read_lock().
We ought to keep the exceptions to where they're actually needed, I
think.

Jan



* Re: [PATCH v5 0/2] XSA-343 followup patches
  2020-11-09  6:41 [PATCH v5 0/2] XSA-343 followup patches Juergen Gross
  2020-11-09  6:41 ` [PATCH v5 1/2] xen/events: access last_priority and last_vcpu_id together Juergen Gross
  2020-11-09  6:41 ` [PATCH v5 2/2] xen/evtchn: rework per event channel lock Juergen Gross
@ 2020-11-09 12:00 ` Jan Beulich
  2020-11-09 13:05   ` Jürgen Groß
  2 siblings, 1 reply; 8+ messages in thread
From: Jan Beulich @ 2020-11-09 12:00 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall,
	Stefano Stabellini, Wei Liu, Roger Pau Monné,
	xen-devel

On 09.11.2020 07:41, Juergen Gross wrote:
> The patches for XSA-343 produced some fallout; in particular, the
> event channel locking has proven to be problematic.
> 
> Patch 1 targets FIFO event channels, avoiding races in the case that
> the FIFO queue has been changed for a specific event channel.
> 
> The second patch modifies the per event channel locking scheme in
> order to avoid deadlocks and problems caused by the XSA-343 patches
> having changed the event channel lock to require IRQs off.
> 
> Changes in V5:
> - moved evtchn_write_[un]lock() to event_channel.c (Jan Beulich)
> - used normal read_lock() in some cases (Jan Beulich)
> 
> Changes in V4:
> - switched to real rwlock
> 
> Changes in V3:
> - addressed comments
> 
> Juergen Gross (2):
>   xen/events: access last_priority and last_vcpu_id together
>   xen/evtchn: rework per event channel lock

Didn't you mean to add a 3rd patch here to drop the 2nd call to
xsm_evtchn_send() again?

Jan



* Re: [PATCH v5 0/2] XSA-343 followup patches
  2020-11-09 12:00 ` [PATCH v5 0/2] XSA-343 followup patches Jan Beulich
@ 2020-11-09 13:05   ` Jürgen Groß
  0 siblings, 0 replies; 8+ messages in thread
From: Jürgen Groß @ 2020-11-09 13:05 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall,
	Stefano Stabellini, Wei Liu, Roger Pau Monné,
	xen-devel



On 09.11.20 13:00, Jan Beulich wrote:
> On 09.11.2020 07:41, Juergen Gross wrote:
>> The patches for XSA-343 produced some fallout; in particular, the
>> event channel locking has proven to be problematic.
>>
>> Patch 1 targets FIFO event channels, avoiding races in the case that
>> the FIFO queue has been changed for a specific event channel.
>>
>> The second patch modifies the per event channel locking scheme in
>> order to avoid deadlocks and problems caused by the XSA-343 patches
>> having changed the event channel lock to require IRQs off.
>>
>> Changes in V5:
>> - moved evtchn_write_[un]lock() to event_channel.c (Jan Beulich)
>> - used normal read_lock() in some cases (Jan Beulich)
>>
>> Changes in V4:
>> - switched to real rwlock
>>
>> Changes in V3:
>> - addressed comments
>>
>> Juergen Gross (2):
>>    xen/events: access last_priority and last_vcpu_id together
>>    xen/evtchn: rework per event channel lock
> 
> Didn't you mean to add a 3rd patch here to drop the 2nd call to
> xsm_evtchn_send() again?

I wanted to do that after this series has been accepted, but I can
include it here, of course.


Juergen




* Re: [PATCH v5 2/2] xen/evtchn: rework per event channel lock
  2020-11-09 11:58   ` Jan Beulich
@ 2020-11-09 13:16     ` Jürgen Groß
  2020-11-09 15:07       ` Jan Beulich
  0 siblings, 1 reply; 8+ messages in thread
From: Jürgen Groß @ 2020-11-09 13:16 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Roger Pau Monné,
	Wei Liu, George Dunlap, Ian Jackson, Julien Grall,
	Stefano Stabellini, xen-devel



On 09.11.20 12:58, Jan Beulich wrote:
> On 09.11.2020 07:41, Juergen Gross wrote:
>> Currently the lock for a single event channel needs to be taken with
>> interrupts off, which causes deadlocks in some cases.
>>
>> Rework the per event channel lock to be non-blocking for the case of
>> sending an event, removing the need to disable interrupts for taking
>> the lock.
>>
>> The lock is needed to avoid races between event channel state changes
>> (creation, closing, binding) and normal operations (set pending,
>> [un]masking, priority changes).
>>
>> Use a rwlock, but with some restrictions (see the usage sketch below):
>>
>> - normal operations use read_trylock(); if the lock cannot be obtained,
>>    the operation is omitted or a default state is returned
>>
>> - closing an event channel uses write_lock(), ASSERT()ing that the
>>    lock is taken as writer only when the state of the event channel,
>>    either before or after the locked region, is appropriate (either
>>    free or unbound).
> 
> This has become stale, and may have been incomplete already before:
> - Normal operations now may use two different approaches. Which one
> is to be used when would be worth writing down here.
> - write_lock() use goes beyond closing.

Yes, you are right.

> 
>> --- a/xen/arch/x86/irq.c
>> +++ b/xen/arch/x86/irq.c
>> @@ -2495,14 +2495,12 @@ static void dump_irqs(unsigned char key)
>>                   pirq = domain_irq_to_pirq(d, irq);
>>                   info = pirq_info(d, pirq);
>>                   evtchn = evtchn_from_port(d, info->evtchn);
>> -                local_irq_disable();
>> -                if ( spin_trylock(&evtchn->lock) )
>> +                if ( evtchn_read_trylock(evtchn) )
>>                   {
>>                       pending = evtchn_is_pending(d, evtchn);
>>                       masked = evtchn_is_masked(d, evtchn);
>> -                    spin_unlock(&evtchn->lock);
>> +                    evtchn_read_unlock(evtchn);
>>                   }
>> -                local_irq_enable();
>>                   printk("d%d:%3d(%c%c%c)%c",
>>                          d->domain_id, pirq, "-P?"[pending],
>>                          "-M?"[masked], info->masked ? 'M' : '-',
> 
> Using trylock here has a reason different from that in sending
> functions, aiui. Please say so in the description, to justify
> exposure of evtchn_read_lock().

Okay.

> 
>> --- a/xen/arch/x86/pv/shim.c
>> +++ b/xen/arch/x86/pv/shim.c
>> @@ -660,11 +660,12 @@ void pv_shim_inject_evtchn(unsigned int port)
>>       if ( port_is_valid(guest, port) )
>>       {
>>           struct evtchn *chn = evtchn_from_port(guest, port);
>> -        unsigned long flags;
>>   
>> -        spin_lock_irqsave(&chn->lock, flags);
>> -        evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
>> -        spin_unlock_irqrestore(&chn->lock, flags);
>> +        if ( evtchn_read_trylock(chn) )
>> +        {
>> +            evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
>> +            evtchn_read_unlock(chn);
>> +        }
> 
> Does this need trylock?

It is called directly from the event upcall, so interrupts should be
off here. Without trylock this would result in check_lock() triggering.
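
For reference, a rough sketch of the rule check_lock() enforces (a
simplification of Xen's lock debugging code; the "try" parameter and
exact behaviour are assumptions here, the real implementation differs
in detail):

    /*
     * Simplified: each lock records whether it is taken with IRQs
     * disabled ("irq_safe"). Mixing both modes on the same lock is an
     * error: a lock taken with IRQs enabled can be interrupted and
     * acquired again from interrupt context, deadlocking. Trylock
     * variants cannot block, so taking them with IRQs off is fine.
     */
    static void check_lock(union lock_debug *debug, bool try)
    {
        bool irq_safe = !local_irq_is_enabled();

        if ( try && irq_safe )
            return;

        if ( unlikely(debug->irq_safe != irq_safe) )
            BUG();
    }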

> 
>> @@ -1068,15 +1088,16 @@ int evtchn_unmask(unsigned int port)
>>   {
>>       struct domain *d = current->domain;
>>       struct evtchn *evtchn;
>> -    unsigned long flags;
>>   
>>       if ( unlikely(!port_is_valid(d, port)) )
>>           return -EINVAL;
>>   
>>       evtchn = evtchn_from_port(d, port);
>> -    spin_lock_irqsave(&evtchn->lock, flags);
>> -    evtchn_port_unmask(d, evtchn);
>> -    spin_unlock_irqrestore(&evtchn->lock, flags);
>> +    if ( evtchn_read_trylock(evtchn) )
>> +    {
>> +        evtchn_port_unmask(d, evtchn);
>> +        evtchn_read_unlock(evtchn);
>> +    }
> 
> I think this one could as well use plain read_lock().

Oh, indeed.

> 
>> @@ -234,12 +244,13 @@ static inline bool evtchn_is_masked(const struct domain *d,
>>   static inline bool evtchn_port_is_masked(struct domain *d, evtchn_port_t port)
>>   {
>>       struct evtchn *evtchn = evtchn_from_port(d, port);
>> -    bool rc;
>> -    unsigned long flags;
>> +    bool rc = true;
>>   
>> -    spin_lock_irqsave(&evtchn->lock, flags);
>> -    rc = evtchn_is_masked(d, evtchn);
>> -    spin_unlock_irqrestore(&evtchn->lock, flags);
>> +    if ( evtchn_read_trylock(evtchn) )
>> +    {
>> +        rc = evtchn_is_masked(d, evtchn);
>> +        evtchn_read_unlock(evtchn);
>> +    }
>>   
>>       return rc;
>>   }
>> @@ -252,12 +263,13 @@ static inline int evtchn_port_poll(struct domain *d, evtchn_port_t port)
>>       if ( port_is_valid(d, port) )
>>       {
>>           struct evtchn *evtchn = evtchn_from_port(d, port);
>> -        unsigned long flags;
>>   
>> -        spin_lock_irqsave(&evtchn->lock, flags);
>> -        if ( evtchn_usable(evtchn) )
>> -            rc = evtchn_is_pending(d, evtchn);
>> -        spin_unlock_irqrestore(&evtchn->lock, flags);
>> +        if ( evtchn_read_trylock(evtchn) )
>> +        {
>> +            if ( evtchn_usable(evtchn) )
>> +                rc = evtchn_is_pending(d, evtchn);
>> +            evtchn_read_unlock(evtchn);
>> +        }
>>       }
>>   
>>       return rc;
> 
> At least for the latter I suppose it should also be plain read_lock().
> We ought to keep the exceptions to where they're actually needed, I
> think.

I think both instances can be switched.


Juergen



* Re: [PATCH v5 2/2] xen/evtchn: rework per event channel lock
  2020-11-09 13:16     ` Jürgen Groß
@ 2020-11-09 15:07       ` Jan Beulich
  0 siblings, 0 replies; 8+ messages in thread
From: Jan Beulich @ 2020-11-09 15:07 UTC (permalink / raw)
  To: Jürgen Groß
  Cc: Andrew Cooper, Roger Pau Monné,
	Wei Liu, George Dunlap, Ian Jackson, Julien Grall,
	Stefano Stabellini, xen-devel

On 09.11.2020 14:16, Jürgen Groß wrote:
> On 09.11.20 12:58, Jan Beulich wrote:
>> On 09.11.2020 07:41, Juergen Gross wrote:
>>> --- a/xen/arch/x86/pv/shim.c
>>> +++ b/xen/arch/x86/pv/shim.c
>>> @@ -660,11 +660,12 @@ void pv_shim_inject_evtchn(unsigned int port)
>>>       if ( port_is_valid(guest, port) )
>>>       {
>>>           struct evtchn *chn = evtchn_from_port(guest, port);
>>> -        unsigned long flags;
>>>   
>>> -        spin_lock_irqsave(&chn->lock, flags);
>>> -        evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
>>> -        spin_unlock_irqrestore(&chn->lock, flags);
>>> +        if ( evtchn_read_trylock(chn) )
>>> +        {
>>> +            evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
>>> +            evtchn_read_unlock(chn);
>>> +        }
>>
>> Does this need trylock?
> 
> It is called directly from the event upcall, so interrupts should be
> off here. Without trylock this would result in check_lock() triggering.

Hmm, right. Since the trylock function needs exposing anyway, this
isn't much of a problem, especially as it fits the pattern of being
a send.

Jan

