From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
George Dunlap <George.Dunlap@eu.citrix.com>,
Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v3 3/5] evtchn: convert vIRQ lock to an r/w one
Date: Mon, 23 Nov 2020 14:28:42 +0100
Message-ID: <d2461bd6-fb2f-447f-11c6-bd8afd573d7b@suse.com>
In-Reply-To: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>

There's no need to serialize all sending of vIRQs; all that's needed
is serialization against the closing of the respective event channels
(so far achieved by means of a barrier). To facilitate the conversion,
switch to an ordinary write-locked region in evtchn_close().
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: Re-base over added new earlier patch.
v2: Don't introduce/use rw_barrier() here. Add comment to
evtchn_bind_virq(). Re-base.
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -160,7 +160,7 @@ struct vcpu *vcpu_create(struct domain *
v->vcpu_id = vcpu_id;
v->dirty_cpu = VCPU_CPU_CLEAN;
- spin_lock_init(&v->virq_lock);
+ rwlock_init(&v->virq_lock);
tasklet_init(&v->continue_hypercall_tasklet, NULL, NULL);
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -475,6 +475,13 @@ int evtchn_bind_virq(evtchn_bind_virq_t
evtchn_write_unlock(chn);
bind->port = port;
+ /*
+ * If anything, the update of virq_to_evtchn[] would need guarding by
+ * virq_lock, but since this is the last action here, there's no strict
+ * need to acquire the lock. Hence holding event_lock isn't helpful
+ * anymore at this point, but utilize that its unlocking acts as the
+ * otherwise necessary smp_wmb() here.
+ */
write_atomic(&v->virq_to_evtchn[virq], port);
out:
@@ -661,10 +668,12 @@ int evtchn_close(struct domain *d1, int
case ECS_VIRQ:
for_each_vcpu ( d1, v )
{
- if ( read_atomic(&v->virq_to_evtchn[chn1->u.virq]) != port1 )
- continue;
- write_atomic(&v->virq_to_evtchn[chn1->u.virq], 0);
- spin_barrier(&v->virq_lock);
+ unsigned long flags;
+
+ write_lock_irqsave(&v->virq_lock, flags);
+ if ( read_atomic(&v->virq_to_evtchn[chn1->u.virq]) == port1 )
+ write_atomic(&v->virq_to_evtchn[chn1->u.virq], 0);
+ write_unlock_irqrestore(&v->virq_lock, flags);
}
break;
@@ -813,7 +822,7 @@ void send_guest_vcpu_virq(struct vcpu *v
ASSERT(!virq_is_global(virq));
- spin_lock_irqsave(&v->virq_lock, flags);
+ read_lock_irqsave(&v->virq_lock, flags);
port = read_atomic(&v->virq_to_evtchn[virq]);
if ( unlikely(port == 0) )
@@ -823,7 +832,7 @@ void send_guest_vcpu_virq(struct vcpu *v
evtchn_port_set_pending(d, v->vcpu_id, evtchn_from_port(d, port));
out:
- spin_unlock_irqrestore(&v->virq_lock, flags);
+ read_unlock_irqrestore(&v->virq_lock, flags);
}
void send_guest_global_virq(struct domain *d, uint32_t virq)
@@ -842,7 +851,7 @@ void send_guest_global_virq(struct domai
if ( unlikely(v == NULL) )
return;
- spin_lock_irqsave(&v->virq_lock, flags);
+ read_lock_irqsave(&v->virq_lock, flags);
port = read_atomic(&v->virq_to_evtchn[virq]);
if ( unlikely(port == 0) )
@@ -852,7 +861,7 @@ void send_guest_global_virq(struct domai
evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
out:
- spin_unlock_irqrestore(&v->virq_lock, flags);
+ read_unlock_irqrestore(&v->virq_lock, flags);
}
void send_guest_pirq(struct domain *d, const struct pirq *pirq)
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -238,7 +238,7 @@ struct vcpu
/* IRQ-safe virq_lock protects against delivering VIRQ to stale evtchn. */
evtchn_port_t virq_to_evtchn[NR_VIRQS];
- spinlock_t virq_lock;
+ rwlock_t virq_lock;
/* Tasklet for continue_hypercall_on_cpu(). */
struct tasklet continue_hypercall_tasklet;