From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
To: Julien Grall <julien.grall.oss@gmail.com>
Cc: Jan Beulich <jbeulich@suse.com>,
Andrew Cooper <andrew.cooper3@citrix.com>,
George Dunlap <George.Dunlap@eu.citrix.com>,
Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
Stefano Stabellini <sstabellini@kernel.org>,
Tamas K Lengyel <lengyelt@ainfosec.com>,
Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>,
Alexandru Isaila <aisaila@bitdefender.com>,
"xen-devel@lists.xenproject.org"
<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with per-channel lock held
Date: Fri, 4 Dec 2020 16:23:03 -0500 [thread overview]
Message-ID: <CABfawhmqk8aOEe4RMUtTjq_jgSCuGrL5vpuNdRBPNmmxRnfxFg@mail.gmail.com> (raw)
In-Reply-To: <CAJ=z9a2yEsvUcu8c=pjv5ymLgLHebZCJcTh7c+yeW44J6jDgWw@mail.gmail.com>
On Fri, Dec 4, 2020 at 2:22 PM Julien Grall <julien.grall.oss@gmail.com> wrote:
>
> On Fri, 4 Dec 2020 at 19:15, Tamas K Lengyel <tamas.k.lengyel@gmail.com> wrote:
> >
> > On Fri, Dec 4, 2020 at 10:29 AM Julien Grall <julien@xen.org> wrote:
> > >
> > >
> > >
> > > On 04/12/2020 15:21, Tamas K Lengyel wrote:
> > > > On Fri, Dec 4, 2020 at 6:29 AM Julien Grall <julien@xen.org> wrote:
> > > >>
> > > >> Hi Jan,
> > > >>
> > > >> On 03/12/2020 10:09, Jan Beulich wrote:
> > > >>> On 02.12.2020 22:10, Julien Grall wrote:
> > > >>>> On 23/11/2020 13:30, Jan Beulich wrote:
> > > >>>>> While there don't look to be any problems with this right now, the lock
> > > >>>>> order implications from holding the lock can be very difficult to follow
> > > >>>>> (and may be easy to violate unknowingly). The present callbacks don't
> > > >>>>> (and no such callback should) have any need for the lock to be held.
> > > >>>>>
> > > >>>>> However, vm_event_disable() frees the structures used by respective
> > > >>>>> callbacks and isn't otherwise synchronized with invocations of these
> > > >>>>> callbacks, so maintain a count of in-progress calls, for evtchn_close()
> > > >>>>> to wait to drop to zero before freeing the port (and dropping the lock).
> > > >>>>
> > > >>>> AFAICT, this callback is not the only place where the synchronization is
> > > >>>> missing in the VM event code.
> > > >>>>
> > > >>>> For instance, vm_event_put_request() can also race against
> > > >>>> vm_event_disable().
> > > >>>>
> > > >>>> So shouldn't we handle this issue properly in VM event?
> > > >>>
> > > >>> I suppose that's a question to the VM event folks rather than me?
> > > >>
> > > >> Yes. From my understanding of Tamas's e-mail, they are relying on the
> > > >> monitoring software to do the right thing.
> > > >>
> > > >> I will refrain from commenting on this approach. However, given that
> > > >> the race is much wider than the event channel, I would recommend not
> > > >> adding more code in the event channel to deal with such a problem.
> > > >>
> > > >> Instead, this should be fixed in the VM event code when someone has time
> > > >> to harden the subsystem.
> > > >
> > > > I double-checked and the disable route is actually more robust: we
> > > > don't just rely on the toolstack doing the right thing. The domain
> > > > gets paused before any calls to vm_event_disable. So I don't think
> > > > there is really a race-condition here.
> > >
> > > The code will *only* pause the monitored domain. I can see two issues:
> > > 1) The toolstack is still sending events while destroy is happening.
> > > This is the race discussed here.
> > > 2) The implementation of vm_event_put_request() suggests that it can
> > > be called with a non-current domain.
> > >
> > > I don't see how just pausing the monitored domain is enough here.
> >
> > Requests only get generated by the monitored domain.
>
> If that's the case, then why is vm_event_put_request() able to
> deal with a non-current domain?
>
> I understand that you are possibly trusting who may call it, but this
> looks quite fragile.
I didn't write the system. You probably want to ask that question of
the original author.
Tamas
Thread overview: 48+ messages
2020-11-23 13:26 [PATCH v3 0/5] evtchn: (not so) recent XSAs follow-on Jan Beulich
2020-11-23 13:28 ` [PATCH v3 1/5] evtchn: drop acquiring of per-channel lock from send_guest_{global,vcpu}_virq() Jan Beulich
2020-12-02 19:03 ` Julien Grall
2020-12-03 9:46 ` Jan Beulich
2020-12-09 9:53 ` Julien Grall
2020-12-09 14:24 ` Jan Beulich
2020-11-23 13:28 ` [PATCH v3 2/5] evtchn: avoid access tearing for ->virq_to_evtchn[] accesses Jan Beulich
2020-12-02 21:14 ` Julien Grall
2020-11-23 13:28 ` [PATCH v3 3/5] evtchn: convert vIRQ lock to an r/w one Jan Beulich
2020-12-09 11:16 ` Julien Grall
2020-11-23 13:29 ` [PATCH v3 4/5] evtchn: convert domain event " Jan Beulich
2020-12-09 11:54 ` Julien Grall
2020-12-11 10:32 ` Jan Beulich
2020-12-11 10:57 ` Julien Grall
2020-12-14 9:40 ` Jan Beulich
2020-12-21 17:45 ` Julien Grall
2020-12-22 9:46 ` Jan Beulich
2020-12-23 11:22 ` Julien Grall
2020-12-23 12:57 ` Jan Beulich
2020-12-23 13:19 ` Julien Grall
2020-12-23 13:36 ` Jan Beulich
2020-11-23 13:30 ` [PATCH v3 5/5] evtchn: don't call Xen consumer callback with per-channel lock held Jan Beulich
2020-11-30 10:39 ` Isaila Alexandru
2020-12-02 21:10 ` Julien Grall
2020-12-03 10:09 ` Jan Beulich
2020-12-03 14:40 ` Tamas K Lengyel
2020-12-04 11:28 ` Julien Grall
2020-12-04 11:48 ` Jan Beulich
2020-12-04 11:51 ` Julien Grall
2020-12-04 12:01 ` Jan Beulich
2020-12-04 15:09 ` Julien Grall
2020-12-07 8:02 ` Jan Beulich
2020-12-07 17:22 ` Julien Grall
2020-12-04 15:21 ` Tamas K Lengyel
2020-12-04 15:29 ` Julien Grall
2020-12-04 19:15 ` Tamas K Lengyel
2020-12-04 19:22 ` Julien Grall
2020-12-04 21:23 ` Tamas K Lengyel [this message]
2020-12-07 15:28 ` Jan Beulich
2020-12-07 17:30 ` Julien Grall
2020-12-07 17:35 ` Tamas K Lengyel
2020-12-23 13:12 ` Jan Beulich
2020-12-23 13:33 ` Julien Grall
2020-12-23 13:41 ` Jan Beulich
2020-12-23 14:44 ` Julien Grall
2020-12-23 14:56 ` Jan Beulich
2020-12-23 15:08 ` Julien Grall
2020-12-23 15:15 ` Tamas K Lengyel