From: Jan Beulich <jbeulich@suse.com>
To: "Jürgen Groß" <jgross@suse.com>
Cc: Julien Grall <julien@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Daniel de Graaf <dgdegra@tycho.nsa.gov>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH] evtchn/Flask: pre-allocate node on send path
Date: Fri, 25 Sep 2020 17:30:28 +0200	[thread overview]
Message-ID: <fa46f884-ce64-d17d-9924-f90d80ad6dee@suse.com> (raw)
In-Reply-To: <4390eb35-768e-1ca1-099e-da33da9f939e@suse.com>

On 25.09.2020 16:57, Jürgen Groß wrote:
> On 25.09.20 14:21, Jan Beulich wrote:
>> On 25.09.2020 12:34, Julien Grall wrote:
>>> On 24/09/2020 11:53, Jan Beulich wrote:
>>>> xmalloc() & Co may not be called with IRQs off, or else check_lock()
>>>> will have its assertion trigger about locks getting acquired
>>>> inconsistently. Re-arranging the locking in evtchn_send() doesn't seem
>>>> very reasonable, especially since the per-channel lock was introduced to
>>>> avoid acquiring the per-domain event lock on the send paths. Issue a
>>>> second call to xsm_evtchn_send() instead, before acquiring the lock, to
>>>> give XSM / Flask a chance to pre-allocate whatever it may need.
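
[ To make the intended flow concrete, the description above amounts to
roughly the following sketch - not the actual patch; the lookup helper
and the exact locking / error handling are simplified, and signatures
may differ from the real event channel code: ]

    /* Sketch only: port validation and most error paths omitted. */
    static int evtchn_send(struct domain *ld, evtchn_port_t lport)
    {
        struct evtchn *lchn = evtchn_from_port(ld, lport);
        unsigned long flags;
        int rc;

        /*
         * First XSM call, still with IRQs on: Flask may need to
         * xmalloc() an AVC node here, which check_lock() would
         * (rightly) object to if done with IRQs off.
         */
        rc = xsm_evtchn_send(XSM_HOOK, ld, lchn);
        if ( rc )
            return rc;

        spin_lock_irqsave(&lchn->lock, flags);  /* IRQs off from here */

        /*
         * Second XSM call, now under the per-channel lock: the check
         * is expected to be satisfied from the entry allocated above,
         * so no further allocation should be necessary.
         */
        rc = xsm_evtchn_send(XSM_HOOK, ld, lchn);
        if ( !rc )
        {
            /* ... actual notification logic ... */
        }

        spin_unlock_irqrestore(&lchn->lock, flags);
        return rc;
    }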
>>>
>>> This is the sort of fall-out I was expecting when we decided to turn off
>>> the interrupts for a big chunk of code. I couldn't find any at the time
>>> though...
>>>
>>> Can you remind me which caller of send_guest_{global,vcpu}_virq() will
>>> call them with interrupts off?
>>
>> I don't recall which one of the two it was that I hit; we wanted
>> both to use the lock anyway. send_guest_pirq() very clearly also
>> gets called with IRQs off.
>>
>>> Would it be possible to consider deferring the call to a softirq
>>> tasklet? If so, this would allow us to turn the interrupts on again.
>>
>> Of course this is in principle possible; the question is how
>> involved this is going to get. However, on x86 oprofile's call to
>> send_guest_vcpu_virq() can't easily be replaced - it's dangerous
>> enough already that it involves locks in NMI context. I don't
>> fancy seeing it use more commonly used ones.
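
[ For reference, the tasklet route under discussion would look roughly
like this - a sketch only; struct deferred_virq, defer_virq(), and the
init helper are hypothetical names, not existing code: ]

    #include <xen/tasklet.h>
    #include <xen/event.h>

    /* Hypothetical per-vCPU deferral state. */
    struct deferred_virq {
        struct tasklet tasklet;
        struct vcpu *v;
        uint32_t virq;
    };

    static void virq_tasklet_fn(void *data)
    {
        struct deferred_virq *dv = data;

        /* Runs as a softirq with IRQs enabled, so allocations inside
         * Flask are unproblematic here. */
        send_guest_vcpu_virq(dv->v, dv->virq);
    }

    /* One-time setup, e.g. during vCPU initialisation. */
    static void defer_virq_init(struct deferred_virq *dv)
    {
        softirq_tasklet_init(&dv->tasklet, virq_tasklet_fn, dv);
    }

    /* Used where send_guest_vcpu_virq() runs with IRQs off today. */
    static void defer_virq(struct deferred_virq *dv, struct vcpu *v,
                           uint32_t virq)
    {
        dv->v = v;
        dv->virq = virq;
        tasklet_schedule(&dv->tasklet);
    }

[ Note that tasklet_schedule() itself acquires a lock, so this still
wouldn't be usable directly from NMI context - which is exactly the
oprofile concern above. ]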
> 
> Is it really so hard to avoid calling send_guest_vcpu_virq() in NMI
> context?
> 
> Today it is called only if the NMI happened inside the guest, so the
> main Xen stack is unused at this time. It should be rather
> straightforward to mimic a stack frame on the main stack and iret to a
> special handler from NMI context. This handler would then call
> send_guest_vcpu_virq() and return to the guest.
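
[ The scheme described would amount to something like the following,
purely conceptual sketch - every helper name here is made up, and the
real thing would need careful assembly and guest state handling: ]

    static void virq_trampoline(void)
    {
        /* No longer in NMI context: taking ordinary locks is fine. */
        send_guest_vcpu_virq(current, VIRQ_XENOPROF);
        restore_and_return_to_guest();               /* hypothetical */
    }

    void nmi_oprofile_callback(struct cpu_user_regs *regs)
    {
        if ( guest_mode(regs) )  /* NMI interrupted the guest, so the
                                    main Xen stack is unused */
        {
            /*
             * Divert the iret at the end of the NMI handler: instead
             * of returning straight to the guest, land in
             * virq_trampoline() on the (currently idle) main stack.
             */
            stash_guest_regs(regs);                  /* hypothetical */
            regs->rip = (unsigned long)virq_trampoline;
            regs->rsp = (unsigned long)primary_stack_top(); /* hypothetical */
            regs->cs  = __HYPERVISOR_CS;
            regs->ss  = __HYPERVISOR_DS;
        }
    }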

Quite possible that it's not overly difficult to arrange for. But
even with this out of the way I don't really view this softirq
tasklet route as viable; I could be proven wrong by someone
demonstrating that it's sufficiently straightforward.

Jan



Thread overview: 18+ messages
2020-09-24 10:53 [PATCH] evtchn/Flask: pre-allocate node on send path Jan Beulich
2020-09-25 10:34 ` Julien Grall
2020-09-25 12:21   ` Jan Beulich
2020-09-25 13:16     ` Julien Grall
2020-09-25 13:58       ` Jan Beulich
2020-09-25 14:33         ` Julien Grall
2020-09-25 15:36           ` Jan Beulich
2020-09-25 16:00             ` Julien Grall
2020-09-25 16:13               ` Jan Beulich
2020-09-25 18:08               ` Jason Andryuk
2020-09-28  7:49                 ` Jan Beulich
2020-09-29 17:20                   ` Jason Andryuk
2020-09-30  6:20                     ` Jan Beulich
2020-09-30 15:06                       ` Jason Andryuk
2020-09-25 14:57     ` Jürgen Groß
2020-09-25 15:30       ` Jan Beulich [this message]
2020-10-01 16:04 ` Julien Grall
2020-10-01 17:27   ` Jason Andryuk
