From: Tamas K Lengyel <tamas@tklengyel.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "Tamas K Lengyel" <tamas.lengyel@intel.com>,
	"Andrew Cooper" <andrew.cooper3@citrix.com>,
	"George Dunlap" <george.dunlap@citrix.com>,
	"Roger Pau Monné" <roger.pau@citrix.com>, "Wei Liu" <wl@xen.org>,
	Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 2/2] x86/hap: Resolve mm-lock order violations when forking VMs with nested p2m
Date: Wed, 6 Jan 2021 11:26:13 -0500
Message-ID: <CABfawh=+nd+Lm59Ofy31yDVvcQ9fYXNbm_NBNvu8xsnxti+8sQ@mail.gmail.com>
In-Reply-To: <a3f12f54-926e-9810-f78f-534f057449de@suse.com>

On Wed, Jan 6, 2021 at 11:11 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 06.01.2021 16:29, Tamas K Lengyel wrote:
> > On Wed, Jan 6, 2021 at 7:03 AM Jan Beulich <jbeulich@suse.com> wrote:
> >> On 04.01.2021 18:41, Tamas K Lengyel wrote:
> >>> @@ -1226,6 +1224,15 @@ int __mem_sharing_unshare_page(struct domain *d,
> >>>          return 0;
> >>>      }
> >>>
> >>> +    /* lock nested p2ms to avoid lock-order violation */
> >>
> >> Would you mind mentioning here the other side of the possible
> >> violation, to aid the reader?
> >
> > You mean what the nested p2m locks would conflict with? I think in the
> > context of mem_sharing it's clear that the only thing it can conflict
> > with is the mem_sharing mm lock.
>
> I don't think it's all this obvious. It wouldn't have been to me, at
> least, without also having this change's description at hand.
>
> >>> +    if ( unlikely(nestedhvm_enabled(d)) )
> >>> +    {
> >>> +        int i;
> >>
> >> unsigned int please (also further down), no matter that there may
> >> be other similar examples of (bad) use of plain int.
> >
> > IMHO this is the type of change request that makes absolutely 0
> > difference in the end.
>
> (see below, applies here as well)
>
> >>> +        for ( i = 0; i < MAX_NESTEDP2M; i++ )
> >>> +            p2m_lock(d->arch.nested_p2m[i]);
> >>
> >> From a brief scan, this is the first instance of acquiring all
> >> nested p2m locks in one go. Ordering these by index is perhaps
> >> fine, but I think this wants spelling out in e.g. mm-locks.h. Of
> >> course the question is if you really need to go this far, i.e.
> >> whether really all of the locks need holding. This is even more
> >> so with p2m_flush_table_locked() not really looking to be a
> >> quick operation, when many pages have accumulated for it.
> >> I.e. the overall lock holding time may turn out even more
> >> excessive this way than it apparently already is.
> >
> > I agree this is not ideal but it gets things working without Xen
> > crashing. I would prefer if we could get rid of the mm lock ordering
> > altogether in this context.
>
> How would this do any good? You'd then be at risk of actually
> hitting a lock order violation. These are often quite hard to
> debug.

The whole lock ordering is just a pain and it gets us into situations
like this where we are forced to take a bunch of locks to just change
one thing. I don't have a better solution but I'm also not 100%
convinced that this lock ordering setup is even sane. Sometimes it
really ought to be enough to just take one "mm master lock" without
having to chase down all of them individually.
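
To illustrate what I mean, what I'd like to be able to write is
something along these lines (purely hypothetical sketch; neither
helper exists in Xen today):

    mm_lock_all(d);      /* hypothetical: one master lock covering all mm locks */

    /* ... flush/modify whatever p2m state needs changing ... */

    mm_unlock_all(d);    /* hypothetical counterpart */

rather than enumerating every nested p2m lock by hand and worrying
about the index order they get taken in.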

>
> > We already hold the host p2m lock and the
> > sharing lock, that ought to suffice.
>
> I don't see how holding any locks can prevent lock order
> violations when further ones get acquired. I also didn't think
> the nested p2m locks were redundant with the host one.
>
> >>> --- a/xen/arch/x86/mm/p2m.c
> >>> +++ b/xen/arch/x86/mm/p2m.c
> >>> @@ -1598,8 +1598,17 @@ void
> >>>  p2m_flush_nestedp2m(struct domain *d)
> >>>  {
> >>>      int i;
> >>> +    struct p2m_domain *p2m;
> >>> +
> >>>      for ( i = 0; i < MAX_NESTEDP2M; i++ )
> >>> -        p2m_flush_table(d->arch.nested_p2m[i]);
> >>> +    {
> >>> +        p2m = d->arch.nested_p2m[i];
> >>
> >> Please move the declaration here, making this the variable's
> >> initializer (unless line length constraints make the latter
> >> undesirable).
> >
> > I really don't get what difference this would make.
>
> Both choice of (generally) inappropriate types (further up)
> and placement of declarations (here) (and of course also
> other style violations) can set bad precedents even if in a
> specific case it may not matter much. So yes, it may be
> good enough here, but it would violate our desire to
> - use unsigned types when a variable will hold only non-
>   negative values (which in the general case may improve
>   generated code in particular on x86-64),
> - limit the scopes of variables as much as possible, to
>   more easily spot inappropriate uses (like bypassing
>   initialization).
>
> This code here actually demonstrates such a bad precedent,
> using plain int for the loop induction variable. While I
> can't be any way near sure, there's a certain chance you
> actually took it and copied it to
> __mem_sharing_unshare_page(). The chance of such happening
> is what we'd like to reduce over time.

Yes, I copied it from p2m.c. All I meant was that such minor changes
are generally speaking not worth a round-trip of sending new patches.
I obviously don't care whether this is signed or unsigned. Minor stuff
like that could be changed on commit and is not even worth having a
discussion about.
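
For the record, with both of your points folded in, the loop in the
hunk above would simply become (sketch only, untested):

    for ( unsigned int i = 0; i < MAX_NESTEDP2M; i++ )
    {
        /* Declaration scoped to the loop body, used as initializer. */
        struct p2m_domain *p2m = d->arch.nested_p2m[i];

        /* ... rest of the loop body unchanged ... */
    }

i.e. exactly the kind of mechanical adjustment that could be done on
commit.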

Tamas



Thread overview: 13+ messages
2021-01-04 17:41 [PATCH 1/2] x86/mem_sharing: copy cpuid during vm forking Tamas K Lengyel
2021-01-04 17:41 ` [PATCH 2/2] x86/hap: Resolve mm-lock order violations when forking VMs with nested p2m Tamas K Lengyel
2021-01-06 12:03   ` Jan Beulich
2021-01-06 15:29     ` Tamas K Lengyel
2021-01-06 16:11       ` Jan Beulich
2021-01-06 16:26         ` Tamas K Lengyel [this message]
2021-01-07 12:25           ` Jan Beulich
2021-01-07 12:43             ` Tamas K Lengyel
2021-01-07 12:56               ` Jan Beulich
2021-01-07 13:27                 ` Tamas K Lengyel
2021-01-05  8:40 ` [PATCH 1/2] x86/mem_sharing: copy cpuid during vm forking Jan Beulich
2021-01-05 11:04 ` Andrew Cooper
2021-01-05 15:50   ` Lengyel, Tamas
