From: Peter Gonda <pgonda@google.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: John Sperbeck <jsperbeck@google.com>,
	kvm list <kvm@vger.kernel.org>,
	David Rientjes <rientjes@google.com>,
	Sean Christopherson <seanjc@google.com>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v3] KVM: SEV: Mark nested locking of vcpu->lock
Date: Fri, 29 Apr 2022 09:51:49 -0600
Message-ID: <CAMkAt6qAW5zFyTAqX_Az2DT2J3KROPo4u-Ak1sC0J+UTUeFfXA@mail.gmail.com>
In-Reply-To: <8a2c5f8c-503c-b4f0-75e7-039533c9852d@redhat.com>

On Fri, Apr 29, 2022 at 9:38 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 4/29/22 17:35, Peter Gonda wrote:
> > On Thu, Apr 28, 2022 at 5:59 PM Paolo Bonzini <pbonzini@redhat.com> wrote:
> >>
> >> On 4/28/22 23:28, Peter Gonda wrote:
> >>>
> >>> So when actually trying this out I noticed that we are releasing the
> >>> lock of the current vcpu in the iteration, but we haven't actually
> >>> taken that lock yet. So we'd need to maintain a prev_* pointer and
> >>> release that one.
> >>
> >> Not entirely true because all vcpu->mutex.dep_maps will be for the same
> >> lock.  The dep_map is essentially a fancy string, in this case
> >> "&vcpu->mutex".
> >>
> >> See the definition of mutex_init:
> >>
> >> #define mutex_init(mutex)                                              \
> >> do {                                                                   \
> >>           static struct lock_class_key __key;                            \
> >>                                                                          \
> >>           __mutex_init((mutex), #mutex, &__key);                         \
> >> } while (0)
> >>
> >> and the dep_map field is initialized with
> >>
> >>           lockdep_init_map_wait(&lock->dep_map, name, key, 0, LD_WAIT_SLEEP);
> >>
> >> (i.e. all vcpu->mutexes share the same name and key because they have a
> >> single mutex_init-ialization site).  Lockdep is as crude in theory as it
> >> is effective in practice!
> >>
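To make that concrete, a minimal self-contained sketch (the struct and
function names below are made up for illustration): every mutex that is
initialized through the same textual mutex_init() call site shares that
site's static lock_class_key, and therefore a single lockdep class.

    #include <linux/kernel.h>
    #include <linux/mutex.h>

    struct demo {
            struct mutex lock;
    };

    static struct demo demos[4];

    static void demo_init(void)
    {
            int i;

            /*
             * One mutex_init() site => one static lock_class_key =>
             * one lockdep class covering all four mutexes, just as
             * every vcpu->mutex lands in the single class
             * "&vcpu->mutex".
             */
            for (i = 0; i < ARRAY_SIZE(demos); i++)
                    mutex_init(&demos[i].lock);
    }
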
> >>>
> >>>            bool acquired = false;
> >>>            kvm_for_each_vcpu(...) {
> >>>                    if (!acquired) {
> >>>                            if (mutex_lock_killable_nested(&vcpu->mutex, role))
> >>>                                    goto out_unlock;
> >>>                            acquired = true;
> >>>                    } else {
> >>>                            if (mutex_lock_killable(&vcpu->mutex))
> >>>                                    goto out_unlock;
> >>>                    }
> >>>            }
> >>
> >> This will cause a lockdep splat because it uses subclass 0.  All that
> >> the *_nested functions do is allow you to specify a subclass other
> >> than zero.
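
To spell that out with a tiny example (not from the patch; parent and
child are hypothetical objects): the plain lock functions implicitly
use subclass 0, and the _nested variants let the caller pick another
subclass so lockdep can distinguish two held locks of the same class.
SINGLE_DEPTH_NESTING is simply subclass 1:

    mutex_lock(&parent->lock);              /* subclass 0 */
    mutex_lock_nested(&child->lock, SINGLE_DEPTH_NESTING);
    /* ... */
    mutex_unlock(&child->lock);
    mutex_unlock(&parent->lock);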
> >
> > OK, got it. Here's what I have now for the locking:
> >
> >           kvm_for_each_vcpu (i, vcpu, kvm) {
> >                    if (prev_vcpu != NULL) {
> >                            mutex_release(&prev_vcpu->mutex.dep_map, _THIS_IP_);
> >                            prev_vcpu = NULL;
> >                    }
> >
> >                    if (mutex_lock_killable_nested(&vcpu->mutex, role))
> >                            goto out_unlock;
> >                    prev_vcpu = vcpu;
> >            }
> >
> > But I've noticed the unlocking happens in the wrong order: since we
> > are using kvm_for_each_vcpu(), I think we'd need a
> > kvm_for_each_vcpu_rev() or something, which may be a bit of work:
> > https://elixir.bootlin.com/linux/latest/source/lib/xarray.c#L1119.
>
> No, you don't need any of this.  You can rely on there being only one
> depmap, otherwise you wouldn't need the mock releases and acquires at
> all.  Also the unlocking order does not matter for deadlocks, only the
> locking order does.  You're overdoing it. :)

Hmm, I'm slightly confused here then. If I take your original suggestion of:

        bool acquired = false;
        kvm_for_each_vcpu(...) {
                if (acquired)
                        mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);  <-- Warning here
                if (mutex_lock_killable_nested(&vcpu->mutex, role))
                        goto out_unlock;
                acquired = true;
        }

I get this lockdep warning:
"""
[ 2810.088982] =====================================
[ 2810.093687] WARNING: bad unlock balance detected!
[ 2810.098388] 5.17.0-dbg-DEV #5 Tainted: G           O
[ 2810.103788] -------------------------------------
[ 2810.108490] sev_migrate_tes/107600 is trying to release lock
(&vcpu->mutex) at:
[ 2810.115798] [<ffffffffb7cd3592>] sev_lock_vcpus_for_migration+0xe2/0x1e0
[ 2810.122497] but there are no more locks to release!
[ 2810.127376]
               other info that might help us debug this:
[ 2810.133911] 3 locks held by sev_migrate_tes/107600:
[ 2810.138791]  #0: ffffa6cbf31ca3b8 (&kvm->lock){+.+.}-{3:3}, at:
sev_vm_move_enc_context_from+0x96/0x690
[ 2810.148178]  #1: ffffa6cbf28523b8 (&kvm->lock/1){+.+.}-{3:3}, at:
sev_vm_move_enc_context_from+0xae/0x690
[ 2810.157738]  #2: ffff9220683b01f8 (&vcpu->mutex){+.+.}-{3:3}, at:
sev_lock_vcpus_for_migration+0x89/0x1e0
"""

This makes sense to me, given we are actually trying to release a lock
we haven't taken yet. That's why I thought we'd need to work with a
prev_vcpu pointer. So the behavior I've observed is slightly different
from what I'd expect given your statement "(i.e. all vcpu->mutexes
share the same name and key because they have a single
mutex_init-ialization site)."
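
For what it's worth, the shape that avoids this splat is to take each
vcpu->mutex first and only then drop lockdep's bookkeeping for it, so
every mutex_release() balances an acquisition lockdep has already seen.
A sketch only -- reusing SEV_NR_MIGRATION_ROLES as a spare subclass for
the vcpus after the first is an assumption here:

    bool first = true;

    kvm_for_each_vcpu(i, vcpu, kvm) {
            if (mutex_lock_killable_nested(&vcpu->mutex, role))
                    goto out_unlock;

            if (first) {
                    /* a fresh subclass for the remaining vcpus */
                    role = SEV_NR_MIGRATION_ROLES;
                    first = false;
            } else {
                    /* still held; only lockdep's bookkeeping is dropped */
                    mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
            }
    }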

Ack on the unlocking order; that makes things easier.
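
To make that concrete: lockdep builds its dependency graph from
acquisition order only, so with two hypothetical mutexes a and b of
different classes, both unlock orders below are equally fine:

    mutex_lock(&a); mutex_lock(&b);
    mutex_unlock(&b); mutex_unlock(&a);    /* reverse order */

    mutex_lock(&a); mutex_lock(&b);
    mutex_unlock(&a); mutex_unlock(&b);    /* same order */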

>
> Paolo
>
