From: Peter Gonda <pgonda@google.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: John Sperbeck <jsperbeck@google.com>,
	kvm list <kvm@vger.kernel.org>,
	David Rientjes <rientjes@google.com>,
	Sean Christopherson <seanjc@google.com>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v3] KVM: SEV: Mark nested locking of vcpu->lock
Date: Fri, 29 Apr 2022 11:12:36 -0600
Message-ID: <CAMkAt6o8u9=H_kjr_xyRO05J=RDFUZRiTc_Bw-FFDKEUaiyp2Q@mail.gmail.com>
In-Reply-To: <4afce434-ab25-66d6-76f4-3a987f64e88e@redhat.com>

On Fri, Apr 29, 2022 at 9:59 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 4/29/22 17:51, Peter Gonda wrote:
> >> No, you don't need any of this.  You can rely on there being only one
> >> depmap, otherwise you wouldn't need the mock releases and acquires at
> >> all.  Also the unlocking order does not matter for deadlocks, only the
> >> locking order does.  You're overdoing it. :)
> >
> > Hmm I'm slightly confused here then. If I take your original suggestion of:
> >
> >          bool acquired = false;
> >          kvm_for_each_vcpu(...) {
> >                  if (acquired)
> >                          mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);  <-- Warning here
> >                  if (mutex_lock_killable_nested(&vcpu->mutex, role))
> >                          goto out_unlock;
> >                  acquired = true;
> >
> > """
> > [ 2810.088982] =====================================
> > [ 2810.093687] WARNING: bad unlock balance detected!
> > [ 2810.098388] 5.17.0-dbg-DEV #5 Tainted: G           O
> > [ 2810.103788] -------------------------------------
>
> Ah, even if the contents of the dep_map are the same for all locks,
> lockdep also uses the *pointer* to the dep_map: it tracks
> (class, subclass) -> pointer and checks for a match.
>
> So yeah, prev_vcpu is needed.  The unlock ordering OTOH is irrelevant so
> you don't need to visit the xarray backwards.

Sounds good. Instead of the prev_vcpu solution we could just keep the
first vcpu's dep_map for the source and the target. I think this could
work since the vcpu->mutex.dep_maps do not all point to the same string.

Lock:
         bool acquired = false;
         kvm_for_each_vcpu(...) {
                 if (mutex_lock_killable_nested(&vcpu->mutex, role))
                         goto out_unlock;
                 if (acquired)
                         mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
                 acquired = true;
         }

Unlock:

         bool acquired = true;
         kvm_for_each_vcpu(...) {
                 if (!acquired)
                         mutex_acquire(&vcpu->mutex.dep_map, role, 0, _THIS_IP_);
                 mutex_unlock(&vcpu->mutex);
                 acquired = false;
         }

So when locking we release all of the dep_maps except the first, and
when unlocking we re-acquire all of the dep_maps except the first, so
lockdep only ever tracks the first vcpu->mutex of each VM even though
every mutex is really held. Thoughts?
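
To make it concrete, here is a rough sketch of what the two helpers
could look like with this scheme. The helper names and the 'role'
subclass argument follow the patch under discussion; the 'first' flag
is the same bookkeeping as 'acquired' above, and the error path is
only sketched, so treat this as illustrative rather than final:

    static int sev_lock_vcpus_for_migration(struct kvm *kvm, unsigned int role)
    {
            struct kvm_vcpu *vcpu;
            unsigned long i, j;
            bool first = true;

            kvm_for_each_vcpu(i, vcpu, kvm) {
                    if (mutex_lock_killable_nested(&vcpu->mutex, role))
                            goto out_unlock;

                    if (first)
                            first = false;
                    else
                            /* Drop only the lockdep bookkeeping; the mutex stays held. */
                            mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
            }

            return 0;

    out_unlock:
            first = true;
            kvm_for_each_vcpu(j, vcpu, kvm) {
                    if (j == i)
                            break;

                    if (first)
                            first = false;
                    else
                            /* Re-take the bookkeeping so the unlock is balanced. */
                            mutex_acquire(&vcpu->mutex.dep_map, role, 0, _THIS_IP_);

                    mutex_unlock(&vcpu->mutex);
            }
            return -EINTR;
    }

    static void sev_unlock_vcpus_for_migration(struct kvm *kvm, unsigned int role)
    {
            struct kvm_vcpu *vcpu;
            unsigned long i;
            bool first = true;

            kvm_for_each_vcpu(i, vcpu, kvm) {
                    if (first)
                            first = false;
                    else
                            /* Re-acquire the dep_map before the real unlock. */
                            mutex_acquire(&vcpu->mutex.dep_map, role, 0, _THIS_IP_);

                    mutex_unlock(&vcpu->mutex);
            }
    }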

>
> Paolo
>
