From: Paolo Bonzini <pbonzini@redhat.com>
To: Peter Gonda <pgonda@google.com>
Cc: John Sperbeck <jsperbeck@google.com>,
	kvm list <kvm@vger.kernel.org>,
	David Rientjes <rientjes@google.com>,
	Sean Christopherson <seanjc@google.com>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v3] KVM: SEV: Mark nested locking of vcpu->lock
Date: Fri, 29 Apr 2022 01:59:15 +0200	[thread overview]
Message-ID: <0d282be4-d612-374d-84ba-067994321bab@redhat.com>
In-Reply-To: <CAMkAt6rmDrZfN5DbNOTsKFV57PwEnK2zxgBTCbEPeE206+5v5w@mail.gmail.com>

On 4/28/22 23:28, Peter Gonda wrote:
> 
> So when actually trying this out I noticed that we are releasing the
> current vcpu iterator but really we haven't actually taken that lock
> yet. So we'd need to maintain a prev_* pointer and release that one.

Not entirely true, because all vcpu->mutex.dep_maps refer to the same 
lock class.  The dep_map is essentially a fancy string, in this case 
"&vcpu->mutex".

See the definition of mutex_init:

#define mutex_init(mutex)                                              \
do {                                                                   \
         static struct lock_class_key __key;                            \
                                                                        \
         __mutex_init((mutex), #mutex, &__key);                         \
} while (0)

and the dep_map field is initialized with

         lockdep_init_map_wait(&lock->dep_map, name, key, 0, LD_WAIT_SLEEP);

(i.e. all vcpu->mutexes share the same name and key because they have a 
single mutex_init-ialization site).  Lockdep is as crude in theory as it 
is effective in practice!
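
To make that concrete, here is a made-up snippet (not code from the
tree): taking two distinct vcpu->mutexes with plain mutex_lock() is
already enough to upset lockdep, precisely because both mutexes belong
to that single class:

static void lock_two_vcpus(struct kvm_vcpu *a, struct kvm_vcpu *b)
{
        mutex_lock(&a->mutex);
        mutex_lock(&b->mutex);  /* same class, subclass 0 -> lockdep
                                   reports possible recursive locking */
        /* ... do something with both vcpus ... */
        mutex_unlock(&b->mutex);
        mutex_unlock(&a->mutex);
}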

> 
>           bool acquired = false;
>           kvm_for_each_vcpu(...) {
>                   if (!acquired) {
>                      if (mutex_lock_killable_nested(&vcpu->mutex, role))
>                          goto out_unlock;
>                      acquired = true;
>                   } else {
>                      if (mutex_lock_killable(&vcpu->mutex))
>                          goto out_unlock;

This will cause a lockdep splat because it uses subclass 0.  All the 
*_nested functions do is allow you to specify a subclass other than zero.
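
As a generic illustration (struct foo and the function are made up,
this is not the SEV code), annotating the inner acquisition with a
non-zero subclass is what lets two locks of the same class be held at
the same time:

static void lock_parent_and_child(struct foo *parent, struct foo *child)
{
        mutex_lock(&parent->lock);                              /* subclass 0 */
        mutex_lock_nested(&child->lock, SINGLE_DEPTH_NESTING);  /* subclass 1 */
        /* ... */
        mutex_unlock(&child->lock);
        mutex_unlock(&parent->lock);
}

Subclasses only scale up to MAX_LOCKDEP_SUBCLASSES held locks of one
class, which is why holding a whole VM's worth of vcpu->mutexes needs
either a different subclass per held lock or the dep_map release
discussed earlier in the thread.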

Paolo

>                   }
>           }
> 
> To unlock:
> 
>           kvm_for_each_vcpu(...) {
>              mutex_unlock(&vcpu->mutex);
>           }
> 
> This way, instead of mock-releasing the lockdep map, we just lock
> the first vcpu with mutex_lock_killable_nested(). I think this
> maintains the property you suggested of coalescing all the mutexes
> for a VM into a single subclass.  Thoughts?



Thread overview: 23+ messages
2022-04-07 19:59 [PATCH v3] KVM: SEV: Mark nested locking of vcpu->lock Peter Gonda
2022-04-07 21:17 ` John Sperbeck
2022-04-08 15:08   ` Peter Gonda
2022-04-20 20:14     ` Peter Gonda
2022-04-21 15:56       ` Paolo Bonzini
2022-04-26 19:06         ` Peter Gonda
2022-04-27 16:04           ` Paolo Bonzini
2022-04-27 20:18             ` Peter Gonda
2022-04-28 21:28               ` Peter Gonda
2022-04-28 23:59                 ` Paolo Bonzini [this message]
2022-04-29 15:35                   ` Peter Gonda
2022-04-29 15:38                     ` Paolo Bonzini
2022-04-29 15:51                       ` Peter Gonda
2022-04-29 15:58                         ` Paolo Bonzini
2022-04-29 17:12                           ` Peter Gonda
2022-04-29 17:21                             ` Paolo Bonzini
2022-04-29 17:27                               ` Peter Gonda
2022-04-29 17:32                                 ` Paolo Bonzini
2022-04-29 17:33                                   ` Peter Gonda
     [not found]                 ` <20220429010312.4013-1-hdanton@sina.com>
2022-04-29  8:48                   ` Paolo Bonzini
     [not found]                   ` <20220429114012.4127-1-hdanton@sina.com>
2022-04-29 13:44                     ` Paolo Bonzini
     [not found]                     ` <20220430015008.4257-1-hdanton@sina.com>
2022-04-30  8:11                       ` Paolo Bonzini
2022-04-30  8:11                       ` Paolo Bonzini
