From: Peter Gonda <pgonda@google.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: John Sperbeck <jsperbeck@google.com>,
	kvm list <kvm@vger.kernel.org>,
	David Rientjes <rientjes@google.com>,
	Sean Christopherson <seanjc@google.com>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v3] KVM: SEV: Mark nested locking of vcpu->lock
Date: Wed, 27 Apr 2022 14:18:09 -0600	[thread overview]
Message-ID: <CAMkAt6oL5qi7z-eh4z7z8WBhpc=Ow6WtcJA5bDi6-aGMnz135A@mail.gmail.com> (raw)
In-Reply-To: <4c0edc90-36a1-4f4c-1923-4b20e7bdbb4c@redhat.com>

On Wed, Apr 27, 2022 at 10:04 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 4/26/22 21:06, Peter Gonda wrote:
> > On Thu, Apr 21, 2022 at 9:56 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
> >>
> >> On 4/20/22 22:14, Peter Gonda wrote:
> >>>>>> svm_vm_migrate_from() uses sev_lock_vcpus_for_migration() to lock all
> >>>>>> source and target vcpu->locks. Mark the nested subclasses to avoid false
> >>>>>> positives from lockdep.
> >>>> Nope. Good catch, I didn't realize there was a limit of 8 subclasses:
> >>> Does anyone have thoughts on how we can resolve this vCPU locking with
> >>> the 8 subclass max?
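For anyone skimming the thread: the limit here is MAX_LOCKDEP_SUBCLASSES,
which lockdep defines as 8 subclasses per lock class. A rough sketch of why
the v3 scheme runs into it, assuming two migration roles as in the patch:

        /*
         * Assumed arithmetic from the v3 patch, with
         * SEV_NR_MIGRATION_ROLES == 2 (one role for the source VM, one
         * for the target):
         *
         *     subclass = i * SEV_NR_MIGRATION_ROLES + role;
         *
         * vCPU 3 already uses subclasses 6 and 7, so vCPU 4 would need
         * subclass 8, which is >= MAX_LOCKDEP_SUBCLASSES and triggers a
         * lockdep warning for any VM with more than four vCPUs.
         */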
> >>
> >> The documentation does not have anything.  Maybe you can call
> >> mutex_release manually (and mutex_acquire before unlocking).
> >>
> >> Paolo
> >
> > Hmm, this seems to be working, thanks Paolo. To lock I have been using:
> >
> > ...
> >                    if (mutex_lock_killable_nested(
> >                                &vcpu->mutex, i * SEV_NR_MIGRATION_ROLES + role))
> >                            goto out_unlock;
> >                    mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
> > ...
> >
> > To unlock:
> > ...
> >                    mutex_acquire(&vcpu->mutex.dep_map, 0, 0, _THIS_IP_);
> >                    mutex_unlock(&vcpu->mutex);
> > ...
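
Pulling those two snippets into the helper, my first attempt looks roughly
like the untested sketch below (names from the v3 patch, and assuming the
helper keeps returning -EINTR when the lock is interrupted, as the existing
code does):

        static int sev_lock_vcpus_for_migration(struct kvm *kvm)
        {
                struct kvm_vcpu *vcpu;
                unsigned long i, j;

                kvm_for_each_vcpu(i, vcpu, kvm) {
                        if (mutex_lock_killable(&vcpu->mutex))
                                goto out_unlock;
                        /*
                         * Dropping the annotation right after taking the
                         * mutex removes it from lockdep's held-lock list,
                         * so lockdep no longer tracks (or checks) these
                         * vCPU mutexes at all.
                         */
                        mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
                }
                return 0;

        out_unlock:
                kvm_for_each_vcpu(j, vcpu, kvm) {
                        if (j == i)
                                break;
                        /* Re-add the annotation so the unlock stays balanced. */
                        mutex_acquire(&vcpu->mutex.dep_map, 0, 0, _THIS_IP_);
                        mutex_unlock(&vcpu->mutex);
                }
                return -EINTR;
        }

That is what prompts the question below.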
> >
> > If I understand correctly, we are fully disabling lockdep for these vCPU
> > mutexes by doing this. If that is the case, should I just remove all the
> > '_nested' usage, switch to mutex_lock_killable(), and drop the per-vCPU
> > subclass?
>
> Yes, though you could also do:
>
>         bool acquired = false;
>         kvm_for_each_vcpu(...) {
>                 if (acquired)
>                         mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
>                 if (mutex_lock_killable_nested(&vcpu->mutex, role))
>                         goto out_unlock;
>                 acquired = true;
>                 ...
>
> and to unlock:
>
>         bool acquired = true;
>         kvm_for_each_vcpu(...) {
>                 if (!acquired)
>                         mutex_acquire(&vcpu->mutex.dep_map, role, 0, _THIS_IP_);
>                 mutex_unlock(&vcpu->mutex);
>                 acquired = false;
>         }
>
> where role is either 0 or SINGLE_DEPTH_NESTING and is passed to
> sev_{,un}lock_vcpus_for_migration.
>
> That coalesces all the mutexes for a VM into a single subclass, essentially.

Ah, that's a great idea to keep lockdep working. I'll try that out, thanks
again Paolo.
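
For the record, here is roughly what I plan to try, as an untested sketch of
the two helpers (role names as in the v3 patch). Two details differ from the
snippet above: the dep_map release happens after the mutex is actually taken,
since lockdep warns when a lock that is not held gets released, and vCPUs
after the first are locked under a spare subclass so lockdep does not report
recursive locking against the one annotation we keep:

        /* Role names as in the v3 patch; SINGLE_DEPTH_NESTING == 1. */
        enum sev_migration_role {
                SEV_MIGRATION_SOURCE = 0,
                SEV_MIGRATION_TARGET,
                SEV_NR_MIGRATION_ROLES,
        };

        static int sev_lock_vcpus_for_migration(struct kvm *kvm,
                                                enum sev_migration_role role)
        {
                struct kvm_vcpu *vcpu;
                unsigned long i, j;
                bool first = true;

                kvm_for_each_vcpu(i, vcpu, kvm) {
                        /* Every vCPU mutex of this VM uses the same subclass. */
                        if (mutex_lock_killable_nested(&vcpu->mutex, role))
                                goto out_unlock;

                        if (first) {
                                /*
                                 * The first vCPU mutex keeps its annotation;
                                 * move later vCPUs to a spare subclass so
                                 * lockdep does not flag them as recursive
                                 * locking against it.
                                 */
                                role = SEV_NR_MIGRATION_ROLES;
                                first = false;
                        } else {
                                mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
                        }
                }
                return 0;

        out_unlock:
                first = true;
                kvm_for_each_vcpu(j, vcpu, kvm) {
                        if (j == i)
                                break;
                        if (first)
                                first = false;
                        else
                                mutex_acquire(&vcpu->mutex.dep_map, role, 0,
                                              _THIS_IP_);
                        mutex_unlock(&vcpu->mutex);
                }
                return -EINTR;
        }

        static void sev_unlock_vcpus_for_migration(struct kvm *kvm,
                                                   enum sev_migration_role role)
        {
                struct kvm_vcpu *vcpu;
                unsigned long i;
                bool first = true;

                kvm_for_each_vcpu(i, vcpu, kvm) {
                        if (first)
                                first = false;
                        else
                                /* Re-add the annotation before unlocking. */
                                mutex_acquire(&vcpu->mutex.dep_map, role, 0,
                                              _THIS_IP_);
                        mutex_unlock(&vcpu->mutex);
                }
        }

Source and target would pass different roles (0 and SINGLE_DEPTH_NESTING), so
lockdep still checks the ordering of the two VMs' vcpu->mutex through one
representative mutex per VM; the annotation calls would presumably want a
CONFIG_PROVE_LOCKING guard in the real patch.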

>
> Paolo
>

Thread overview: 23+ messages
2022-04-07 19:59 [PATCH v3] KVM: SEV: Mark nested locking of vcpu->lock Peter Gonda
2022-04-07 21:17 ` John Sperbeck
2022-04-08 15:08   ` Peter Gonda
2022-04-20 20:14     ` Peter Gonda
2022-04-21 15:56       ` Paolo Bonzini
2022-04-26 19:06         ` Peter Gonda
2022-04-27 16:04           ` Paolo Bonzini
2022-04-27 20:18             ` Peter Gonda [this message]
2022-04-28 21:28               ` Peter Gonda
2022-04-28 23:59                 ` Paolo Bonzini
2022-04-29 15:35                   ` Peter Gonda
2022-04-29 15:38                     ` Paolo Bonzini
2022-04-29 15:51                       ` Peter Gonda
2022-04-29 15:58                         ` Paolo Bonzini
2022-04-29 17:12                           ` Peter Gonda
2022-04-29 17:21                             ` Paolo Bonzini
2022-04-29 17:27                               ` Peter Gonda
2022-04-29 17:32                                 ` Paolo Bonzini
2022-04-29 17:33                                   ` Peter Gonda
     [not found]                 ` <20220429010312.4013-1-hdanton@sina.com>
2022-04-29  8:48                   ` Paolo Bonzini
     [not found]                   ` <20220429114012.4127-1-hdanton@sina.com>
2022-04-29 13:44                     ` Paolo Bonzini
     [not found]                     ` <20220430015008.4257-1-hdanton@sina.com>
2022-04-30  8:11                       ` Paolo Bonzini
2022-04-30  8:11                       ` Paolo Bonzini
