From: Joerg Roedel
To: Peter Gonda
Cc: kvm@vger.kernel.org, Sean Christopherson, Marc Orr, Paolo Bonzini,
	David Rientjes, "Dr. David Alan Gilbert", Brijesh Singh,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/4 V8] KVM: SEV: Add support for SEV intra host migration
Date: Tue, 28 Sep 2021 14:42:19 +0200
References: <20210914164727.3007031-1-pgonda@google.com>
	<20210914164727.3007031-2-pgonda@google.com>
In-Reply-To: <20210914164727.3007031-2-pgonda@google.com>

On Tue, Sep 14, 2021 at 09:47:24AM -0700, Peter Gonda wrote:
> +static int sev_lock_vcpus_for_migration(struct kvm *kvm)
> +{
> +	struct kvm_vcpu *vcpu;
> +	int i, j;
> +
> +	kvm_for_each_vcpu(i, vcpu, kvm) {
> +		if (mutex_lock_killable(&vcpu->mutex))
> +			goto out_unlock;
> +	}
> +
> +	return 0;
> +
> +out_unlock:
> +	kvm_for_each_vcpu(j, vcpu, kvm) {
> +		mutex_unlock(&vcpu->mutex);
> +		if (i == j)
> +			break;

Hmm, doesn't the mutex_unlock() need to happen after the check?
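Otherwise the loop unlocks vcpu i, whose mutex_lock_killable() failed
and whose mutex was therefore never taken. Something like this,
perhaps (just a sketch, not tested; the -EINTR error return is my
assumption about what the rest of the patch uses):

out_unlock:
	kvm_for_each_vcpu(j, vcpu, kvm) {
		/* Stop before vcpu i: its mutex was never acquired. */
		if (i == j)
			break;
		mutex_unlock(&vcpu->mutex);
	}
	return -EINTR;	/* assumed error return */

With the check first, only the vcpus 0..i-1 that were actually locked
get unlocked.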