Date: Thu, 19 Aug 2021 22:58:23 +0000
From: Sean Christopherson
To: Peter Gonda
Cc: Marc Orr, kvm list, Paolo Bonzini, David Rientjes, "Dr. David Alan Gilbert",
    Brijesh Singh, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
Peter Anvin" , linux-kernel@vger.kernel.org Subject: Re: [PATCH 1/2 V4] KVM, SEV: Add support for SEV intra host migration Message-ID: References: <20210819154910.1064090-1-pgonda@google.com> <20210819154910.1064090-2-pgonda@google.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Thu, Aug 19, 2021, Peter Gonda wrote: > > > > > > +static int svm_sev_lock_for_migration(struct kvm *kvm) > > > +{ > > > + struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info; > > > + int ret; > > > + > > > + /* > > > + * Bail if this VM is already involved in a migration to avoid deadlock > > > + * between two VMs trying to migrate to/from each other. > > > + */ > > > + spin_lock(&sev->migration_lock); > > > + if (sev->migration_in_progress) > > > + ret = -EBUSY; > > > + else { > > > + /* > > > + * Otherwise indicate VM is migrating and take the KVM lock. > > > + */ > > > + sev->migration_in_progress = true; > > > + mutex_lock(&kvm->lock); Deadlock aside, mutex_lock() can sleep, which is not allowed while holding a spinlock, i.e. this patch does not work. That's my suggestion did the crazy dance of "acquiring" a flag. What I don't know is why on earth I suggested a global spinlock, a simple atomic should work, e.g. if (atomic_cmpxchg_acquire(&sev->migration_in_progress, 0, 1)) return -EBUSY; mutex_lock(&kvm->lock); and on the backend... mutex_unlock(&kvm->lock); atomic_set_release(&sev->migration_in_progress, 0); > > > + ret = 0; > > > + } > > > + spin_unlock(&sev->migration_lock); > > > + > > > + return ret; > > > +} > > > + > > > +static void svm_unlock_after_migration(struct kvm *kvm) > > > +{ > > > + struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info; > > > + > > > + mutex_unlock(&kvm->lock); > > > + WRITE_ONCE(sev->migration_in_progress, false); > > > +} > > > + > > > > This entire locking scheme seems over-complicated to me. Can we simply > > rely on `migration_lock` and get rid of `migration_in_progress`? I was > > chatting about these patches with Peter, while he worked on this new > > version. But he mentioned that this locking scheme had been suggested > > by Sean in a previous review. Sean: what do you think? My rationale > > was that this is called via a VM-level ioctl. So serializing the > > entire code path on `migration_lock` seems fine. But maybe I'm missing > > something? > > > Marc I think that only having the spin lock could result in > deadlocking. If userspace double migrated 2 VMs, A and B for > discussion, A could grab VM_A.spin_lock then VM_A.kvm_mutex. Meanwhile > B could grab VM_B.spin_lock and VM_B.kvm_mutex. Then A attempts to > grab VM_B.spin_lock and we have a deadlock. If the same happens with > the proposed scheme when A attempts to lock B, VM_B.spin_lock will be > open but the bool will mark the VM under migration so A will unlock > and bail. Sean originally proposed a global spin lock but I thought a > per kvm_sev_info struct would also be safe. Close. The issue is taking kvm->lock from both VM_A and VM_B. If userspace double migrates we'll end up with lock ordering A->B and B-A, so we need a way to guarantee one of those wins. My proposed solution is to use a flag as a sort of one-off "try lock" to detect a mean userspace.