From: Peter Gonda
Date: Fri, 29 Apr 2022 11:27:51 -0600
Subject: Re: [PATCH v3] KVM: SEV: Mark nested locking of vcpu->lock
To: Paolo Bonzini
Cc: John Sperbeck, kvm list, David Rientjes, Sean Christopherson, LKML

On Fri, Apr 29, 2022 at 11:21 AM Paolo Bonzini wrote:
>
> On Fri, Apr 29, 2022 at 7:12 PM Peter Gonda wrote:
> > Sounds good. Instead of doing this prev_vcpu solution we could just
> > keep the 1st vcpu for source and target. I think this could work since
> > the vcpu->mutex.dep_maps do not all point to the same string.
> >
> > Lock:
> > bool acquired = false;
> > kvm_for_each_vcpu(...) {
> >         if (mutex_lock_killable_nested(&vcpu->mutex, role))
> >                 goto out_unlock;
> >         acquired = true;
> >         if (acquired)
> >                 mutex_release(&vcpu->mutex, role);
> > }
>
> Almost:
>
>         bool first = true;
>         kvm_for_each_vcpu(...) {
>                 if (mutex_lock_killable_nested(&vcpu->mutex, role))
>                         goto out_unlock;
>                 if (first)
>                         ++role, first = false;
>                 else
>                         mutex_release(&vcpu->mutex, role);
>         }
>
> and to unlock:
>
>         bool first = true;
>         kvm_for_each_vcpu(...) {
>                 if (first)
>                         first = false;
>                 else
>                         mutex_acquire(&vcpu->mutex, role);
>                 mutex_unlock(&vcpu->mutex);
>         }
>
> because you cannot use the first vCPU's role again when locking.

Ah yes, I missed that. I would suggest `role = SEV_NR_MIGRATION_ROLES` or
something else instead of `++role` to avoid leaking this implementation
detail outside of the function signature / enum.

>
> Paolo
>
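
For context, below is a minimal sketch of how the two SEV migration helpers
could look with Paolo's lock/unlock scheme and the `SEV_NR_MIGRATION_ROLES`
suggestion folded together. The helper names (sev_lock_vcpus_for_migration,
sev_unlock_vcpus_for_migration), the enum layout, and the error-path details
are assumptions inferred from this thread, not a statement of what was
ultimately merged; the snippet also assumes the usual KVM context
(struct kvm, kvm_for_each_vcpu) rather than being standalone. Only
mutex_lock_killable_nested(), mutex_release(), mutex_acquire() and
kvm_for_each_vcpu() are existing kernel primitives.

        /* Sketch only: names and layout are assumptions based on this thread. */
        enum sev_migration_role {
                SEV_MIGRATION_SOURCE = 0,
                SEV_MIGRATION_TARGET,
                SEV_NR_MIGRATION_ROLES,
        };

        static int sev_lock_vcpus_for_migration(struct kvm *kvm,
                                                enum sev_migration_role role)
        {
                struct kvm_vcpu *vcpu;
                unsigned long i, j;
                bool first = true;

                kvm_for_each_vcpu(i, vcpu, kvm) {
                        if (mutex_lock_killable_nested(&vcpu->mutex, role))
                                goto out_unlock;

                        if (first) {
                                /*
                                 * The first vCPU keeps the source/target role;
                                 * every later vCPU is released from lockdep's
                                 * point of view (the mutex itself stays held),
                                 * so lockdep only ever sees one vcpu->mutex
                                 * per VM.
                                 */
                                first = false;
                                role = SEV_NR_MIGRATION_ROLES;
                        } else {
                                mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
                        }
                }

                return 0;

        out_unlock:
                first = true;
                kvm_for_each_vcpu(j, vcpu, kvm) {
                        if (j == i)
                                break;

                        /* Re-annotate before unlocking, except for the first vCPU. */
                        if (first)
                                first = false;
                        else
                                mutex_acquire(&vcpu->mutex.dep_map, role, 0, _THIS_IP_);

                        mutex_unlock(&vcpu->mutex);
                }
                return -EINTR;
        }

        static void sev_unlock_vcpus_for_migration(struct kvm *kvm)
        {
                struct kvm_vcpu *vcpu;
                unsigned long i;
                bool first = true;

                kvm_for_each_vcpu(i, vcpu, kvm) {
                        /* Hand the lockdep annotation back before the real unlock. */
                        if (first)
                                first = false;
                        else
                                mutex_acquire(&vcpu->mutex.dep_map,
                                              SEV_NR_MIGRATION_ROLES, 0, _THIS_IP_);

                        mutex_unlock(&vcpu->mutex);
                }
        }

The point of the mutex_release()/mutex_acquire() dance is that lockdep only
tracks a bounded number of held locks per task (MAX_LOCK_DEPTH) and a small
number of subclasses, so annotating every vCPU mutex individually would not
scale to large guests; all the mutexes are still genuinely taken and dropped.
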