Date: Fri, 29 Jul 2022 18:15:16 +0000
From: Sean Christopherson
To: Kai Huang
Cc: Paolo Bonzini, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Michael Roth, Tom Lendacky
Subject: Re: [PATCH 3/4] KVM: SVM: Adjust MMIO masks (for caching) before doing SEV(-ES) setup
Message-ID:
References: <20220728221759.3492539-1-seanjc@google.com>
 <20220728221759.3492539-4-seanjc@google.com>
 <9bdfbad2dc9f193fb57f7ee113db7f1c2b96973c.camel@intel.com>
In-Reply-To: <9bdfbad2dc9f193fb57f7ee113db7f1c2b96973c.camel@intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Jul 29, 2022, Kai Huang wrote:
> On Thu, 2022-07-28 at 22:17 +0000, Sean Christopherson wrote:
> > Adjust KVM's MMIO masks to account for the C-bit location prior to doing
> > SEV(-ES) setup.  A future patch will consume enable_mmio_caching during
> > SEV setup as SEV-ES _requires_ MMIO caching, i.e. KVM needs to disallow
> > SEV-ES if MMIO caching is disabled.
> >
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Sean Christopherson
> > ---
> >  arch/x86/kvm/svm/svm.c | 9 ++++++---
> >  1 file changed, 6 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> > index aef63aae922d..62e89db83bc1 100644
> > --- a/arch/x86/kvm/svm/svm.c
> > +++ b/arch/x86/kvm/svm/svm.c
> > @@ -5034,13 +5034,16 @@ static __init int svm_hardware_setup(void)
> >  	/* Setup shadow_me_value and shadow_me_mask */
> >  	kvm_mmu_set_me_spte_mask(sme_me_mask, sme_me_mask);
> >
> > -	/* Note, SEV setup consumes npt_enabled. */
> > +	svm_adjust_mmio_mask();
> > +
> > +	/*
> > +	 * Note, SEV setup consumes npt_enabled and enable_mmio_caching (which
> > +	 * may be modified by svm_adjust_mmio_mask()).
> > +	 */
> >  	sev_hardware_setup();
>
> If I am not seeing mistakenly, the code in latest queue branch doesn't consume
> enable_mmio_caching.  It is only added in your later patch.
>
> So perhaps adjust the comment or merge patches together?

Oooh, I see what you're saying.  I split the patches so that if this movement
turns out to break something then bisection will point directly here, but
that's a pretty weak argument since both patches are tiny.

And taking patch 4 without patch 3, e.g. in the unlikely event this movement
needs to be reverted, is probably worse than not having patch 4 at all, i.e.
having somewhat obvious breakage is better.

So yeah, I'll squash this with patch 4.  Thanks!