From mboxrd@z Thu Jan 1 00:00:00 1970
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, Rick Edgecombe
Subject: [PATCH v7 033/102] KVM: x86/mmu: Add address conversion functions for TDX shared bits
Date: Mon, 27 Jun 2022 14:53:25 -0700
Message-Id: <69f4b4942d5f17fad40a8d08556488b8e4b7954d.1656366338.git.isaku.yamahata@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: 
References: 
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Rick Edgecombe

TDX repurposes one GPA bit (bit 51 or bit 47, depending on configuration) to
indicate whether the GPA is private (if cleared) or shared with the VMM (if
set).  If the GPA.shared bit is set, the GPA is translated by the existing
conventional EPT pointed to by the EPTP.  If the GPA.shared bit is cleared,
the GPA is translated by the Secure EPT (S-EPT) that the TDX module manages.
The VMM has to issue SEAMCALLs to the TDX module to operate on the S-EPT,
e.g. populating/zapping a guest page or a shadow page via TDH.PAGE.{ADD,
REMOVE} for guest pages and TDH.PAGE.SEPT.{ADD, REMOVE} for the S-EPT.

Several hooks need to be added to KVM MMU to support TDX.  Add a function to
check whether KVM MMU is running for TDX, and several functions for address
conversion between private GPAs and shared GPAs.
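For illustration only (not part of the change itself), the shared-bit
convention described above can be sketched as a tiny standalone program; the
shared-bit position (47) and the example GPA are assumptions, not values taken
from this patch:

  /* Standalone sketch of the TDX shared-bit convention (illustration only). */
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          int shared_bit = 47;                      /* assumed GPAW configuration */
          uint64_t shared_mask = 1ULL << shared_bit;
          uint64_t gpa = 0x12345000ULL;             /* assumed example GPA */

          /* A cleared shared bit means the GPA is private (S-EPT). */
          bool is_private = !(gpa & shared_mask);
          /* Setting the bit yields the shared alias (conventional EPT). */
          uint64_t shared_gpa = gpa | shared_mask;
          /* Clearing it converts back to the private alias. */
          uint64_t private_gpa = shared_gpa & ~shared_mask;

          printf("private=%d shared=0x%llx private=0x%llx\n", is_private,
                 (unsigned long long)shared_gpa,
                 (unsigned long long)private_gpa);
          return 0;
  }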
Signed-off-by: Isaku Yamahata
---
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/mmu.h              | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 34 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e5d4e5b60fdc..2c47aab72a1b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1339,7 +1339,9 @@ struct kvm_arch {
 	 */
 	u32 max_vcpu_ids;
 
+#ifdef CONFIG_KVM_MMU_PRIVATE
 	gfn_t gfn_shared_mask;
+#endif
 };
 
 struct kvm_vm_stat {
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index f8192864b496..ccf0ba7a6387 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -286,4 +286,36 @@ static inline gpa_t kvm_translate_gpa(struct kvm_vcpu *vcpu,
 		return gpa;
 	return translate_nested_gpa(vcpu, gpa, access, exception);
 }
+
+static inline gfn_t kvm_gfn_shared_mask(const struct kvm *kvm)
+{
+#ifdef CONFIG_KVM_MMU_PRIVATE
+	return kvm->arch.gfn_shared_mask;
+#else
+	return 0;
+#endif
+}
+
+static inline gfn_t kvm_gfn_shared(const struct kvm *kvm, gfn_t gfn)
+{
+	return gfn | kvm_gfn_shared_mask(kvm);
+}
+
+static inline gfn_t kvm_gfn_private(const struct kvm *kvm, gfn_t gfn)
+{
+	return gfn & ~kvm_gfn_shared_mask(kvm);
+}
+
+static inline gpa_t kvm_gpa_private(const struct kvm *kvm, gpa_t gpa)
+{
+	return gpa & ~gfn_to_gpa(kvm_gfn_shared_mask(kvm));
+}
+
+static inline bool kvm_is_private_gpa(const struct kvm *kvm, gpa_t gpa)
+{
+	gfn_t mask = kvm_gfn_shared_mask(kvm);
+
+	return mask && !(gpa_to_gfn(gpa) & mask);
+}
+
 #endif
-- 
2.25.1
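For context only (not part of this patch), a hedged sketch of how a TDX VM
might initialize kvm->arch.gfn_shared_mask from the guest physical address
width; the helper name and the gpaw parameter are assumptions, not taken from
this series:

  /* Illustration only: derive the shared-bit GFN mask for a TD guest.
   * A 52-bit GPAW would place the shared bit at GPA bit 51; a 48-bit GPAW
   * would place it at bit 47, matching the commit message above.
   */
  static void example_set_gfn_shared_mask(struct kvm *kvm, int gpaw)
  {
  #ifdef CONFIG_KVM_MMU_PRIVATE
  	kvm->arch.gfn_shared_mask = gpa_to_gfn(1ULL << (gpaw - 1));
  #endif
  }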