From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kai Huang <kai.huang@intel.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: x86@kernel.org, dave.hansen@intel.com, kirill.shutemov@linux.intel.com,
	peterz@infradead.org, tony.luck@intel.com, tglx@linutronix.de,
	bp@alien8.de, mingo@redhat.com, hpa@zytor.com, seanjc@google.com,
	pbonzini@redhat.com, rafael@kernel.org, david@redhat.com,
	dan.j.williams@intel.com, len.brown@intel.com, ak@linux.intel.com,
	isaku.yamahata@intel.com, ying.huang@intel.com, chao.gao@intel.com,
	sathyanarayanan.kuppuswamy@linux.intel.com, nik.borisov@suse.com,
	bagasdotme@gmail.com, sagis@google.com, imammedo@redhat.com,
	kai.huang@intel.com
Subject: [PATCH v14 05/23] x86/virt/tdx: Handle SEAMCALL no entropy error in common code
Date: Tue, 17 Oct 2023 23:14:29 +1300
X-Mailer: git-send-email 2.41.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Some SEAMCALLs use the RDRAND hardware and can fail for the same reasons
as RDRAND.  Use the kernel RDRAND retry logic for them.

There are three __seamcall*() variants.  Do the SEAMCALL retry in common
code and add a wrapper for each of them.

Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---

v13 -> v14:
 - Use real function sc_retry() instead of using macros. (Dave)
 - Added Kirill's tag.

v12 -> v13:
 - New implementation due to TDCALL assembly series.

---
 arch/x86/include/asm/tdx.h | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index a252328734c7..d624aa25aab0 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -24,6 +24,11 @@
 #define TDX_SEAMCALL_GP			(TDX_SW_ERROR | X86_TRAP_GP)
 #define TDX_SEAMCALL_UD			(TDX_SW_ERROR | X86_TRAP_UD)
 
+/*
+ * TDX module SEAMCALL leaf function error codes
+ */
+#define TDX_RND_NO_ENTROPY	0x8000020300000000ULL
+
 #ifndef __ASSEMBLY__
 
 /*
@@ -82,6 +87,27 @@ u64 __seamcall(u64 fn, struct tdx_module_args *args);
 u64 __seamcall_ret(u64 fn, struct tdx_module_args *args);
 u64 __seamcall_saved_ret(u64 fn, struct tdx_module_args *args);
 
+#include <asm/archrandom.h>
+
+typedef u64 (*sc_func_t)(u64 fn, struct tdx_module_args *args);
+
+static inline u64 sc_retry(sc_func_t func, u64 fn,
+			   struct tdx_module_args *args)
+{
+	int retry = RDRAND_RETRY_LOOPS;
+	u64 ret;
+
+	do {
+		ret = func(fn, args);
+	} while (ret == TDX_RND_NO_ENTROPY && --retry);
+
+	return ret;
+}
+
+#define seamcall(_fn, _args)		sc_retry(__seamcall, (_fn), (_args))
+#define seamcall_ret(_fn, _args)	sc_retry(__seamcall_ret, (_fn), (_args))
+#define seamcall_saved_ret(_fn, _args)	sc_retry(__seamcall_saved_ret, (_fn), (_args))
+
 bool platform_tdx_enabled(void);
 #else
 static inline bool platform_tdx_enabled(void) { return false; }
-- 
2.41.0
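
For illustration only (not part of the patch): a minimal stand-alone C
sketch of the retry pattern that sc_retry() implements.  fake_seamcall()
is a hypothetical stub standing in for __seamcall(), and RDRAND_RETRY_LOOPS
is assumed to be 10, its value in <asm/archrandom.h>.

#include <stdio.h>
#include <stdint.h>

#define TDX_RND_NO_ENTROPY	0x8000020300000000ULL
#define RDRAND_RETRY_LOOPS	10	/* assumed to match <asm/archrandom.h> */

struct tdx_module_args { uint64_t rcx; };

typedef uint64_t (*sc_func_t)(uint64_t fn, struct tdx_module_args *args);

/* Hypothetical stub: report "no entropy" twice, then succeed. */
static uint64_t fake_seamcall(uint64_t fn, struct tdx_module_args *args)
{
	static int calls;

	(void)fn;
	(void)args;
	return ++calls < 3 ? TDX_RND_NO_ENTROPY : 0;
}

/* Same shape as the sc_retry() helper added by the patch. */
static uint64_t sc_retry(sc_func_t func, uint64_t fn,
			 struct tdx_module_args *args)
{
	int retry = RDRAND_RETRY_LOOPS;
	uint64_t ret;

	do {
		ret = func(fn, args);
	} while (ret == TDX_RND_NO_ENTROPY && --retry);

	return ret;
}

int main(void)
{
	struct tdx_module_args args = { 0 };

	/* Retries past the two transient failures and prints "ret = 0x0". */
	printf("ret = 0x%llx\n",
	       (unsigned long long)sc_retry(fake_seamcall, 0, &args));
	return 0;
}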