From mboxrd@z Thu Jan 1 00:00:00 1970
From: Brijesh Singh <brijesh.singh@amd.com>
Cc: Thomas Gleixner, Ingo Molnar, Joerg Roedel, Tom Lendacky,
	"H. Peter Anvin", Ard Biesheuvel, Paolo Bonzini,
	Sean Christopherson, Vitaly Kuznetsov, Jim Mattson,
	Andy Lutomirski, Dave Hansen, Sergio Lopez, Peter Gonda,
	Peter Zijlstra, Srinivas Pandruvada, David Rientjes, Dov Murik,
	Tobin Feldman-Fitzthum, Borislav Petkov, Michael Roth,
	Vlastimil Babka, "Kirill A. Shutemov", Andi Kleen,
	"Dr. David Alan Gilbert", Brijesh Singh
Subject: [PATCH v12 21/46] x86/mm: Validate memory when changing the C-bit
Date: Mon, 7 Mar 2022 15:33:31 -0600
Message-ID: <20220307213356.2797205-22-brijesh.singh@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220307213356.2797205-1-brijesh.singh@amd.com>
References: <20220307213356.2797205-1-brijesh.singh@amd.com>
MIME-Version: 1.0
Content-Type: text/plain

Add the functionality needed to change page state from shared to
private and vice versa, using the Page State Change VMGEXIT as
documented in the GHCB specification.
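
For illustration, a minimal usage sketch (not part of this patch;
alloc_shared_page() is a hypothetical helper): the expected caller
path is set_memory_decrypted()/set_memory_encrypted(), which reach
the amd_enc_status_change_prepare()/amd_enc_status_change_finish()
hooks updated below.

	#include <linux/gfp.h>
	#include <linux/mm.h>
	#include <linux/set_memory.h>

	/* Convert one page from private to shared for the hypervisor. */
	static void *alloc_shared_page(void)
	{
		struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);

		if (!page)
			return NULL;

		/*
		 * On an SNP guest this invalidates the page (PVALIDATE)
		 * and issues the Page State Change VMGEXIT before the
		 * C-bit is cleared in the page tables.
		 */
		if (set_memory_decrypted((unsigned long)page_address(page), 1)) {
			/*
			 * The page state is now uncertain; leaking the
			 * page is safer than freeing a possibly-shared one.
			 */
			return NULL;
		}

		return page_address(page);
	}

Converting back via set_memory_encrypted() performs the steps in the
reverse order: the page state is changed to private first, and only
then is the memory validated again.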
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/include/asm/sev-common.h |  22 ++++
 arch/x86/include/asm/sev.h        |   4 +
 arch/x86/include/uapi/asm/svm.h   |   2 +
 arch/x86/kernel/sev.c             | 168 ++++++++++++++++++++++++++++++
 arch/x86/mm/mem_encrypt_amd.c     |  13 +++
 5 files changed, 209 insertions(+)

diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
index f077a6c95e67..1aa72b5c2490 100644
--- a/arch/x86/include/asm/sev-common.h
+++ b/arch/x86/include/asm/sev-common.h
@@ -105,6 +105,28 @@ enum psc_op {
 
 #define GHCB_HV_FT_SNP			BIT_ULL(0)
 
+/* SNP Page State Change NAE event */
+#define VMGEXIT_PSC_MAX_ENTRY		253
+
+struct psc_hdr {
+	u16 cur_entry;
+	u16 end_entry;
+	u32 reserved;
+} __packed;
+
+struct psc_entry {
+	u64	cur_page	: 12,
+		gfn		: 40,
+		operation	: 4,
+		pagesize	: 1,
+		reserved	: 7;
+} __packed;
+
+struct snp_psc_desc {
+	struct psc_hdr hdr;
+	struct psc_entry entries[VMGEXIT_PSC_MAX_ENTRY];
+} __packed;
+
 #define GHCB_MSR_TERM_REQ		0x100
 #define GHCB_MSR_TERM_REASON_SET_POS	12
 #define GHCB_MSR_TERM_REASON_SET_MASK	0xf
diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index f65d257e3d4a..feeb93e6ec97 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -128,6 +128,8 @@ void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long padd
 void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr, unsigned int npages);
 void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op);
+void snp_set_memory_shared(unsigned long vaddr, unsigned int npages);
+void snp_set_memory_private(unsigned long vaddr, unsigned int npages);
 #else
 static inline void sev_es_ist_enter(struct pt_regs *regs) { }
 static inline void sev_es_ist_exit(void) { }
@@ -142,6 +144,8 @@ early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr, unsigned
 static inline void __init
 early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr, unsigned int npages) { }
 static inline void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op) { }
+static inline void snp_set_memory_shared(unsigned long vaddr, unsigned int npages) { }
+static inline void snp_set_memory_private(unsigned long vaddr, unsigned int npages) { }
 #endif
 
 #endif
diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
index b0ad00f4c1e1..0dcdb6e0c913 100644
--- a/arch/x86/include/uapi/asm/svm.h
+++ b/arch/x86/include/uapi/asm/svm.h
@@ -108,6 +108,7 @@
 #define SVM_VMGEXIT_AP_JUMP_TABLE		0x80000005
 #define SVM_VMGEXIT_SET_AP_JUMP_TABLE		0
 #define SVM_VMGEXIT_GET_AP_JUMP_TABLE		1
+#define SVM_VMGEXIT_PSC				0x80000010
 #define SVM_VMGEXIT_HV_FEATURES			0x8000fffd
 #define SVM_VMGEXIT_UNSUPPORTED_EVENT		0x8000ffff
 
@@ -219,6 +220,7 @@
 	{ SVM_VMGEXIT_NMI_COMPLETE,	"vmgexit_nmi_complete" }, \
 	{ SVM_VMGEXIT_AP_HLT_LOOP,	"vmgexit_ap_hlt_loop" }, \
 	{ SVM_VMGEXIT_AP_JUMP_TABLE,	"vmgexit_ap_jump_table" }, \
+	{ SVM_VMGEXIT_PSC,		"vmgexit_page_state_change" }, \
 	{ SVM_VMGEXIT_HV_FEATURES,	"vmgexit_hypervisor_feature" }, \
 	{ SVM_EXIT_ERR,         "invalid_guest_state" }
 
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 1e8dc71e7ba6..4315be1602d1 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -655,6 +655,174 @@ void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op
 		WARN(1, "invalid memory op %d\n", op);
 }
 
+static int vmgexit_psc(struct snp_psc_desc *desc)
+{
+	int cur_entry, end_entry, ret = 0;
+	struct snp_psc_desc *data;
+	struct ghcb_state state;
+	struct es_em_ctxt ctxt;
+	unsigned long flags;
+	struct ghcb *ghcb;
+
+	/*
+	 * __sev_get_ghcb() needs to run with IRQs disabled because it is using
+	 * a per-CPU GHCB.
+	 */
+	local_irq_save(flags);
+
+	ghcb = __sev_get_ghcb(&state);
+	if (!ghcb) {
+		ret = 1;
+		goto out_unlock;
+	}
+
+	/* Copy the input desc into the GHCB shared buffer */
+	data = (struct snp_psc_desc *)ghcb->shared_buffer;
+	memcpy(ghcb->shared_buffer, desc, min_t(int, GHCB_SHARED_BUF_SIZE, sizeof(*desc)));
+
+	/*
+	 * As per the GHCB specification, the hypervisor can resume the guest
+	 * before processing all the entries. Check whether all the entries
+	 * are processed. If not, then keep retrying. Note, the hypervisor
+	 * will update the data memory directly to indicate the status, so
+	 * reference data->hdr everywhere.
+	 *
+	 * The strategy here is to wait for the hypervisor to change the page
+	 * state in the RMP table before the guest accesses the memory pages.
+	 * If the page state change was not successful, then a later memory
+	 * access will result in a crash.
+	 */
+	cur_entry = data->hdr.cur_entry;
+	end_entry = data->hdr.end_entry;
+
+	while (data->hdr.cur_entry <= data->hdr.end_entry) {
+		ghcb_set_sw_scratch(ghcb, (u64)__pa(data));
+
+		/* The hypervisor advances data->hdr.cur_entry as it processes entries. */
+		ret = sev_es_ghcb_hv_call(ghcb, true, &ctxt, SVM_VMGEXIT_PSC, 0, 0);
+
+		/*
+		 * The Page State Change VMGEXIT can pass an error code through
+		 * exit_info_2.
+		 */
+		if (WARN(ret || ghcb->save.sw_exit_info_2,
+			 "SNP: PSC failed ret=%d exit_info_2=%llx\n",
+			 ret, ghcb->save.sw_exit_info_2)) {
+			ret = 1;
+			goto out;
+		}
+
+		/* Verify that the reserved bit is not set */
+		if (WARN(data->hdr.reserved, "Reserved bit is set in the PSC header\n")) {
+			ret = 1;
+			goto out;
+		}
+
+		/*
+		 * Sanity check that entry processing is not going backwards.
+		 * This can happen only if the hypervisor is misbehaving.
+		 */
+		if (WARN(data->hdr.end_entry > end_entry || cur_entry > data->hdr.cur_entry,
+"SNP: PSC processing going backward, end_entry %d (got %d) cur_entry %d (got %d)\n",
+			 end_entry, data->hdr.end_entry, cur_entry, data->hdr.cur_entry)) {
+			ret = 1;
+			goto out;
+		}
+	}
+
+out:
+	__sev_put_ghcb(&state);
+
+out_unlock:
+	local_irq_restore(flags);
+
+	return ret;
+}
+
+static void __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
+			      unsigned long vaddr_end, int op)
+{
+	struct psc_hdr *hdr;
+	struct psc_entry *e;
+	unsigned long pfn;
+	int i;
+
+	hdr = &data->hdr;
+	e = data->entries;
+
+	memset(data, 0, sizeof(*data));
+	i = 0;
+
+	while (vaddr < vaddr_end) {
+		if (is_vmalloc_addr((void *)vaddr))
+			pfn = vmalloc_to_pfn((void *)vaddr);
+		else
+			pfn = __pa(vaddr) >> PAGE_SHIFT;
+
+		e->gfn = pfn;
+		e->operation = op;
+		hdr->end_entry = i;
+
+		/*
+		 * The current SNP implementation doesn't keep track of the RMP
+		 * page size, so use 4K for simplicity.
+		 */
+		e->pagesize = RMP_PG_SIZE_4K;
+
+		vaddr = vaddr + PAGE_SIZE;
+		e++;
+		i++;
+	}
+
+	if (vmgexit_psc(data))
+		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);
+}
+
+static void set_pages_state(unsigned long vaddr, unsigned int npages, int op)
+{
+	unsigned long vaddr_end, next_vaddr;
+	struct snp_psc_desc *desc;
+
+	desc = kmalloc(sizeof(*desc), GFP_KERNEL_ACCOUNT);
+	if (!desc)
+		panic("SNP: failed to allocate memory for PSC descriptor\n");
+
+	vaddr = vaddr & PAGE_MASK;
+	vaddr_end = vaddr + (npages << PAGE_SHIFT);
+
+	while (vaddr < vaddr_end) {
+		/* Calculate the last vaddr that fits in one struct snp_psc_desc. */
+		next_vaddr = min_t(unsigned long, vaddr_end,
+				   (VMGEXIT_PSC_MAX_ENTRY * PAGE_SIZE) + vaddr);
+
+		__set_pages_state(desc, vaddr, next_vaddr, op);
+
+		vaddr = next_vaddr;
+	}
+
+	kfree(desc);
+}
+
+void snp_set_memory_shared(unsigned long vaddr, unsigned int npages)
+{
+	if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
+		return;
+
+	pvalidate_pages(vaddr, npages, false);
+
+	set_pages_state(vaddr, npages, SNP_PAGE_STATE_SHARED);
+}
+
+void snp_set_memory_private(unsigned long vaddr, unsigned int npages)
+{
+	if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
+		return;
+
+	set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE);
+
+	pvalidate_pages(vaddr, npages, true);
+}
+
 int sev_es_setup_ap_jump_table(struct real_mode_header *rmh)
 {
 	u16 startup_cs, startup_ip;
diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
index 8539dd6f24ff..d3c88d9ef8d6 100644
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -316,11 +316,24 @@ static void enc_dec_hypercall(unsigned long vaddr, int npages, bool enc)
 
 static void amd_enc_status_change_prepare(unsigned long vaddr, int npages, bool enc)
 {
+	/*
+	 * To maintain the security guarantees of SEV-SNP guests, make sure
+	 * to invalidate the memory before the encryption attribute is cleared.
+	 */
+	if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP) && !enc)
+		snp_set_memory_shared(vaddr, npages);
 }
 
 /* Return true unconditionally: return value doesn't matter for the SEV side */
 static bool amd_enc_status_change_finish(unsigned long vaddr, int npages, bool enc)
 {
+	/*
+	 * After memory is mapped encrypted in the page table, validate it
+	 * so that it is consistent with the page table updates.
+	 */
+	if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP) && enc)
+		snp_set_memory_private(vaddr, npages);
+
 	if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
 		enc_dec_hypercall(vaddr, npages, enc);
 
-- 
2.25.1