From: Anup Patel
To: Palmer Dabbelt, Palmer Dabbelt, Paul Walmsley, Albert Ou, Paolo Bonzini
Cc: Alexander Graf, Atish Patra, Alistair Francis, Damien Le Moal,
    Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH v19 06/17] RISC-V: KVM: Implement VCPU world-switch
Date: Tue, 27 Jul 2021 11:24:39 +0530
Message-Id: <20210727055450.2742868-7-anup.patel@wdc.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210727055450.2742868-1-anup.patel@wdc.com>
References: <20210727055450.2742868-1-anup.patel@wdc.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

This patch implements the VCPU world-switch for KVM RISC-V. The
world-switch routine (__kvm_riscv_switch_to()) mostly switches the
general purpose registers and the SSTATUS, STVEC, SSCRATCH and HSTATUS
CSRs. Other CSRs are context-switched through the vcpu_load() and
vcpu_put() interface, i.e. in kvm_arch_vcpu_load() and
kvm_arch_vcpu_put() respectively.
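
For reviewers, the expected call pattern is sketched below. This is
illustrative only and not part of this patch: the run loop that
actually invokes the world-switch is added later in this series, and
the helper name used here (kvm_riscv_vcpu_run_once) is made up for the
example.

        #include <linux/context_tracking.h>
        #include <linux/kvm_host.h>

        /*
         * Illustrative sketch (hypothetical helper): invoke the
         * low-level world-switch with host interrupts disabled.
         */
        static void kvm_riscv_vcpu_run_once(struct kvm_vcpu *vcpu)
        {
                local_irq_disable();
                guest_enter_irqoff();

                /*
                 * Saves the host context, installs the guest context and
                 * runs the guest until the next trap lands on
                 * __kvm_switch_return, which restores the host context
                 * and returns here.
                 */
                __kvm_riscv_switch_to(&vcpu->arch);

                guest_exit();
                local_irq_enable();
        }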
Signed-off-by: Anup Patel
Acked-by: Paolo Bonzini
Reviewed-by: Paolo Bonzini
Reviewed-by: Alexander Graf
---
 arch/riscv/include/asm/kvm_host.h |  10 +-
 arch/riscv/kernel/asm-offsets.c   |  78 ++++++++++++
 arch/riscv/kvm/Makefile           |   2 +-
 arch/riscv/kvm/vcpu.c             |  30 ++++-
 arch/riscv/kvm/vcpu_switch.S      | 203 ++++++++++++++++++++++++++++++
 5 files changed, 319 insertions(+), 4 deletions(-)
 create mode 100644 arch/riscv/kvm/vcpu_switch.S

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 87871b11d8ec..846f74e587e0 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -115,6 +115,14 @@ struct kvm_vcpu_arch {
 	/* ISA feature bits (similar to MISA) */
 	unsigned long isa;
 
+	/* SSCRATCH, STVEC, and SCOUNTEREN of Host */
+	unsigned long host_sscratch;
+	unsigned long host_stvec;
+	unsigned long host_scounteren;
+
+	/* CPU context of Host */
+	struct kvm_cpu_context host_context;
+
 	/* CPU context of Guest VCPU */
 	struct kvm_cpu_context guest_context;
 
@@ -163,7 +171,7 @@ int kvm_riscv_vcpu_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run);
 int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 			struct kvm_cpu_trap *trap);
 
-static inline void __kvm_riscv_switch_to(struct kvm_vcpu_arch *vcpu_arch) {}
+void __kvm_riscv_switch_to(struct kvm_vcpu_arch *vcpu_arch);
 
 int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq);
 int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq);
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
index 90f8ce64fa6f..2fac70303341 100644
--- a/arch/riscv/kernel/asm-offsets.c
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -7,7 +7,9 @@
 #define GENERATING_ASM_OFFSETS
 
 #include
+#include
 #include
+#include
 #include
 #include
 
@@ -111,6 +113,82 @@ void asm_offsets(void)
 	OFFSET(PT_BADADDR, pt_regs, badaddr);
 	OFFSET(PT_CAUSE, pt_regs, cause);
 
+	OFFSET(KVM_ARCH_GUEST_ZERO, kvm_vcpu_arch, guest_context.zero);
+	OFFSET(KVM_ARCH_GUEST_RA, kvm_vcpu_arch, guest_context.ra);
+	OFFSET(KVM_ARCH_GUEST_SP, kvm_vcpu_arch, guest_context.sp);
+	OFFSET(KVM_ARCH_GUEST_GP, kvm_vcpu_arch, guest_context.gp);
+	OFFSET(KVM_ARCH_GUEST_TP, kvm_vcpu_arch, guest_context.tp);
+	OFFSET(KVM_ARCH_GUEST_T0, kvm_vcpu_arch, guest_context.t0);
+	OFFSET(KVM_ARCH_GUEST_T1, kvm_vcpu_arch, guest_context.t1);
+	OFFSET(KVM_ARCH_GUEST_T2, kvm_vcpu_arch, guest_context.t2);
+	OFFSET(KVM_ARCH_GUEST_S0, kvm_vcpu_arch, guest_context.s0);
+	OFFSET(KVM_ARCH_GUEST_S1, kvm_vcpu_arch, guest_context.s1);
+	OFFSET(KVM_ARCH_GUEST_A0, kvm_vcpu_arch, guest_context.a0);
+	OFFSET(KVM_ARCH_GUEST_A1, kvm_vcpu_arch, guest_context.a1);
+	OFFSET(KVM_ARCH_GUEST_A2, kvm_vcpu_arch, guest_context.a2);
+	OFFSET(KVM_ARCH_GUEST_A3, kvm_vcpu_arch, guest_context.a3);
+	OFFSET(KVM_ARCH_GUEST_A4, kvm_vcpu_arch, guest_context.a4);
+	OFFSET(KVM_ARCH_GUEST_A5, kvm_vcpu_arch, guest_context.a5);
+	OFFSET(KVM_ARCH_GUEST_A6, kvm_vcpu_arch, guest_context.a6);
+	OFFSET(KVM_ARCH_GUEST_A7, kvm_vcpu_arch, guest_context.a7);
+	OFFSET(KVM_ARCH_GUEST_S2, kvm_vcpu_arch, guest_context.s2);
+	OFFSET(KVM_ARCH_GUEST_S3, kvm_vcpu_arch, guest_context.s3);
+	OFFSET(KVM_ARCH_GUEST_S4, kvm_vcpu_arch, guest_context.s4);
+	OFFSET(KVM_ARCH_GUEST_S5, kvm_vcpu_arch, guest_context.s5);
+	OFFSET(KVM_ARCH_GUEST_S6, kvm_vcpu_arch, guest_context.s6);
+	OFFSET(KVM_ARCH_GUEST_S7, kvm_vcpu_arch, guest_context.s7);
+	OFFSET(KVM_ARCH_GUEST_S8, kvm_vcpu_arch, guest_context.s8);
+	OFFSET(KVM_ARCH_GUEST_S9, kvm_vcpu_arch, guest_context.s9);
+	OFFSET(KVM_ARCH_GUEST_S10, kvm_vcpu_arch, guest_context.s10);
+	OFFSET(KVM_ARCH_GUEST_S11, kvm_vcpu_arch, guest_context.s11);
+	OFFSET(KVM_ARCH_GUEST_T3, kvm_vcpu_arch, guest_context.t3);
+	OFFSET(KVM_ARCH_GUEST_T4, kvm_vcpu_arch, guest_context.t4);
+	OFFSET(KVM_ARCH_GUEST_T5, kvm_vcpu_arch, guest_context.t5);
+	OFFSET(KVM_ARCH_GUEST_T6, kvm_vcpu_arch, guest_context.t6);
+	OFFSET(KVM_ARCH_GUEST_SEPC, kvm_vcpu_arch, guest_context.sepc);
+	OFFSET(KVM_ARCH_GUEST_SSTATUS, kvm_vcpu_arch, guest_context.sstatus);
+	OFFSET(KVM_ARCH_GUEST_HSTATUS, kvm_vcpu_arch, guest_context.hstatus);
+	OFFSET(KVM_ARCH_GUEST_SCOUNTEREN, kvm_vcpu_arch, guest_csr.scounteren);
+
+	OFFSET(KVM_ARCH_HOST_ZERO, kvm_vcpu_arch, host_context.zero);
+	OFFSET(KVM_ARCH_HOST_RA, kvm_vcpu_arch, host_context.ra);
+	OFFSET(KVM_ARCH_HOST_SP, kvm_vcpu_arch, host_context.sp);
+	OFFSET(KVM_ARCH_HOST_GP, kvm_vcpu_arch, host_context.gp);
+	OFFSET(KVM_ARCH_HOST_TP, kvm_vcpu_arch, host_context.tp);
+	OFFSET(KVM_ARCH_HOST_T0, kvm_vcpu_arch, host_context.t0);
+	OFFSET(KVM_ARCH_HOST_T1, kvm_vcpu_arch, host_context.t1);
+	OFFSET(KVM_ARCH_HOST_T2, kvm_vcpu_arch, host_context.t2);
+	OFFSET(KVM_ARCH_HOST_S0, kvm_vcpu_arch, host_context.s0);
+	OFFSET(KVM_ARCH_HOST_S1, kvm_vcpu_arch, host_context.s1);
+	OFFSET(KVM_ARCH_HOST_A0, kvm_vcpu_arch, host_context.a0);
+	OFFSET(KVM_ARCH_HOST_A1, kvm_vcpu_arch, host_context.a1);
+	OFFSET(KVM_ARCH_HOST_A2, kvm_vcpu_arch, host_context.a2);
+	OFFSET(KVM_ARCH_HOST_A3, kvm_vcpu_arch, host_context.a3);
+	OFFSET(KVM_ARCH_HOST_A4, kvm_vcpu_arch, host_context.a4);
+	OFFSET(KVM_ARCH_HOST_A5, kvm_vcpu_arch, host_context.a5);
+	OFFSET(KVM_ARCH_HOST_A6, kvm_vcpu_arch, host_context.a6);
+	OFFSET(KVM_ARCH_HOST_A7, kvm_vcpu_arch, host_context.a7);
+	OFFSET(KVM_ARCH_HOST_S2, kvm_vcpu_arch, host_context.s2);
+	OFFSET(KVM_ARCH_HOST_S3, kvm_vcpu_arch, host_context.s3);
+	OFFSET(KVM_ARCH_HOST_S4, kvm_vcpu_arch, host_context.s4);
+	OFFSET(KVM_ARCH_HOST_S5, kvm_vcpu_arch, host_context.s5);
+	OFFSET(KVM_ARCH_HOST_S6, kvm_vcpu_arch, host_context.s6);
+	OFFSET(KVM_ARCH_HOST_S7, kvm_vcpu_arch, host_context.s7);
+	OFFSET(KVM_ARCH_HOST_S8, kvm_vcpu_arch, host_context.s8);
+	OFFSET(KVM_ARCH_HOST_S9, kvm_vcpu_arch, host_context.s9);
+	OFFSET(KVM_ARCH_HOST_S10, kvm_vcpu_arch, host_context.s10);
+	OFFSET(KVM_ARCH_HOST_S11, kvm_vcpu_arch, host_context.s11);
+	OFFSET(KVM_ARCH_HOST_T3, kvm_vcpu_arch, host_context.t3);
+	OFFSET(KVM_ARCH_HOST_T4, kvm_vcpu_arch, host_context.t4);
+	OFFSET(KVM_ARCH_HOST_T5, kvm_vcpu_arch, host_context.t5);
+	OFFSET(KVM_ARCH_HOST_T6, kvm_vcpu_arch, host_context.t6);
+	OFFSET(KVM_ARCH_HOST_SEPC, kvm_vcpu_arch, host_context.sepc);
+	OFFSET(KVM_ARCH_HOST_SSTATUS, kvm_vcpu_arch, host_context.sstatus);
+	OFFSET(KVM_ARCH_HOST_HSTATUS, kvm_vcpu_arch, host_context.hstatus);
+	OFFSET(KVM_ARCH_HOST_SSCRATCH, kvm_vcpu_arch, host_sscratch);
+	OFFSET(KVM_ARCH_HOST_STVEC, kvm_vcpu_arch, host_stvec);
+	OFFSET(KVM_ARCH_HOST_SCOUNTEREN, kvm_vcpu_arch, host_scounteren);
+
 	/*
 	 * THREAD_{F,X}* might be larger than a S-type offset can handle, but
 	 * these are used in performance-sensitive assembly so we can't resort
diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
index 4732094391bf..9e8133c898dc 100644
--- a/arch/riscv/kvm/Makefile
+++ b/arch/riscv/kvm/Makefile
@@ -10,4 +10,4 @@ KVM := ../../../virt/kvm
 obj-$(CONFIG_KVM) += kvm.o
 
 kvm-y += $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/binary_stats.o \
-	main.o vm.o mmu.o vcpu.o vcpu_exit.o
+	main.o vm.o mmu.o vcpu.o vcpu_exit.o vcpu_switch.o
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index a67cd9caa911..91135e12caf6 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -570,14 +570,40 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
 
 void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
-	/* TODO: */
+	struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
+
+	csr_write(CSR_VSSTATUS, csr->vsstatus);
+	csr_write(CSR_HIE, csr->hie);
+	csr_write(CSR_VSTVEC, csr->vstvec);
+	csr_write(CSR_VSSCRATCH, csr->vsscratch);
+	csr_write(CSR_VSEPC, csr->vsepc);
+	csr_write(CSR_VSCAUSE, csr->vscause);
+	csr_write(CSR_VSTVAL, csr->vstval);
+	csr_write(CSR_HVIP, csr->hvip);
+	csr_write(CSR_VSATP, csr->vsatp);
 
 	kvm_riscv_stage2_update_hgatp(vcpu);
+
+	vcpu->cpu = cpu;
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 {
-	/* TODO: */
+	struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
+
+	vcpu->cpu = -1;
+
+	csr_write(CSR_HGATP, 0);
+
+	csr->vsstatus = csr_read(CSR_VSSTATUS);
+	csr->hie = csr_read(CSR_HIE);
+	csr->vstvec = csr_read(CSR_VSTVEC);
+	csr->vsscratch = csr_read(CSR_VSSCRATCH);
+	csr->vsepc = csr_read(CSR_VSEPC);
+	csr->vscause = csr_read(CSR_VSCAUSE);
+	csr->vstval = csr_read(CSR_VSTVAL);
+	csr->hvip = csr_read(CSR_HVIP);
+	csr->vsatp = csr_read(CSR_VSATP);
 }
 
 static void kvm_riscv_check_vcpu_requests(struct kvm_vcpu *vcpu)
diff --git a/arch/riscv/kvm/vcpu_switch.S b/arch/riscv/kvm/vcpu_switch.S
new file mode 100644
index 000000000000..5174b025ff4e
--- /dev/null
+++ b/arch/riscv/kvm/vcpu_switch.S
@@ -0,0 +1,203 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Western Digital Corporation or its affiliates.
+ *
+ * Authors:
+ *	Anup Patel
+ */
+
+#include
+#include
+#include
+#include
+
+	.text
+	.altmacro
+	.option norelax
+
+ENTRY(__kvm_riscv_switch_to)
+	/* Save Host GPRs (except A0 and T0-T6) */
+	REG_S	ra, (KVM_ARCH_HOST_RA)(a0)
+	REG_S	sp, (KVM_ARCH_HOST_SP)(a0)
+	REG_S	gp, (KVM_ARCH_HOST_GP)(a0)
+	REG_S	tp, (KVM_ARCH_HOST_TP)(a0)
+	REG_S	s0, (KVM_ARCH_HOST_S0)(a0)
+	REG_S	s1, (KVM_ARCH_HOST_S1)(a0)
+	REG_S	a1, (KVM_ARCH_HOST_A1)(a0)
+	REG_S	a2, (KVM_ARCH_HOST_A2)(a0)
+	REG_S	a3, (KVM_ARCH_HOST_A3)(a0)
+	REG_S	a4, (KVM_ARCH_HOST_A4)(a0)
+	REG_S	a5, (KVM_ARCH_HOST_A5)(a0)
+	REG_S	a6, (KVM_ARCH_HOST_A6)(a0)
+	REG_S	a7, (KVM_ARCH_HOST_A7)(a0)
+	REG_S	s2, (KVM_ARCH_HOST_S2)(a0)
+	REG_S	s3, (KVM_ARCH_HOST_S3)(a0)
+	REG_S	s4, (KVM_ARCH_HOST_S4)(a0)
+	REG_S	s5, (KVM_ARCH_HOST_S5)(a0)
+	REG_S	s6, (KVM_ARCH_HOST_S6)(a0)
+	REG_S	s7, (KVM_ARCH_HOST_S7)(a0)
+	REG_S	s8, (KVM_ARCH_HOST_S8)(a0)
+	REG_S	s9, (KVM_ARCH_HOST_S9)(a0)
+	REG_S	s10, (KVM_ARCH_HOST_S10)(a0)
+	REG_S	s11, (KVM_ARCH_HOST_S11)(a0)
+
+	/* Save Host and Restore Guest SSTATUS */
+	REG_L	t0, (KVM_ARCH_GUEST_SSTATUS)(a0)
+	csrrw	t0, CSR_SSTATUS, t0
+	REG_S	t0, (KVM_ARCH_HOST_SSTATUS)(a0)
+
+	/* Save Host and Restore Guest HSTATUS */
+	REG_L	t1, (KVM_ARCH_GUEST_HSTATUS)(a0)
+	csrrw	t1, CSR_HSTATUS, t1
+	REG_S	t1, (KVM_ARCH_HOST_HSTATUS)(a0)
+
+	/* Save Host and Restore Guest SCOUNTEREN */
+	REG_L	t2, (KVM_ARCH_GUEST_SCOUNTEREN)(a0)
+	csrrw	t2, CSR_SCOUNTEREN, t2
+	REG_S	t2, (KVM_ARCH_HOST_SCOUNTEREN)(a0)
+
+	/* Save Host SSCRATCH and change it to struct kvm_vcpu_arch pointer */
+	csrrw	t3, CSR_SSCRATCH, a0
+	REG_S	t3, (KVM_ARCH_HOST_SSCRATCH)(a0)
+
+	/* Save Host STVEC and change it to return path */
+	la	t4, __kvm_switch_return
+	csrrw	t4, CSR_STVEC, t4
+	REG_S	t4, (KVM_ARCH_HOST_STVEC)(a0)
+
+	/* Restore Guest SEPC */
+	REG_L	t0, (KVM_ARCH_GUEST_SEPC)(a0)
+	csrw	CSR_SEPC, t0
+
+	/* Restore Guest GPRs (except A0) */
+	REG_L	ra, (KVM_ARCH_GUEST_RA)(a0)
+	REG_L	sp, (KVM_ARCH_GUEST_SP)(a0)
+	REG_L	gp, (KVM_ARCH_GUEST_GP)(a0)
+	REG_L	tp, (KVM_ARCH_GUEST_TP)(a0)
+	REG_L	t0, (KVM_ARCH_GUEST_T0)(a0)
+	REG_L	t1, (KVM_ARCH_GUEST_T1)(a0)
+	REG_L	t2, (KVM_ARCH_GUEST_T2)(a0)
+	REG_L	s0, (KVM_ARCH_GUEST_S0)(a0)
+	REG_L	s1, (KVM_ARCH_GUEST_S1)(a0)
+	REG_L	a1, (KVM_ARCH_GUEST_A1)(a0)
+	REG_L	a2, (KVM_ARCH_GUEST_A2)(a0)
+	REG_L	a3, (KVM_ARCH_GUEST_A3)(a0)
+	REG_L	a4, (KVM_ARCH_GUEST_A4)(a0)
+	REG_L	a5, (KVM_ARCH_GUEST_A5)(a0)
+	REG_L	a6, (KVM_ARCH_GUEST_A6)(a0)
+	REG_L	a7, (KVM_ARCH_GUEST_A7)(a0)
+	REG_L	s2, (KVM_ARCH_GUEST_S2)(a0)
+	REG_L	s3, (KVM_ARCH_GUEST_S3)(a0)
+	REG_L	s4, (KVM_ARCH_GUEST_S4)(a0)
+	REG_L	s5, (KVM_ARCH_GUEST_S5)(a0)
+	REG_L	s6, (KVM_ARCH_GUEST_S6)(a0)
+	REG_L	s7, (KVM_ARCH_GUEST_S7)(a0)
+	REG_L	s8, (KVM_ARCH_GUEST_S8)(a0)
+	REG_L	s9, (KVM_ARCH_GUEST_S9)(a0)
+	REG_L	s10, (KVM_ARCH_GUEST_S10)(a0)
+	REG_L	s11, (KVM_ARCH_GUEST_S11)(a0)
+	REG_L	t3, (KVM_ARCH_GUEST_T3)(a0)
+	REG_L	t4, (KVM_ARCH_GUEST_T4)(a0)
+	REG_L	t5, (KVM_ARCH_GUEST_T5)(a0)
+	REG_L	t6, (KVM_ARCH_GUEST_T6)(a0)
+
+	/* Restore Guest A0 */
+	REG_L	a0, (KVM_ARCH_GUEST_A0)(a0)
+
+	/* Resume Guest */
+	sret
+
+	/* Back to Host */
+	.align 2
+__kvm_switch_return:
+	/* Swap Guest A0 with SSCRATCH */
+	csrrw	a0, CSR_SSCRATCH, a0
+
+	/* Save Guest GPRs (except A0) */
+	REG_S	ra, (KVM_ARCH_GUEST_RA)(a0)
+	REG_S	sp, (KVM_ARCH_GUEST_SP)(a0)
+	REG_S	gp, (KVM_ARCH_GUEST_GP)(a0)
+	REG_S	tp, (KVM_ARCH_GUEST_TP)(a0)
+	REG_S	t0, (KVM_ARCH_GUEST_T0)(a0)
+	REG_S	t1, (KVM_ARCH_GUEST_T1)(a0)
+	REG_S	t2, (KVM_ARCH_GUEST_T2)(a0)
+	REG_S	s0, (KVM_ARCH_GUEST_S0)(a0)
+	REG_S	s1, (KVM_ARCH_GUEST_S1)(a0)
+	REG_S	a1, (KVM_ARCH_GUEST_A1)(a0)
+	REG_S	a2, (KVM_ARCH_GUEST_A2)(a0)
+	REG_S	a3, (KVM_ARCH_GUEST_A3)(a0)
+	REG_S	a4, (KVM_ARCH_GUEST_A4)(a0)
+	REG_S	a5, (KVM_ARCH_GUEST_A5)(a0)
+	REG_S	a6, (KVM_ARCH_GUEST_A6)(a0)
+	REG_S	a7, (KVM_ARCH_GUEST_A7)(a0)
+	REG_S	s2, (KVM_ARCH_GUEST_S2)(a0)
+	REG_S	s3, (KVM_ARCH_GUEST_S3)(a0)
+	REG_S	s4, (KVM_ARCH_GUEST_S4)(a0)
+	REG_S	s5, (KVM_ARCH_GUEST_S5)(a0)
+	REG_S	s6, (KVM_ARCH_GUEST_S6)(a0)
+	REG_S	s7, (KVM_ARCH_GUEST_S7)(a0)
+	REG_S	s8, (KVM_ARCH_GUEST_S8)(a0)
+	REG_S	s9, (KVM_ARCH_GUEST_S9)(a0)
+	REG_S	s10, (KVM_ARCH_GUEST_S10)(a0)
+	REG_S	s11, (KVM_ARCH_GUEST_S11)(a0)
+	REG_S	t3, (KVM_ARCH_GUEST_T3)(a0)
+	REG_S	t4, (KVM_ARCH_GUEST_T4)(a0)
+	REG_S	t5, (KVM_ARCH_GUEST_T5)(a0)
+	REG_S	t6, (KVM_ARCH_GUEST_T6)(a0)
+
+	/* Save Guest SEPC */
+	csrr	t0, CSR_SEPC
+	REG_S	t0, (KVM_ARCH_GUEST_SEPC)(a0)
+
+	/* Restore Host STVEC */
+	REG_L	t1, (KVM_ARCH_HOST_STVEC)(a0)
+	csrw	CSR_STVEC, t1
+
+	/* Save Guest A0 and Restore Host SSCRATCH */
+	REG_L	t2, (KVM_ARCH_HOST_SSCRATCH)(a0)
+	csrrw	t2, CSR_SSCRATCH, t2
+	REG_S	t2, (KVM_ARCH_GUEST_A0)(a0)
+
+	/* Save Guest and Restore Host SCOUNTEREN */
+	REG_L	t3, (KVM_ARCH_HOST_SCOUNTEREN)(a0)
+	csrrw	t3, CSR_SCOUNTEREN, t3
+	REG_S	t3, (KVM_ARCH_GUEST_SCOUNTEREN)(a0)
+
+	/* Save Guest and Restore Host HSTATUS */
+	REG_L	t4, (KVM_ARCH_HOST_HSTATUS)(a0)
+	csrrw	t4, CSR_HSTATUS, t4
+	REG_S	t4, (KVM_ARCH_GUEST_HSTATUS)(a0)
+
+	/* Save Guest and Restore Host SSTATUS */
+	REG_L	t5, (KVM_ARCH_HOST_SSTATUS)(a0)
+	csrrw	t5, CSR_SSTATUS, t5
+	REG_S	t5, (KVM_ARCH_GUEST_SSTATUS)(a0)
+
+	/* Restore Host GPRs (except A0 and T0-T6) */
+	REG_L	ra, (KVM_ARCH_HOST_RA)(a0)
+	REG_L	sp, (KVM_ARCH_HOST_SP)(a0)
+	REG_L	gp, (KVM_ARCH_HOST_GP)(a0)
+	REG_L	tp, (KVM_ARCH_HOST_TP)(a0)
+	REG_L	s0, (KVM_ARCH_HOST_S0)(a0)
+	REG_L	s1, (KVM_ARCH_HOST_S1)(a0)
+	REG_L	a1, (KVM_ARCH_HOST_A1)(a0)
+	REG_L	a2, (KVM_ARCH_HOST_A2)(a0)
+	REG_L	a3, (KVM_ARCH_HOST_A3)(a0)
+	REG_L	a4, (KVM_ARCH_HOST_A4)(a0)
+	REG_L	a5, (KVM_ARCH_HOST_A5)(a0)
+	REG_L	a6, (KVM_ARCH_HOST_A6)(a0)
+	REG_L	a7, (KVM_ARCH_HOST_A7)(a0)
+	REG_L	s2, (KVM_ARCH_HOST_S2)(a0)
+	REG_L	s3, (KVM_ARCH_HOST_S3)(a0)
+	REG_L	s4, (KVM_ARCH_HOST_S4)(a0)
+	REG_L	s5, (KVM_ARCH_HOST_S5)(a0)
+	REG_L	s6, (KVM_ARCH_HOST_S6)(a0)
+	REG_L	s7, (KVM_ARCH_HOST_S7)(a0)
+	REG_L	s8, (KVM_ARCH_HOST_S8)(a0)
+	REG_L	s9, (KVM_ARCH_HOST_S9)(a0)
+	REG_L	s10, (KVM_ARCH_HOST_S10)(a0)
+	REG_L	s11, (KVM_ARCH_HOST_S11)(a0)
+
+	/* Return to C code */
+	ret
+ENDPROC(__kvm_riscv_switch_to)
-- 
2.25.1