From mboxrd@z Thu Jan 1 00:00:00 1970
From: Anup Patel <Anup.Patel@wdc.com>
To: Palmer Dabbelt, Paul Walmsley, Paolo Bonzini, Radim K
Cc: Daniel Lezcano, Thomas Gleixner, Atish Patra, Alistair Francis,
    Damien Le Moal, Christoph Hellwig, Anup Patel, kvm@vger.kernel.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v2 15/19] RISC-V: KVM: FP lazy save/restore
Date: Fri, 2 Aug 2019 07:48:29 +0000
Message-ID: <20190802074620.115029-16-anup.patel@wdc.com>
References: <20190802074620.115029-1-anup.patel@wdc.com>
In-Reply-To: <20190802074620.115029-1-anup.patel@wdc.com>
X-Mailer: git-send-email 2.17.1

From: Atish Patra

This patch adds floating point (F and D extension) context save/restore
for guest VCPUs. The FP context is saved and restored lazily, only when
the kernel enters/exits the in-kernel run loop and not during the KVM
world switch. This way FP save/restore has minimal impact on KVM
performance.
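Roughly, the lazy scheme hooks into vcpu load/put as in the condensed
sketch below (the helpers are the ones introduced by this patch; the
SR_FS dirty/off checks inside them are elided here for brevity):

	/* kvm_arch_vcpu_load(): entering the in-kernel run loop */
	kvm_riscv_vcpu_host_fp_save(&vcpu->arch.host_context);
	kvm_riscv_vcpu_guest_fp_restore(&vcpu->arch.guest_context,
					vcpu->arch.isa);

	/* kvm_arch_vcpu_put(): leaving the in-kernel run loop */
	kvm_riscv_vcpu_guest_fp_save(&vcpu->arch.guest_context,
				     vcpu->arch.isa);
	kvm_riscv_vcpu_host_fp_restore(&vcpu->arch.host_context);

The guest FP state is written back only when sstatus.FS is dirty and
restored only when it is not off, so a guest that never touches the FPU
pays almost nothing.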
Signed-off-by: Atish Patra
Signed-off-by: Anup Patel
---
 arch/riscv/include/asm/kvm_host.h |   5 +
 arch/riscv/kernel/asm-offsets.c   |  72 +++++++++++++
 arch/riscv/kvm/vcpu.c             |  81 ++++++++++++++
 arch/riscv/kvm/vcpu_switch.S      | 174 ++++++++++++++++++++++++++++++
 4 files changed, 332 insertions(+)

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index a966e3587362..08990ecd2260 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -112,6 +112,7 @@ struct kvm_cpu_context {
 	unsigned long sepc;
 	unsigned long sstatus;
 	unsigned long hstatus;
+	union __riscv_fp_state fp;
 };
 
 struct kvm_vcpu_csr {
@@ -209,6 +210,10 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 			unsigned long scause, unsigned long stval);
 
 void __kvm_riscv_switch_to(struct kvm_vcpu_arch *vcpu_arch);
+void __kvm_riscv_vcpu_fp_f_save(struct kvm_cpu_context *context);
+void __kvm_riscv_vcpu_fp_f_restore(struct kvm_cpu_context *context);
+void __kvm_riscv_vcpu_fp_d_save(struct kvm_cpu_context *context);
+void __kvm_riscv_vcpu_fp_d_restore(struct kvm_cpu_context *context);
 
 int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq);
 int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq);
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
index 711656710190..9980069a1acf 100644
--- a/arch/riscv/kernel/asm-offsets.c
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -185,6 +185,78 @@ void asm_offsets(void)
 	OFFSET(KVM_ARCH_HOST_SSCRATCH, kvm_vcpu_arch, host_sscratch);
 	OFFSET(KVM_ARCH_HOST_STVEC, kvm_vcpu_arch, host_stvec);
 
+	/* F extension */
+
+	OFFSET(KVM_ARCH_FP_F_F0, kvm_cpu_context, fp.f.f[0]);
+	OFFSET(KVM_ARCH_FP_F_F1, kvm_cpu_context, fp.f.f[1]);
+	OFFSET(KVM_ARCH_FP_F_F2, kvm_cpu_context, fp.f.f[2]);
+	OFFSET(KVM_ARCH_FP_F_F3, kvm_cpu_context, fp.f.f[3]);
+	OFFSET(KVM_ARCH_FP_F_F4, kvm_cpu_context, fp.f.f[4]);
+	OFFSET(KVM_ARCH_FP_F_F5, kvm_cpu_context, fp.f.f[5]);
+	OFFSET(KVM_ARCH_FP_F_F6, kvm_cpu_context, fp.f.f[6]);
+	OFFSET(KVM_ARCH_FP_F_F7, kvm_cpu_context, fp.f.f[7]);
+	OFFSET(KVM_ARCH_FP_F_F8, kvm_cpu_context, fp.f.f[8]);
+	OFFSET(KVM_ARCH_FP_F_F9, kvm_cpu_context, fp.f.f[9]);
+	OFFSET(KVM_ARCH_FP_F_F10, kvm_cpu_context, fp.f.f[10]);
+	OFFSET(KVM_ARCH_FP_F_F11, kvm_cpu_context, fp.f.f[11]);
+	OFFSET(KVM_ARCH_FP_F_F12, kvm_cpu_context, fp.f.f[12]);
+	OFFSET(KVM_ARCH_FP_F_F13, kvm_cpu_context, fp.f.f[13]);
+	OFFSET(KVM_ARCH_FP_F_F14, kvm_cpu_context, fp.f.f[14]);
+	OFFSET(KVM_ARCH_FP_F_F15, kvm_cpu_context, fp.f.f[15]);
+	OFFSET(KVM_ARCH_FP_F_F16, kvm_cpu_context, fp.f.f[16]);
+	OFFSET(KVM_ARCH_FP_F_F17, kvm_cpu_context, fp.f.f[17]);
+	OFFSET(KVM_ARCH_FP_F_F18, kvm_cpu_context, fp.f.f[18]);
+	OFFSET(KVM_ARCH_FP_F_F19, kvm_cpu_context, fp.f.f[19]);
+	OFFSET(KVM_ARCH_FP_F_F20, kvm_cpu_context, fp.f.f[20]);
+	OFFSET(KVM_ARCH_FP_F_F21, kvm_cpu_context, fp.f.f[21]);
+	OFFSET(KVM_ARCH_FP_F_F22, kvm_cpu_context, fp.f.f[22]);
+	OFFSET(KVM_ARCH_FP_F_F23, kvm_cpu_context, fp.f.f[23]);
+	OFFSET(KVM_ARCH_FP_F_F24, kvm_cpu_context, fp.f.f[24]);
+	OFFSET(KVM_ARCH_FP_F_F25, kvm_cpu_context, fp.f.f[25]);
+	OFFSET(KVM_ARCH_FP_F_F26, kvm_cpu_context, fp.f.f[26]);
+	OFFSET(KVM_ARCH_FP_F_F27, kvm_cpu_context, fp.f.f[27]);
+	OFFSET(KVM_ARCH_FP_F_F28, kvm_cpu_context, fp.f.f[28]);
+	OFFSET(KVM_ARCH_FP_F_F29, kvm_cpu_context, fp.f.f[29]);
+	OFFSET(KVM_ARCH_FP_F_F30, kvm_cpu_context, fp.f.f[30]);
+	OFFSET(KVM_ARCH_FP_F_F31, kvm_cpu_context, fp.f.f[31]);
+	OFFSET(KVM_ARCH_FP_F_FCSR, kvm_cpu_context, fp.f.fcsr);
+
+	/* D extension */
+
+	OFFSET(KVM_ARCH_FP_D_F0, kvm_cpu_context, fp.d.f[0]);
+	OFFSET(KVM_ARCH_FP_D_F1, kvm_cpu_context, fp.d.f[1]);
+	OFFSET(KVM_ARCH_FP_D_F2, kvm_cpu_context, fp.d.f[2]);
+	OFFSET(KVM_ARCH_FP_D_F3, kvm_cpu_context, fp.d.f[3]);
+	OFFSET(KVM_ARCH_FP_D_F4, kvm_cpu_context, fp.d.f[4]);
+	OFFSET(KVM_ARCH_FP_D_F5, kvm_cpu_context, fp.d.f[5]);
+	OFFSET(KVM_ARCH_FP_D_F6, kvm_cpu_context, fp.d.f[6]);
+	OFFSET(KVM_ARCH_FP_D_F7, kvm_cpu_context, fp.d.f[7]);
+	OFFSET(KVM_ARCH_FP_D_F8, kvm_cpu_context, fp.d.f[8]);
+	OFFSET(KVM_ARCH_FP_D_F9, kvm_cpu_context, fp.d.f[9]);
+	OFFSET(KVM_ARCH_FP_D_F10, kvm_cpu_context, fp.d.f[10]);
+	OFFSET(KVM_ARCH_FP_D_F11, kvm_cpu_context, fp.d.f[11]);
+	OFFSET(KVM_ARCH_FP_D_F12, kvm_cpu_context, fp.d.f[12]);
+	OFFSET(KVM_ARCH_FP_D_F13, kvm_cpu_context, fp.d.f[13]);
+	OFFSET(KVM_ARCH_FP_D_F14, kvm_cpu_context, fp.d.f[14]);
+	OFFSET(KVM_ARCH_FP_D_F15, kvm_cpu_context, fp.d.f[15]);
+	OFFSET(KVM_ARCH_FP_D_F16, kvm_cpu_context, fp.d.f[16]);
+	OFFSET(KVM_ARCH_FP_D_F17, kvm_cpu_context, fp.d.f[17]);
+	OFFSET(KVM_ARCH_FP_D_F18, kvm_cpu_context, fp.d.f[18]);
+	OFFSET(KVM_ARCH_FP_D_F19, kvm_cpu_context, fp.d.f[19]);
+	OFFSET(KVM_ARCH_FP_D_F20, kvm_cpu_context, fp.d.f[20]);
+	OFFSET(KVM_ARCH_FP_D_F21, kvm_cpu_context, fp.d.f[21]);
+	OFFSET(KVM_ARCH_FP_D_F22, kvm_cpu_context, fp.d.f[22]);
+	OFFSET(KVM_ARCH_FP_D_F23, kvm_cpu_context, fp.d.f[23]);
+	OFFSET(KVM_ARCH_FP_D_F24, kvm_cpu_context, fp.d.f[24]);
+	OFFSET(KVM_ARCH_FP_D_F25, kvm_cpu_context, fp.d.f[25]);
+	OFFSET(KVM_ARCH_FP_D_F26, kvm_cpu_context, fp.d.f[26]);
+	OFFSET(KVM_ARCH_FP_D_F27, kvm_cpu_context, fp.d.f[27]);
+	OFFSET(KVM_ARCH_FP_D_F28, kvm_cpu_context, fp.d.f[28]);
+	OFFSET(KVM_ARCH_FP_D_F29, kvm_cpu_context, fp.d.f[29]);
+	OFFSET(KVM_ARCH_FP_D_F30, kvm_cpu_context, fp.d.f[30]);
+	OFFSET(KVM_ARCH_FP_D_F31, kvm_cpu_context, fp.d.f[31]);
+	OFFSET(KVM_ARCH_FP_D_FCSR, kvm_cpu_context, fp.d.fcsr);
+
 	/*
 	 * THREAD_{F,X}* might be larger than a S-type offset can handle, but
 	 * these are used in performance-sensitive assembly so we can't resort
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 3166e3f147e5..995ee27e9b8a 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -31,6 +31,76 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 	{ NULL }
 };
 
+#ifdef CONFIG_FPU
+static void kvm_riscv_vcpu_fp_reset(struct kvm_vcpu *vcpu)
+{
+	unsigned long isa = vcpu->arch.isa;
+	struct kvm_cpu_context *cntx = &vcpu->arch.guest_context;
+
+	cntx->sstatus &= ~SR_FS;
+	if ((riscv_isa_extension_available(f) && (isa & RISCV_ISA_EXT_f)) ||
+	    (riscv_isa_extension_available(d) && (isa & RISCV_ISA_EXT_d)))
+		cntx->sstatus |= SR_FS_INITIAL;
+	else
+		cntx->sstatus |= SR_FS_OFF;
+}
+
+static void kvm_riscv_vcpu_fp_clean(struct kvm_cpu_context *cntx)
+{
+	cntx->sstatus &= ~SR_FS;
+	cntx->sstatus |= SR_FS_CLEAN;
+}
+
+static void kvm_riscv_vcpu_guest_fp_save(struct kvm_cpu_context *cntx,
+					 unsigned long isa)
+{
+	if ((cntx->sstatus & SR_FS) == SR_FS_DIRTY) {
+		if (isa & RISCV_ISA_EXT_d)
+			__kvm_riscv_vcpu_fp_d_save(cntx);
+		else if (isa & RISCV_ISA_EXT_f)
+			__kvm_riscv_vcpu_fp_f_save(cntx);
+		kvm_riscv_vcpu_fp_clean(cntx);
+	}
+}
+
+static void kvm_riscv_vcpu_guest_fp_restore(struct kvm_cpu_context *cntx,
+					    unsigned long isa)
+{
+	if ((cntx->sstatus & SR_FS) != SR_FS_OFF) {
+		if (isa & RISCV_ISA_EXT_d)
+			__kvm_riscv_vcpu_fp_d_restore(cntx);
+		else if (isa & RISCV_ISA_EXT_f)
+			__kvm_riscv_vcpu_fp_f_restore(cntx);
+		kvm_riscv_vcpu_fp_clean(cntx);
+	}
+}
+
+static void kvm_riscv_vcpu_host_fp_save(struct kvm_cpu_context *cntx)
+{
+	/* No need to check host sstatus as it can be modified outside */
+	if (riscv_isa_extension_available(d))
+		__kvm_riscv_vcpu_fp_d_save(cntx);
+	else if (riscv_isa_extension_available(f))
+		__kvm_riscv_vcpu_fp_f_save(cntx);
+}
+
+static void kvm_riscv_vcpu_host_fp_restore(struct kvm_cpu_context *cntx)
+{
+	if (riscv_isa_extension_available(d))
+		__kvm_riscv_vcpu_fp_d_restore(cntx);
+	else if (riscv_isa_extension_available(f))
+		__kvm_riscv_vcpu_fp_f_restore(cntx);
+}
+#else
+static void kvm_riscv_vcpu_fp_reset(struct kvm_vcpu *vcpu) {}
+static void kvm_riscv_vcpu_guest_fp_save(struct kvm_cpu_context *cntx,
+					 unsigned long isa) {}
+static void kvm_riscv_vcpu_guest_fp_restore(struct kvm_cpu_context *cntx,
+					    unsigned long isa) {}
+static void kvm_riscv_vcpu_host_fp_save(struct kvm_cpu_context *cntx) {}
+static void kvm_riscv_vcpu_host_fp_restore(struct kvm_cpu_context *cntx) {}
+#endif
+
 #define KVM_RISCV_ISA_ALLOWED	(RISCV_ISA_EXT_a | \
 				 RISCV_ISA_EXT_c | \
 				 RISCV_ISA_EXT_d | \
@@ -51,6 +121,8 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu)
 
 	memcpy(cntx, reset_cntx, sizeof(*cntx));
 
+	kvm_riscv_vcpu_fp_reset(vcpu);
+
 	kvm_riscv_vcpu_timer_reset(vcpu);
 
 	WRITE_ONCE(vcpu->arch.irqs_pending, 0);
@@ -219,6 +291,7 @@ static int kvm_riscv_vcpu_set_reg_config(struct kvm_vcpu *vcpu,
 		vcpu->arch.isa = reg_val;
 		vcpu->arch.isa &= riscv_isa;
 		vcpu->arch.isa &= KVM_RISCV_ISA_ALLOWED;
+		kvm_riscv_vcpu_fp_reset(vcpu);
 	} else {
 		return -ENOTSUPP;
 	}
@@ -588,6 +661,10 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 
 	kvm_riscv_stage2_update_hgatp(vcpu);
 
+	kvm_riscv_vcpu_host_fp_save(&vcpu->arch.host_context);
+	kvm_riscv_vcpu_guest_fp_restore(&vcpu->arch.guest_context,
+					vcpu->arch.isa);
+
 	vcpu->cpu = cpu;
 }
 
@@ -597,6 +674,10 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 
 	vcpu->cpu = -1;
 
+	kvm_riscv_vcpu_guest_fp_save(&vcpu->arch.guest_context,
+				     vcpu->arch.isa);
+	kvm_riscv_vcpu_host_fp_restore(&vcpu->arch.host_context);
+
 	csr_write(CSR_HGATP, 0);
 
 	csr->vsstatus = csr_read(CSR_VSSTATUS);
diff --git a/arch/riscv/kvm/vcpu_switch.S b/arch/riscv/kvm/vcpu_switch.S
index e1a17df1b379..d7e237d1004c 100644
--- a/arch/riscv/kvm/vcpu_switch.S
+++ b/arch/riscv/kvm/vcpu_switch.S
@@ -192,3 +192,177 @@ __kvm_switch_return:
 	/* Return to C code */
 	ret
 ENDPROC(__kvm_riscv_switch_to)
+
+#ifdef CONFIG_FPU
+	.align 3
+	.global __kvm_riscv_vcpu_fp_f_save
+__kvm_riscv_vcpu_fp_f_save:
+	csrr t2, CSR_SSTATUS
+	li t1, SR_FS
+	csrs CSR_SSTATUS, t1
+	frcsr t0
+	fsw f0, KVM_ARCH_FP_F_F0(a0)
+	fsw f1, KVM_ARCH_FP_F_F1(a0)
+	fsw f2, KVM_ARCH_FP_F_F2(a0)
+	fsw f3, KVM_ARCH_FP_F_F3(a0)
+	fsw f4, KVM_ARCH_FP_F_F4(a0)
+	fsw f5, KVM_ARCH_FP_F_F5(a0)
+	fsw f6, KVM_ARCH_FP_F_F6(a0)
+	fsw f7, KVM_ARCH_FP_F_F7(a0)
+	fsw f8, KVM_ARCH_FP_F_F8(a0)
+	fsw f9, KVM_ARCH_FP_F_F9(a0)
+	fsw f10, KVM_ARCH_FP_F_F10(a0)
+	fsw f11, KVM_ARCH_FP_F_F11(a0)
+	fsw f12, KVM_ARCH_FP_F_F12(a0)
+	fsw f13, KVM_ARCH_FP_F_F13(a0)
+	fsw f14, KVM_ARCH_FP_F_F14(a0)
+	fsw f15, KVM_ARCH_FP_F_F15(a0)
+	fsw f16, KVM_ARCH_FP_F_F16(a0)
+	fsw f17, KVM_ARCH_FP_F_F17(a0)
+	fsw f18, KVM_ARCH_FP_F_F18(a0)
+	fsw f19, KVM_ARCH_FP_F_F19(a0)
+	fsw f20, KVM_ARCH_FP_F_F20(a0)
+	fsw f21, KVM_ARCH_FP_F_F21(a0)
+	fsw f22, KVM_ARCH_FP_F_F22(a0)
+	fsw f23, KVM_ARCH_FP_F_F23(a0)
+	fsw f24, KVM_ARCH_FP_F_F24(a0)
+	fsw f25, KVM_ARCH_FP_F_F25(a0)
+	fsw f26, KVM_ARCH_FP_F_F26(a0)
+	fsw f27, KVM_ARCH_FP_F_F27(a0)
+	fsw f28, KVM_ARCH_FP_F_F28(a0)
+	fsw f29, KVM_ARCH_FP_F_F29(a0)
+	fsw f30, KVM_ARCH_FP_F_F30(a0)
+	fsw f31, KVM_ARCH_FP_F_F31(a0)
+	sw t0, KVM_ARCH_FP_F_FCSR(a0)
+	csrw CSR_SSTATUS, t2
+	ret
+
+	.align 3
+	.global __kvm_riscv_vcpu_fp_d_save
+__kvm_riscv_vcpu_fp_d_save:
+	csrr t2, CSR_SSTATUS
+	li t1, SR_FS
+	csrs CSR_SSTATUS, t1
+	frcsr t0
+	fsd f0, KVM_ARCH_FP_D_F0(a0)
+	fsd f1, KVM_ARCH_FP_D_F1(a0)
+	fsd f2, KVM_ARCH_FP_D_F2(a0)
+	fsd f3, KVM_ARCH_FP_D_F3(a0)
+	fsd f4, KVM_ARCH_FP_D_F4(a0)
+	fsd f5, KVM_ARCH_FP_D_F5(a0)
+	fsd f6, KVM_ARCH_FP_D_F6(a0)
+	fsd f7, KVM_ARCH_FP_D_F7(a0)
+	fsd f8, KVM_ARCH_FP_D_F8(a0)
+	fsd f9, KVM_ARCH_FP_D_F9(a0)
+	fsd f10, KVM_ARCH_FP_D_F10(a0)
+	fsd f11, KVM_ARCH_FP_D_F11(a0)
+	fsd f12, KVM_ARCH_FP_D_F12(a0)
+	fsd f13, KVM_ARCH_FP_D_F13(a0)
+	fsd f14, KVM_ARCH_FP_D_F14(a0)
+	fsd f15, KVM_ARCH_FP_D_F15(a0)
+	fsd f16, KVM_ARCH_FP_D_F16(a0)
+	fsd f17, KVM_ARCH_FP_D_F17(a0)
+	fsd f18, KVM_ARCH_FP_D_F18(a0)
+	fsd f19, KVM_ARCH_FP_D_F19(a0)
+	fsd f20, KVM_ARCH_FP_D_F20(a0)
+	fsd f21, KVM_ARCH_FP_D_F21(a0)
+	fsd f22, KVM_ARCH_FP_D_F22(a0)
+	fsd f23, KVM_ARCH_FP_D_F23(a0)
+	fsd f24, KVM_ARCH_FP_D_F24(a0)
+	fsd f25, KVM_ARCH_FP_D_F25(a0)
+	fsd f26, KVM_ARCH_FP_D_F26(a0)
+	fsd f27, KVM_ARCH_FP_D_F27(a0)
+	fsd f28, KVM_ARCH_FP_D_F28(a0)
+	fsd f29, KVM_ARCH_FP_D_F29(a0)
+	fsd f30, KVM_ARCH_FP_D_F30(a0)
+	fsd f31, KVM_ARCH_FP_D_F31(a0)
+	sw t0, KVM_ARCH_FP_D_FCSR(a0)
+	csrw CSR_SSTATUS, t2
+	ret
+
+	.align 3
+	.global __kvm_riscv_vcpu_fp_f_restore
+__kvm_riscv_vcpu_fp_f_restore:
+	csrr t2, CSR_SSTATUS
+	li t1, SR_FS
+	lw t0, KVM_ARCH_FP_F_FCSR(a0)
+	csrs CSR_SSTATUS, t1
+	flw f0, KVM_ARCH_FP_F_F0(a0)
+	flw f1, KVM_ARCH_FP_F_F1(a0)
+	flw f2, KVM_ARCH_FP_F_F2(a0)
+	flw f3, KVM_ARCH_FP_F_F3(a0)
+	flw f4, KVM_ARCH_FP_F_F4(a0)
+	flw f5, KVM_ARCH_FP_F_F5(a0)
+	flw f6, KVM_ARCH_FP_F_F6(a0)
+	flw f7, KVM_ARCH_FP_F_F7(a0)
+	flw f8, KVM_ARCH_FP_F_F8(a0)
+	flw f9, KVM_ARCH_FP_F_F9(a0)
+	flw f10, KVM_ARCH_FP_F_F10(a0)
+	flw f11, KVM_ARCH_FP_F_F11(a0)
+	flw f12, KVM_ARCH_FP_F_F12(a0)
+	flw f13, KVM_ARCH_FP_F_F13(a0)
+	flw f14, KVM_ARCH_FP_F_F14(a0)
+	flw f15, KVM_ARCH_FP_F_F15(a0)
+	flw f16, KVM_ARCH_FP_F_F16(a0)
+	flw f17, KVM_ARCH_FP_F_F17(a0)
+	flw f18, KVM_ARCH_FP_F_F18(a0)
+	flw f19, KVM_ARCH_FP_F_F19(a0)
+	flw f20, KVM_ARCH_FP_F_F20(a0)
+	flw f21, KVM_ARCH_FP_F_F21(a0)
+	flw f22, KVM_ARCH_FP_F_F22(a0)
+	flw f23, KVM_ARCH_FP_F_F23(a0)
+	flw f24, KVM_ARCH_FP_F_F24(a0)
+	flw f25, KVM_ARCH_FP_F_F25(a0)
+	flw f26, KVM_ARCH_FP_F_F26(a0)
+	flw f27, KVM_ARCH_FP_F_F27(a0)
+	flw f28, KVM_ARCH_FP_F_F28(a0)
+	flw f29, KVM_ARCH_FP_F_F29(a0)
+	flw f30, KVM_ARCH_FP_F_F30(a0)
+	flw f31, KVM_ARCH_FP_F_F31(a0)
+	fscsr t0
+	csrw CSR_SSTATUS, t2
+	ret
+
+	.align 3
+	.global __kvm_riscv_vcpu_fp_d_restore
+__kvm_riscv_vcpu_fp_d_restore:
+	csrr t2, CSR_SSTATUS
+	li t1, SR_FS
+	lw t0, KVM_ARCH_FP_D_FCSR(a0)
+	csrs CSR_SSTATUS, t1
+	fld f0, KVM_ARCH_FP_D_F0(a0)
+	fld f1, KVM_ARCH_FP_D_F1(a0)
+	fld f2, KVM_ARCH_FP_D_F2(a0)
+	fld f3, KVM_ARCH_FP_D_F3(a0)
+	fld f4, KVM_ARCH_FP_D_F4(a0)
+	fld f5, KVM_ARCH_FP_D_F5(a0)
+	fld f6, KVM_ARCH_FP_D_F6(a0)
+	fld f7, KVM_ARCH_FP_D_F7(a0)
+	fld f8, KVM_ARCH_FP_D_F8(a0)
+	fld f9, KVM_ARCH_FP_D_F9(a0)
+	fld f10, KVM_ARCH_FP_D_F10(a0)
+	fld f11, KVM_ARCH_FP_D_F11(a0)
+	fld f12, KVM_ARCH_FP_D_F12(a0)
+	fld f13, KVM_ARCH_FP_D_F13(a0)
+	fld f14, KVM_ARCH_FP_D_F14(a0)
+	fld f15, KVM_ARCH_FP_D_F15(a0)
+	fld f16, KVM_ARCH_FP_D_F16(a0)
+	fld f17, KVM_ARCH_FP_D_F17(a0)
+	fld f18, KVM_ARCH_FP_D_F18(a0)
+	fld f19, KVM_ARCH_FP_D_F19(a0)
+	fld f20, KVM_ARCH_FP_D_F20(a0)
+	fld f21, KVM_ARCH_FP_D_F21(a0)
+	fld f22, KVM_ARCH_FP_D_F22(a0)
+	fld f23, KVM_ARCH_FP_D_F23(a0)
+	fld f24, KVM_ARCH_FP_D_F24(a0)
+	fld f25, KVM_ARCH_FP_D_F25(a0)
+	fld f26, KVM_ARCH_FP_D_F26(a0)
+	fld f27, KVM_ARCH_FP_D_F27(a0)
+	fld f28, KVM_ARCH_FP_D_F28(a0)
+	fld f29, KVM_ARCH_FP_D_F29(a0)
+	fld f30, KVM_ARCH_FP_D_F30(a0)
+	fld f31, KVM_ARCH_FP_D_F31(a0)
+	fscsr t0
+	csrw CSR_SSTATUS, t2
+	ret
+#endif
-- 
2.17.1