From mboxrd@z Thu Jan 1 00:00:00 1970
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley, Paolo Bonzini, Radim K
Cc: Daniel Lezcano, Thomas Gleixner, Alexander Graf, Atish Patra,
	Alistair Francis, Damien Le Moal, Christoph Hellwig, Anup Patel,
	kvm@vger.kernel.org, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH v7 16/21] RISC-V: KVM: FP lazy save/restore
Date: Wed, 4 Sep 2019 16:15:48 +0000
Message-ID: <20190904161245.111924-18-anup.patel@wdc.com>
References: <20190904161245.111924-1-anup.patel@wdc.com>
In-Reply-To: <20190904161245.111924-1-anup.patel@wdc.com>

From: Atish Patra

This patch adds floating point (F and D extension) context save/restore
for guest VCPUs. The FP context is saved and restored lazily, only when
the kernel enters/exits the in-kernel run loop and not during the KVM
world switch. This way FP save/restore has minimal impact on KVM
performance.
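
[Not part of the patch: a minimal, self-contained user-space sketch of the
lazy policy described above, using hypothetical type and helper names. It
only models the sstatus.FS state machine that the code below relies on:
guest FP state is written back only when FS reads Dirty, restored only when
FS is not Off, and then marked Clean. The FS field encoding (bits 14:13:
Off=0, Initial=1, Clean=2, Dirty=3) follows the RISC-V privileged spec.]

/*
 * Illustration only (hypothetical names, not part of this patch):
 * the lazy FP switching policy modeled as a state machine over sstatus.FS.
 */
#include <stdio.h>

#define SR_FS		0x6000UL	/* sstatus.FS field mask (bits 14:13) */
#define SR_FS_OFF	0x0000UL
#define SR_FS_INITIAL	0x2000UL
#define SR_FS_CLEAN	0x4000UL
#define SR_FS_DIRTY	0x6000UL

struct fp_ctx {
	unsigned long sstatus;		/* guest copy of sstatus */
};

/* Save only if the guest actually dirtied the FP state. */
static void lazy_fp_save(struct fp_ctx *c)
{
	if ((c->sstatus & SR_FS) == SR_FS_DIRTY) {
		printf("save FP registers\n");	/* stands in for __kvm_riscv_fp_{f,d}_save() */
		c->sstatus = (c->sstatus & ~SR_FS) | SR_FS_CLEAN;
	}
}

/* Restore unless FP is Off for this guest. */
static void lazy_fp_restore(struct fp_ctx *c)
{
	if ((c->sstatus & SR_FS) != SR_FS_OFF) {
		printf("restore FP registers\n");	/* stands in for __kvm_riscv_fp_{f,d}_restore() */
		c->sstatus = (c->sstatus & ~SR_FS) | SR_FS_CLEAN;
	}
}

int main(void)
{
	struct fp_ctx guest = { .sstatus = SR_FS_INITIAL };

	lazy_fp_restore(&guest);	/* vcpu_load: FS != Off -> restore */
	guest.sstatus |= SR_FS_DIRTY;	/* guest uses the FPU */
	lazy_fp_save(&guest);		/* vcpu_put: Dirty -> save, mark Clean */
	lazy_fp_save(&guest);		/* next vcpu_put: Clean -> nothing to do */
	return 0;
}
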
Signed-off-by: Atish Patra
Signed-off-by: Anup Patel
Acked-by: Paolo Bonzini
Reviewed-by: Paolo Bonzini
Reviewed-by: Alexander Graf
---
 arch/riscv/include/asm/kvm_host.h |   5 +
 arch/riscv/kernel/asm-offsets.c   |  72 +++++++++++++
 arch/riscv/kvm/vcpu.c             |  81 ++++++++++++++
 arch/riscv/kvm/vcpu_switch.S      | 174 ++++++++++++++++++++++++++++++
 4 files changed, 332 insertions(+)

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 9179ff019235..928c67828b1b 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -117,6 +117,7 @@ struct kvm_cpu_context {
 	unsigned long sepc;
 	unsigned long sstatus;
 	unsigned long hstatus;
+	union __riscv_fp_state fp;
 };
 
 struct kvm_vcpu_csr {
@@ -236,6 +237,10 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 			unsigned long scause, unsigned long stval);
 
 void __kvm_riscv_switch_to(struct kvm_vcpu_arch *vcpu_arch);
+void __kvm_riscv_fp_f_save(struct kvm_cpu_context *context);
+void __kvm_riscv_fp_f_restore(struct kvm_cpu_context *context);
+void __kvm_riscv_fp_d_save(struct kvm_cpu_context *context);
+void __kvm_riscv_fp_d_restore(struct kvm_cpu_context *context);
 
 int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq);
 int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq);
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
index 711656710190..9980069a1acf 100644
--- a/arch/riscv/kernel/asm-offsets.c
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -185,6 +185,78 @@ void asm_offsets(void)
 	OFFSET(KVM_ARCH_HOST_SSCRATCH, kvm_vcpu_arch, host_sscratch);
 	OFFSET(KVM_ARCH_HOST_STVEC, kvm_vcpu_arch, host_stvec);
 
+	/* F extension */
+
+	OFFSET(KVM_ARCH_FP_F_F0, kvm_cpu_context, fp.f.f[0]);
+	OFFSET(KVM_ARCH_FP_F_F1, kvm_cpu_context, fp.f.f[1]);
+	OFFSET(KVM_ARCH_FP_F_F2, kvm_cpu_context, fp.f.f[2]);
+	OFFSET(KVM_ARCH_FP_F_F3, kvm_cpu_context, fp.f.f[3]);
+	OFFSET(KVM_ARCH_FP_F_F4, kvm_cpu_context, fp.f.f[4]);
+	OFFSET(KVM_ARCH_FP_F_F5, kvm_cpu_context, fp.f.f[5]);
+	OFFSET(KVM_ARCH_FP_F_F6, kvm_cpu_context, fp.f.f[6]);
+	OFFSET(KVM_ARCH_FP_F_F7, kvm_cpu_context, fp.f.f[7]);
+	OFFSET(KVM_ARCH_FP_F_F8, kvm_cpu_context, fp.f.f[8]);
+	OFFSET(KVM_ARCH_FP_F_F9, kvm_cpu_context, fp.f.f[9]);
+	OFFSET(KVM_ARCH_FP_F_F10, kvm_cpu_context, fp.f.f[10]);
+	OFFSET(KVM_ARCH_FP_F_F11, kvm_cpu_context, fp.f.f[11]);
+	OFFSET(KVM_ARCH_FP_F_F12, kvm_cpu_context, fp.f.f[12]);
+	OFFSET(KVM_ARCH_FP_F_F13, kvm_cpu_context, fp.f.f[13]);
+	OFFSET(KVM_ARCH_FP_F_F14, kvm_cpu_context, fp.f.f[14]);
+	OFFSET(KVM_ARCH_FP_F_F15, kvm_cpu_context, fp.f.f[15]);
+	OFFSET(KVM_ARCH_FP_F_F16, kvm_cpu_context, fp.f.f[16]);
+	OFFSET(KVM_ARCH_FP_F_F17, kvm_cpu_context, fp.f.f[17]);
+	OFFSET(KVM_ARCH_FP_F_F18, kvm_cpu_context, fp.f.f[18]);
+	OFFSET(KVM_ARCH_FP_F_F19, kvm_cpu_context, fp.f.f[19]);
+	OFFSET(KVM_ARCH_FP_F_F20, kvm_cpu_context, fp.f.f[20]);
+	OFFSET(KVM_ARCH_FP_F_F21, kvm_cpu_context, fp.f.f[21]);
+	OFFSET(KVM_ARCH_FP_F_F22, kvm_cpu_context, fp.f.f[22]);
+	OFFSET(KVM_ARCH_FP_F_F23, kvm_cpu_context, fp.f.f[23]);
+	OFFSET(KVM_ARCH_FP_F_F24, kvm_cpu_context, fp.f.f[24]);
+	OFFSET(KVM_ARCH_FP_F_F25, kvm_cpu_context, fp.f.f[25]);
+	OFFSET(KVM_ARCH_FP_F_F26, kvm_cpu_context, fp.f.f[26]);
+	OFFSET(KVM_ARCH_FP_F_F27, kvm_cpu_context, fp.f.f[27]);
+	OFFSET(KVM_ARCH_FP_F_F28, kvm_cpu_context, fp.f.f[28]);
+	OFFSET(KVM_ARCH_FP_F_F29, kvm_cpu_context, fp.f.f[29]);
+	OFFSET(KVM_ARCH_FP_F_F30, kvm_cpu_context, fp.f.f[30]);
+	OFFSET(KVM_ARCH_FP_F_F31, kvm_cpu_context, fp.f.f[31]);
+	OFFSET(KVM_ARCH_FP_F_FCSR, kvm_cpu_context, fp.f.fcsr);
+
+	/* D extension */
+
+	OFFSET(KVM_ARCH_FP_D_F0, kvm_cpu_context, fp.d.f[0]);
+	OFFSET(KVM_ARCH_FP_D_F1, kvm_cpu_context, fp.d.f[1]);
+	OFFSET(KVM_ARCH_FP_D_F2, kvm_cpu_context, fp.d.f[2]);
+	OFFSET(KVM_ARCH_FP_D_F3, kvm_cpu_context, fp.d.f[3]);
+	OFFSET(KVM_ARCH_FP_D_F4, kvm_cpu_context, fp.d.f[4]);
+	OFFSET(KVM_ARCH_FP_D_F5, kvm_cpu_context, fp.d.f[5]);
+	OFFSET(KVM_ARCH_FP_D_F6, kvm_cpu_context, fp.d.f[6]);
+	OFFSET(KVM_ARCH_FP_D_F7, kvm_cpu_context, fp.d.f[7]);
+	OFFSET(KVM_ARCH_FP_D_F8, kvm_cpu_context, fp.d.f[8]);
+	OFFSET(KVM_ARCH_FP_D_F9, kvm_cpu_context, fp.d.f[9]);
+	OFFSET(KVM_ARCH_FP_D_F10, kvm_cpu_context, fp.d.f[10]);
+	OFFSET(KVM_ARCH_FP_D_F11, kvm_cpu_context, fp.d.f[11]);
+	OFFSET(KVM_ARCH_FP_D_F12, kvm_cpu_context, fp.d.f[12]);
+	OFFSET(KVM_ARCH_FP_D_F13, kvm_cpu_context, fp.d.f[13]);
+	OFFSET(KVM_ARCH_FP_D_F14, kvm_cpu_context, fp.d.f[14]);
+	OFFSET(KVM_ARCH_FP_D_F15, kvm_cpu_context, fp.d.f[15]);
+	OFFSET(KVM_ARCH_FP_D_F16, kvm_cpu_context, fp.d.f[16]);
+	OFFSET(KVM_ARCH_FP_D_F17, kvm_cpu_context, fp.d.f[17]);
+	OFFSET(KVM_ARCH_FP_D_F18, kvm_cpu_context, fp.d.f[18]);
+	OFFSET(KVM_ARCH_FP_D_F19, kvm_cpu_context, fp.d.f[19]);
+	OFFSET(KVM_ARCH_FP_D_F20, kvm_cpu_context, fp.d.f[20]);
+	OFFSET(KVM_ARCH_FP_D_F21, kvm_cpu_context, fp.d.f[21]);
+	OFFSET(KVM_ARCH_FP_D_F22, kvm_cpu_context, fp.d.f[22]);
+	OFFSET(KVM_ARCH_FP_D_F23, kvm_cpu_context, fp.d.f[23]);
+	OFFSET(KVM_ARCH_FP_D_F24, kvm_cpu_context, fp.d.f[24]);
+	OFFSET(KVM_ARCH_FP_D_F25, kvm_cpu_context, fp.d.f[25]);
+	OFFSET(KVM_ARCH_FP_D_F26, kvm_cpu_context, fp.d.f[26]);
+	OFFSET(KVM_ARCH_FP_D_F27, kvm_cpu_context, fp.d.f[27]);
+	OFFSET(KVM_ARCH_FP_D_F28, kvm_cpu_context, fp.d.f[28]);
+	OFFSET(KVM_ARCH_FP_D_F29, kvm_cpu_context, fp.d.f[29]);
+	OFFSET(KVM_ARCH_FP_D_F30, kvm_cpu_context, fp.d.f[30]);
+	OFFSET(KVM_ARCH_FP_D_F31, kvm_cpu_context, fp.d.f[31]);
+	OFFSET(KVM_ARCH_FP_D_FCSR, kvm_cpu_context, fp.d.fcsr);
+
 	/*
 	 * THREAD_{F,X}* might be larger than a S-type offset can handle, but
 	 * these are used in performance-sensitive assembly so we can't resort
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 7c270ba5e4b4..bb3f2a857b22 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -32,6 +32,76 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 	{ NULL }
 };
 
+#ifdef CONFIG_FPU
+static void kvm_riscv_vcpu_fp_reset(struct kvm_vcpu *vcpu)
+{
+	unsigned long isa = vcpu->arch.isa;
+	struct kvm_cpu_context *cntx = &vcpu->arch.guest_context;
+
+	cntx->sstatus &= ~SR_FS;
+	if (riscv_isa_extension_available(&isa, f) ||
+	    riscv_isa_extension_available(&isa, d))
+		cntx->sstatus |= SR_FS_INITIAL;
+	else
+		cntx->sstatus |= SR_FS_OFF;
+}
+
+static void kvm_riscv_vcpu_fp_clean(struct kvm_cpu_context *cntx)
+{
+	cntx->sstatus &= ~SR_FS;
+	cntx->sstatus |= SR_FS_CLEAN;
+}
+
+static void kvm_riscv_vcpu_guest_fp_save(struct kvm_cpu_context *cntx,
+					 unsigned long isa)
+{
+	if ((cntx->sstatus & SR_FS) == SR_FS_DIRTY) {
+		if (riscv_isa_extension_available(&isa, d))
+			__kvm_riscv_fp_d_save(cntx);
+		else if (riscv_isa_extension_available(&isa, f))
+			__kvm_riscv_fp_f_save(cntx);
+		kvm_riscv_vcpu_fp_clean(cntx);
+	}
+}
+
+static void kvm_riscv_vcpu_guest_fp_restore(struct kvm_cpu_context *cntx,
+					    unsigned long isa)
+{
+	if ((cntx->sstatus & SR_FS) != SR_FS_OFF) {
+		if (riscv_isa_extension_available(&isa, d))
+			__kvm_riscv_fp_d_restore(cntx);
+		else if (riscv_isa_extension_available(&isa, f))
+			__kvm_riscv_fp_f_restore(cntx);
+		kvm_riscv_vcpu_fp_clean(cntx);
+	}
+}
+
+static void kvm_riscv_vcpu_host_fp_save(struct kvm_cpu_context *cntx)
+{
+	/* No need to check host sstatus as it can be modified outside */
+	if (riscv_isa_extension_available(NULL, d))
+		__kvm_riscv_fp_d_save(cntx);
+	else if (riscv_isa_extension_available(NULL, f))
+		__kvm_riscv_fp_f_save(cntx);
+}
+
+static void kvm_riscv_vcpu_host_fp_restore(struct kvm_cpu_context *cntx)
+{
+	if (riscv_isa_extension_available(NULL, d))
+		__kvm_riscv_fp_d_restore(cntx);
+	else if (riscv_isa_extension_available(NULL, f))
+		__kvm_riscv_fp_f_restore(cntx);
+}
+#else
+static void kvm_riscv_vcpu_fp_reset(struct kvm_vcpu *vcpu) {}
+static void kvm_riscv_vcpu_guest_fp_save(struct kvm_cpu_context *cntx,
+					 unsigned long isa) {}
+static void kvm_riscv_vcpu_guest_fp_restore(struct kvm_cpu_context *cntx,
+					    unsigned long isa) {}
+static void kvm_riscv_vcpu_host_fp_save(struct kvm_cpu_context *cntx) {}
+static void kvm_riscv_vcpu_host_fp_restore(struct kvm_cpu_context *cntx) {}
+#endif
+
 #define KVM_RISCV_ISA_ALLOWED	(riscv_isa_extension_mask(a) | \
 				 riscv_isa_extension_mask(c) | \
 				 riscv_isa_extension_mask(d) | \
@@ -54,6 +124,8 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu)
 
 	memcpy(cntx, reset_cntx, sizeof(*cntx));
 
+	kvm_riscv_vcpu_fp_reset(vcpu);
+
 	kvm_riscv_vcpu_timer_reset(vcpu);
 
 	WRITE_ONCE(vcpu->arch.irqs_pending, 0);
@@ -223,6 +295,7 @@ static int kvm_riscv_vcpu_set_reg_config(struct kvm_vcpu *vcpu,
 		vcpu->arch.isa = reg_val;
 		vcpu->arch.isa &= riscv_isa_extension_base(NULL);
 		vcpu->arch.isa &= KVM_RISCV_ISA_ALLOWED;
+		kvm_riscv_vcpu_fp_reset(vcpu);
 	} else {
 		return -ENOTSUPP;
 	}
@@ -600,6 +673,10 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 
 	kvm_riscv_stage2_update_hgatp(vcpu);
 
+	kvm_riscv_vcpu_host_fp_save(&vcpu->arch.host_context);
+	kvm_riscv_vcpu_guest_fp_restore(&vcpu->arch.guest_context,
+					vcpu->arch.isa);
+
 	vcpu->cpu = cpu;
 }
 
@@ -609,6 +686,10 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 
 	vcpu->cpu = -1;
 
+	kvm_riscv_vcpu_guest_fp_save(&vcpu->arch.guest_context,
+				     vcpu->arch.isa);
+	kvm_riscv_vcpu_host_fp_restore(&vcpu->arch.host_context);
+
 	csr_write(CSR_HGATP, 0);
 
 	csr->vsstatus = csr_read(CSR_VSSTATUS);
diff --git a/arch/riscv/kvm/vcpu_switch.S b/arch/riscv/kvm/vcpu_switch.S
index ee45c331666f..3b174c2c9dbc 100644
--- a/arch/riscv/kvm/vcpu_switch.S
+++ b/arch/riscv/kvm/vcpu_switch.S
@@ -200,3 +200,177 @@ ENTRY(__kvm_riscv_unpriv_trap)
 	csrr a1, CSR_SCAUSE
 	sret
 ENDPROC(__kvm_riscv_unpriv_trap)
+
+#ifdef CONFIG_FPU
+	.align 3
+	.global __kvm_riscv_fp_f_save
+__kvm_riscv_fp_f_save:
+	csrr t2, CSR_SSTATUS
+	li t1, SR_FS
+	csrs CSR_SSTATUS, t1
+	frcsr t0
+	fsw f0, KVM_ARCH_FP_F_F0(a0)
+	fsw f1, KVM_ARCH_FP_F_F1(a0)
+	fsw f2, KVM_ARCH_FP_F_F2(a0)
+	fsw f3, KVM_ARCH_FP_F_F3(a0)
+	fsw f4, KVM_ARCH_FP_F_F4(a0)
+	fsw f5, KVM_ARCH_FP_F_F5(a0)
+	fsw f6, KVM_ARCH_FP_F_F6(a0)
+	fsw f7, KVM_ARCH_FP_F_F7(a0)
+	fsw f8, KVM_ARCH_FP_F_F8(a0)
+	fsw f9, KVM_ARCH_FP_F_F9(a0)
+	fsw f10, KVM_ARCH_FP_F_F10(a0)
+	fsw f11, KVM_ARCH_FP_F_F11(a0)
+	fsw f12, KVM_ARCH_FP_F_F12(a0)
+	fsw f13, KVM_ARCH_FP_F_F13(a0)
+	fsw f14, KVM_ARCH_FP_F_F14(a0)
+	fsw f15, KVM_ARCH_FP_F_F15(a0)
+	fsw f16, KVM_ARCH_FP_F_F16(a0)
+	fsw f17, KVM_ARCH_FP_F_F17(a0)
+	fsw f18, KVM_ARCH_FP_F_F18(a0)
+	fsw f19, KVM_ARCH_FP_F_F19(a0)
+	fsw f20, KVM_ARCH_FP_F_F20(a0)
+	fsw f21, KVM_ARCH_FP_F_F21(a0)
+	fsw f22, KVM_ARCH_FP_F_F22(a0)
+	fsw f23, KVM_ARCH_FP_F_F23(a0)
+	fsw f24, KVM_ARCH_FP_F_F24(a0)
+	fsw f25, KVM_ARCH_FP_F_F25(a0)
+	fsw f26, KVM_ARCH_FP_F_F26(a0)
+	fsw f27, KVM_ARCH_FP_F_F27(a0)
+	fsw f28, KVM_ARCH_FP_F_F28(a0)
+	fsw f29, KVM_ARCH_FP_F_F29(a0)
+	fsw f30, KVM_ARCH_FP_F_F30(a0)
+	fsw f31, KVM_ARCH_FP_F_F31(a0)
+	sw t0, KVM_ARCH_FP_F_FCSR(a0)
+	csrw CSR_SSTATUS, t2
+	ret
+
+	.align 3
+	.global __kvm_riscv_fp_d_save
+__kvm_riscv_fp_d_save:
+	csrr t2, CSR_SSTATUS
+	li t1, SR_FS
+	csrs CSR_SSTATUS, t1
+	frcsr t0
+	fsd f0, KVM_ARCH_FP_D_F0(a0)
+	fsd f1, KVM_ARCH_FP_D_F1(a0)
+	fsd f2, KVM_ARCH_FP_D_F2(a0)
+	fsd f3, KVM_ARCH_FP_D_F3(a0)
+	fsd f4, KVM_ARCH_FP_D_F4(a0)
+	fsd f5, KVM_ARCH_FP_D_F5(a0)
+	fsd f6, KVM_ARCH_FP_D_F6(a0)
+	fsd f7, KVM_ARCH_FP_D_F7(a0)
+	fsd f8, KVM_ARCH_FP_D_F8(a0)
+	fsd f9, KVM_ARCH_FP_D_F9(a0)
+	fsd f10, KVM_ARCH_FP_D_F10(a0)
+	fsd f11, KVM_ARCH_FP_D_F11(a0)
+	fsd f12, KVM_ARCH_FP_D_F12(a0)
+	fsd f13, KVM_ARCH_FP_D_F13(a0)
+	fsd f14, KVM_ARCH_FP_D_F14(a0)
+	fsd f15, KVM_ARCH_FP_D_F15(a0)
+	fsd f16, KVM_ARCH_FP_D_F16(a0)
+	fsd f17, KVM_ARCH_FP_D_F17(a0)
+	fsd f18, KVM_ARCH_FP_D_F18(a0)
+	fsd f19, KVM_ARCH_FP_D_F19(a0)
+	fsd f20, KVM_ARCH_FP_D_F20(a0)
+	fsd f21, KVM_ARCH_FP_D_F21(a0)
+	fsd f22, KVM_ARCH_FP_D_F22(a0)
+	fsd f23, KVM_ARCH_FP_D_F23(a0)
+	fsd f24, KVM_ARCH_FP_D_F24(a0)
+	fsd f25, KVM_ARCH_FP_D_F25(a0)
+	fsd f26, KVM_ARCH_FP_D_F26(a0)
+	fsd f27, KVM_ARCH_FP_D_F27(a0)
+	fsd f28, KVM_ARCH_FP_D_F28(a0)
+	fsd f29, KVM_ARCH_FP_D_F29(a0)
+	fsd f30, KVM_ARCH_FP_D_F30(a0)
+	fsd f31, KVM_ARCH_FP_D_F31(a0)
+	sw t0, KVM_ARCH_FP_D_FCSR(a0)
+	csrw CSR_SSTATUS, t2
+	ret
+
+	.align 3
+	.global __kvm_riscv_fp_f_restore
+__kvm_riscv_fp_f_restore:
+	csrr t2, CSR_SSTATUS
+	li t1, SR_FS
+	lw t0, KVM_ARCH_FP_F_FCSR(a0)
+	csrs CSR_SSTATUS, t1
+	flw f0, KVM_ARCH_FP_F_F0(a0)
+	flw f1, KVM_ARCH_FP_F_F1(a0)
+	flw f2, KVM_ARCH_FP_F_F2(a0)
+	flw f3, KVM_ARCH_FP_F_F3(a0)
+	flw f4, KVM_ARCH_FP_F_F4(a0)
+	flw f5, KVM_ARCH_FP_F_F5(a0)
+	flw f6, KVM_ARCH_FP_F_F6(a0)
+	flw f7, KVM_ARCH_FP_F_F7(a0)
+	flw f8, KVM_ARCH_FP_F_F8(a0)
+	flw f9, KVM_ARCH_FP_F_F9(a0)
+	flw f10, KVM_ARCH_FP_F_F10(a0)
+	flw f11, KVM_ARCH_FP_F_F11(a0)
+	flw f12, KVM_ARCH_FP_F_F12(a0)
+	flw f13, KVM_ARCH_FP_F_F13(a0)
+	flw f14, KVM_ARCH_FP_F_F14(a0)
+	flw f15, KVM_ARCH_FP_F_F15(a0)
+	flw f16, KVM_ARCH_FP_F_F16(a0)
+	flw f17, KVM_ARCH_FP_F_F17(a0)
+	flw f18, KVM_ARCH_FP_F_F18(a0)
+	flw f19, KVM_ARCH_FP_F_F19(a0)
+	flw f20, KVM_ARCH_FP_F_F20(a0)
+	flw f21, KVM_ARCH_FP_F_F21(a0)
+	flw f22, KVM_ARCH_FP_F_F22(a0)
+	flw f23, KVM_ARCH_FP_F_F23(a0)
+	flw f24, KVM_ARCH_FP_F_F24(a0)
+	flw f25, KVM_ARCH_FP_F_F25(a0)
+	flw f26, KVM_ARCH_FP_F_F26(a0)
+	flw f27, KVM_ARCH_FP_F_F27(a0)
+	flw f28, KVM_ARCH_FP_F_F28(a0)
+	flw f29, KVM_ARCH_FP_F_F29(a0)
+	flw f30, KVM_ARCH_FP_F_F30(a0)
+	flw f31, KVM_ARCH_FP_F_F31(a0)
+	fscsr t0
+	csrw CSR_SSTATUS, t2
+	ret
+
+	.align 3
+	.global __kvm_riscv_fp_d_restore
+__kvm_riscv_fp_d_restore:
+	csrr t2, CSR_SSTATUS
+	li t1, SR_FS
+	lw t0, KVM_ARCH_FP_D_FCSR(a0)
+	csrs CSR_SSTATUS, t1
+	fld f0, KVM_ARCH_FP_D_F0(a0)
+	fld f1, KVM_ARCH_FP_D_F1(a0)
+	fld f2, KVM_ARCH_FP_D_F2(a0)
+	fld f3, KVM_ARCH_FP_D_F3(a0)
+	fld f4, KVM_ARCH_FP_D_F4(a0)
+	fld f5, KVM_ARCH_FP_D_F5(a0)
+	fld f6, KVM_ARCH_FP_D_F6(a0)
+	fld f7, KVM_ARCH_FP_D_F7(a0)
+	fld f8, KVM_ARCH_FP_D_F8(a0)
+	fld f9, KVM_ARCH_FP_D_F9(a0)
+	fld f10, KVM_ARCH_FP_D_F10(a0)
+	fld f11, KVM_ARCH_FP_D_F11(a0)
+	fld f12, KVM_ARCH_FP_D_F12(a0)
+	fld f13, KVM_ARCH_FP_D_F13(a0)
+	fld f14, KVM_ARCH_FP_D_F14(a0)
+	fld f15, KVM_ARCH_FP_D_F15(a0)
+	fld f16, KVM_ARCH_FP_D_F16(a0)
+	fld f17, KVM_ARCH_FP_D_F17(a0)
+	fld f18, KVM_ARCH_FP_D_F18(a0)
+	fld f19, KVM_ARCH_FP_D_F19(a0)
+	fld f20, KVM_ARCH_FP_D_F20(a0)
+	fld f21, KVM_ARCH_FP_D_F21(a0)
+	fld f22, KVM_ARCH_FP_D_F22(a0)
+	fld f23, KVM_ARCH_FP_D_F23(a0)
+	fld f24, KVM_ARCH_FP_D_F24(a0)
+	fld f25, KVM_ARCH_FP_D_F25(a0)
+	fld f26, KVM_ARCH_FP_D_F26(a0)
+	fld f27, KVM_ARCH_FP_D_F27(a0)
+	fld f28, KVM_ARCH_FP_D_F28(a0)
+	fld f29, KVM_ARCH_FP_D_F29(a0)
+	fld f30, KVM_ARCH_FP_D_F30(a0)
+	fld f31, KVM_ARCH_FP_D_F31(a0)
+	fscsr t0
+	csrw CSR_SSTATUS, t2
+	ret
+#endif
-- 
2.17.1