From mboxrd@z Thu Jan  1 00:00:00 1970
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley, Albert Ou, Paolo Bonzini
Subject: [PATCH v16 10/17] RISC-V: KVM: Implement stage2 page table programming
Date: Fri, 15 Jan 2021 17:48:39 +0530
Message-Id: <20210115121846.114528-11-anup.patel@wdc.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210115121846.114528-1-anup.patel@wdc.com>
References: <20210115121846.114528-1-anup.patel@wdc.com>
MIME-Version: 1.0
Cc: Damien Le Moal, kvm@vger.kernel.org, Anup Patel,
 linux-kernel@vger.kernel.org, Atish Patra, Alistair Francis,
 kvm-riscv@lists.infradead.org, Alexander Graf,
 linux-riscv@lists.infradead.org, Yifei Jiang
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

This patch implements all required functions for programming the
stage2 page table for each Guest/VM. At a high level, the flow of the
stage2-related functions is similar to that of the KVM ARM/ARM64
implementation, but the stage2 page table format is quite different
for KVM RISC-V.

[jiangyifei: stage2 dirty log support]
Signed-off-by: Yifei Jiang
Signed-off-by: Anup Patel
Acked-by: Paolo Bonzini
Reviewed-by: Paolo Bonzini
---
 arch/riscv/include/asm/kvm_host.h     |  12 +
 arch/riscv/include/asm/pgtable-bits.h |   1 +
 arch/riscv/kvm/Kconfig                |   1 +
 arch/riscv/kvm/main.c                 |  19 +
 arch/riscv/kvm/mmu.c                  | 650 +++++++++++++++++++++++++-
 arch/riscv/kvm/vm.c                   |   6 -
 6 files changed, 673 insertions(+), 16 deletions(-)
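
A note on the hgatp programming done by kvm_riscv_stage2_update_hgatp()
below: the CSR value is composed from three fields (translation mode,
VMID, and the PPN of the 16 KiB root table). The following sketch is
illustrative only and not part of the patch; it assumes the HGATP_*
masks and shifts from asm/csr.h that the patch itself uses, and
make_hgatp() is a hypothetical helper name:

	/*
	 * Sketch only: compose a G-stage hgatp value. mode_field is
	 * already shifted, i.e. MODE << HGATP_MODE_SHIFT, matching
	 * the stage2_mode variable in mmu.c below.
	 */
	static unsigned long make_hgatp(unsigned long mode_field,
					unsigned long vmid,
					unsigned long pgd_phys)
	{
		unsigned long hgatp = mode_field;

		/* VMID tags TLB entries of this Guest/VM */
		hgatp |= (vmid << HGATP_VMID_SHIFT) & HGATP_VMID_MASK;
		/* physical page number of the 4-page root table */
		hgatp |= (pgd_phys >> PAGE_SHIFT) & HGATP_PPN;

		return hgatp;
	}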
"Sv32x4"; + break; + case HGATP_MODE_SV39X4: + str = "Sv39x4"; + break; + case HGATP_MODE_SV48X4: + str = "Sv48x4"; + break; + default: + return -ENODEV; + } + kvm_info("using %s G-stage page table format\n", str); + kvm_info("VMID %ld bits available\n", kvm_riscv_stage2_vmid_bits()); return 0; diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c index 8fb356e68cc5..e311c4bb0b8a 100644 --- a/arch/riscv/kvm/mmu.c +++ b/arch/riscv/kvm/mmu.c @@ -17,11 +17,415 @@ #include #include #include +#include + +#ifdef CONFIG_64BIT +static unsigned long stage2_mode = (HGATP_MODE_SV39X4 << HGATP_MODE_SHIFT); +static unsigned long stage2_pgd_levels = 3; +#define stage2_index_bits 9 +#else +static unsigned long stage2_mode = (HGATP_MODE_SV32X4 << HGATP_MODE_SHIFT); +static unsigned long stage2_pgd_levels = 2; +#define stage2_index_bits 10 +#endif + +#define stage2_pgd_xbits 2 +#define stage2_pgd_size (1UL << (HGATP_PAGE_SHIFT + stage2_pgd_xbits)) +#define stage2_gpa_bits (HGATP_PAGE_SHIFT + \ + (stage2_pgd_levels * stage2_index_bits) + \ + stage2_pgd_xbits) +#define stage2_gpa_size ((gpa_t)(1ULL << stage2_gpa_bits)) + +static inline unsigned long stage2_pte_index(gpa_t addr, u32 level) +{ + unsigned long mask; + unsigned long shift = HGATP_PAGE_SHIFT + (stage2_index_bits * level); + + if (level == (stage2_pgd_levels - 1)) + mask = (PTRS_PER_PTE * (1UL << stage2_pgd_xbits)) - 1; + else + mask = PTRS_PER_PTE - 1; + + return (addr >> shift) & mask; +} + +static inline unsigned long stage2_pte_page_vaddr(pte_t pte) +{ + return (unsigned long)pfn_to_virt(pte_val(pte) >> _PAGE_PFN_SHIFT); +} + +static int stage2_page_size_to_level(unsigned long page_size, u32 *out_level) +{ + u32 i; + unsigned long psz = 1UL << 12; + + for (i = 0; i < stage2_pgd_levels; i++) { + if (page_size == (psz << (i * stage2_index_bits))) { + *out_level = i; + return 0; + } + } + + return -EINVAL; +} + +static int stage2_level_to_page_size(u32 level, unsigned long *out_pgsize) +{ + if (stage2_pgd_levels < level) + return -EINVAL; + + *out_pgsize = 1UL << (12 + (level * stage2_index_bits)); + + return 0; +} + +static int stage2_cache_topup(struct kvm_mmu_page_cache *pcache, + int min, int max) +{ + void *page; + + BUG_ON(max > KVM_MMU_PAGE_CACHE_NR_OBJS); + if (pcache->nobjs >= min) + return 0; + while (pcache->nobjs < max) { + page = (void *)__get_free_page(GFP_KERNEL | __GFP_ZERO); + if (!page) + return -ENOMEM; + pcache->objects[pcache->nobjs++] = page; + } + + return 0; +} + +static void stage2_cache_flush(struct kvm_mmu_page_cache *pcache) +{ + while (pcache && pcache->nobjs) + free_page((unsigned long)pcache->objects[--pcache->nobjs]); +} + +static void *stage2_cache_alloc(struct kvm_mmu_page_cache *pcache) +{ + void *p; + + if (!pcache) + return NULL; + + BUG_ON(!pcache->nobjs); + p = pcache->objects[--pcache->nobjs]; + + return p; +} + +static bool stage2_get_leaf_entry(struct kvm *kvm, gpa_t addr, + pte_t **ptepp, u32 *ptep_level) +{ + pte_t *ptep; + u32 current_level = stage2_pgd_levels - 1; + + *ptep_level = current_level; + ptep = (pte_t *)kvm->arch.pgd; + ptep = &ptep[stage2_pte_index(addr, current_level)]; + while (ptep && pte_val(*ptep)) { + if (pte_val(*ptep) & _PAGE_LEAF) { + *ptep_level = current_level; + *ptepp = ptep; + return true; + } + + if (current_level) { + current_level--; + *ptep_level = current_level; + ptep = (pte_t *)stage2_pte_page_vaddr(*ptep); + ptep = &ptep[stage2_pte_index(addr, current_level)]; + } else { + ptep = NULL; + } + } + + return false; +} + +static void stage2_remote_tlb_flush(struct kvm 
+static void stage2_remote_tlb_flush(struct kvm *kvm, u32 level, gpa_t addr)
+{
+	struct cpumask hmask;
+	unsigned long size = PAGE_SIZE;
+	struct kvm_vmid *vmid = &kvm->arch.vmid;
+
+	if (stage2_level_to_page_size(level, &size))
+		return;
+	addr &= ~(size - 1);
+
+	/*
+	 * TODO: Instead of cpu_online_mask, we should only target CPUs
+	 * where the Guest/VM is running.
+	 */
+	preempt_disable();
+	riscv_cpuid_to_hartid_mask(cpu_online_mask, &hmask);
+	sbi_remote_hfence_gvma_vmid(cpumask_bits(&hmask), addr, size,
+				    READ_ONCE(vmid->vmid));
+	preempt_enable();
+}
+
+static int stage2_set_pte(struct kvm *kvm, u32 level,
+			  struct kvm_mmu_page_cache *pcache,
+			  gpa_t addr, const pte_t *new_pte)
+{
+	u32 current_level = stage2_pgd_levels - 1;
+	pte_t *next_ptep = (pte_t *)kvm->arch.pgd;
+	pte_t *ptep = &next_ptep[stage2_pte_index(addr, current_level)];
+
+	if (current_level < level)
+		return -EINVAL;
+
+	while (current_level != level) {
+		if (pte_val(*ptep) & _PAGE_LEAF)
+			return -EEXIST;
+
+		if (!pte_val(*ptep)) {
+			next_ptep = stage2_cache_alloc(pcache);
+			if (!next_ptep)
+				return -ENOMEM;
+			*ptep = pfn_pte(PFN_DOWN(__pa(next_ptep)),
+					__pgprot(_PAGE_TABLE));
+		} else {
+			if (pte_val(*ptep) & _PAGE_LEAF)
+				return -EEXIST;
+			next_ptep = (pte_t *)stage2_pte_page_vaddr(*ptep);
+		}
+
+		current_level--;
+		ptep = &next_ptep[stage2_pte_index(addr, current_level)];
+	}
+
+	*ptep = *new_pte;
+	if (pte_val(*ptep) & _PAGE_LEAF)
+		stage2_remote_tlb_flush(kvm, current_level, addr);
+
+	return 0;
+}
+
+static int stage2_map_page(struct kvm *kvm,
+			   struct kvm_mmu_page_cache *pcache,
+			   gpa_t gpa, phys_addr_t hpa,
+			   unsigned long page_size,
+			   bool page_rdonly, bool page_exec)
+{
+	int ret;
+	u32 level = 0;
+	pte_t new_pte;
+	pgprot_t prot;
+
+	ret = stage2_page_size_to_level(page_size, &level);
+	if (ret)
+		return ret;
+
+	/*
+	 * A RISC-V implementation can choose to either:
+	 * 1) Update 'A' and 'D' PTE bits in hardware
+	 * 2) Generate page fault when 'A' and/or 'D' bits are not set
+	 *    in the PTE so that software can update these bits.
+	 *
+	 * We support both options mentioned above. To achieve this, we
+	 * always set 'A' and 'D' PTE bits at time of creating stage2
+	 * mapping. To support KVM dirty page logging with both options
+	 * mentioned above, we will write-protect stage2 PTEs to track
+	 * dirty pages.
+	 */
+
+	if (page_exec) {
+		if (page_rdonly)
+			prot = PAGE_READ_EXEC;
+		else
+			prot = PAGE_WRITE_EXEC;
+	} else {
+		if (page_rdonly)
+			prot = PAGE_READ;
+		else
+			prot = PAGE_WRITE;
+	}
+	new_pte = pfn_pte(PFN_DOWN(hpa), prot);
+	new_pte = pte_mkdirty(new_pte);
+
+	return stage2_set_pte(kvm, level, pcache, gpa, &new_pte);
+}
+
+enum stage2_op {
+	STAGE2_OP_NOP = 0,	/* Nothing */
+	STAGE2_OP_CLEAR,	/* Clear/Unmap */
+	STAGE2_OP_WP,		/* Write-protect */
+};
+
+static void stage2_op_pte(struct kvm *kvm, gpa_t addr,
+			  pte_t *ptep, u32 ptep_level, enum stage2_op op)
+{
+	int i, ret;
+	pte_t *next_ptep;
+	u32 next_ptep_level;
+	unsigned long next_page_size, page_size;
+
+	ret = stage2_level_to_page_size(ptep_level, &page_size);
+	if (ret)
+		return;
+
+	BUG_ON(addr & (page_size - 1));
+
+	if (!pte_val(*ptep))
+		return;
+
+	if (ptep_level && !(pte_val(*ptep) & _PAGE_LEAF)) {
+		next_ptep = (pte_t *)stage2_pte_page_vaddr(*ptep);
+		next_ptep_level = ptep_level - 1;
+		ret = stage2_level_to_page_size(next_ptep_level,
+						&next_page_size);
+		if (ret)
+			return;
+
+		if (op == STAGE2_OP_CLEAR)
+			set_pte(ptep, __pte(0));
+		for (i = 0; i < PTRS_PER_PTE; i++)
+			stage2_op_pte(kvm, addr + i * next_page_size,
+				      &next_ptep[i], next_ptep_level, op);
+		if (op == STAGE2_OP_CLEAR)
+			put_page(virt_to_page(next_ptep));
+	} else {
+		if (op == STAGE2_OP_CLEAR)
+			set_pte(ptep, __pte(0));
+		else if (op == STAGE2_OP_WP)
+			set_pte(ptep, __pte(pte_val(*ptep) & ~_PAGE_WRITE));
+		stage2_remote_tlb_flush(kvm, ptep_level, addr);
+	}
+}
+
+static void stage2_unmap_range(struct kvm *kvm, gpa_t start, gpa_t size)
+{
+	int ret;
+	pte_t *ptep;
+	u32 ptep_level;
+	bool found_leaf;
+	unsigned long page_size;
+	gpa_t addr = start, end = start + size;
+
+	while (addr < end) {
+		found_leaf = stage2_get_leaf_entry(kvm, addr,
+						   &ptep, &ptep_level);
+		ret = stage2_level_to_page_size(ptep_level, &page_size);
+		if (ret)
+			break;
+
+		if (!found_leaf)
+			goto next;
+
+		if (!(addr & (page_size - 1)) && ((end - addr) >= page_size))
+			stage2_op_pte(kvm, addr, ptep,
+				      ptep_level, STAGE2_OP_CLEAR);
+
+next:
+		addr += page_size;
+	}
+}
+
+static void stage2_wp_range(struct kvm *kvm, gpa_t start, gpa_t end)
+{
+	int ret;
+	pte_t *ptep;
+	u32 ptep_level;
+	bool found_leaf;
+	gpa_t addr = start;
+	unsigned long page_size;
+
+	while (addr < end) {
+		found_leaf = stage2_get_leaf_entry(kvm, addr,
+						   &ptep, &ptep_level);
+		ret = stage2_level_to_page_size(ptep_level, &page_size);
+		if (ret)
+			break;
+
+		if (!found_leaf)
+			goto next;
+
+		if (!(addr & (page_size - 1)) && ((end - addr) >= page_size))
+			stage2_op_pte(kvm, addr, ptep,
+				      ptep_level, STAGE2_OP_WP);
+
+next:
+		addr += page_size;
+	}
+}
+
+void stage2_wp_memory_region(struct kvm *kvm, int slot)
+{
+	struct kvm_memslots *slots = kvm_memslots(kvm);
+	struct kvm_memory_slot *memslot = id_to_memslot(slots, slot);
+	phys_addr_t start = memslot->base_gfn << PAGE_SHIFT;
+	phys_addr_t end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
+
+	spin_lock(&kvm->mmu_lock);
+	stage2_wp_range(kvm, start, end);
+	spin_unlock(&kvm->mmu_lock);
+	kvm_flush_remote_tlbs(kvm);
+}
+
+int stage2_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa,
+		   unsigned long size, bool writable)
+{
+	pte_t pte;
+	int ret = 0;
+	unsigned long pfn;
+	phys_addr_t addr, end;
+	struct kvm_mmu_page_cache pcache = { 0, };
+
+	end = (gpa + size + PAGE_SIZE - 1) & PAGE_MASK;
+	pfn = __phys_to_pfn(hpa);
+
+	for (addr = gpa; addr < end; addr += PAGE_SIZE) {
+		pte = pfn_pte(pfn, PAGE_KERNEL);
+
+		if (!writable)
+			pte = pte_wrprotect(pte);
+
+		ret = stage2_cache_topup(&pcache,
+					 stage2_pgd_levels,
+					 KVM_MMU_PAGE_CACHE_NR_OBJS);
+		if (ret)
+			goto out;
+
+		spin_lock(&kvm->mmu_lock);
+		ret = stage2_set_pte(kvm, 0, &pcache, addr, &pte);
+		spin_unlock(&kvm->mmu_lock);
+		if (ret)
+			goto out;
+
+		pfn++;
+	}
+
+out:
+	stage2_cache_flush(&pcache);
+	return ret;
+
+}
+
+void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
+					     struct kvm_memory_slot *slot,
+					     gfn_t gfn_offset,
+					     unsigned long mask)
+{
+	phys_addr_t base_gfn = slot->base_gfn + gfn_offset;
+	phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
+	phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;
+
+	stage2_wp_range(kvm, start, end);
+}
 
 void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
 {
 }
 
+void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
+					struct kvm_memory_slot *memslot)
+{
+	kvm_flush_remote_tlbs(kvm);
+}
+
 void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free)
 {
 }
@@ -38,7 +442,7 @@ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
 
 void kvm_arch_flush_shadow_all(struct kvm *kvm)
 {
-	/* TODO: */
+	kvm_riscv_stage2_free_pgd(kvm);
 }
 
 void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
@@ -52,7 +456,13 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 				   const struct kvm_memory_slot *new,
 				   enum kvm_mr_change change)
 {
-	/* TODO: */
+	/*
+	 * At this point memslot has been committed and there is an
+	 * allocated dirty_bitmap[], dirty pages will be tracked while
+	 * the memory slot is write protected.
+	 */
+	if (change != KVM_MR_DELETE && mem->flags & KVM_MEM_LOG_DIRTY_PAGES)
+		stage2_wp_memory_region(kvm, mem->slot);
 }
 
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
@@ -60,35 +470,255 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				   struct kvm_memory_slot *memslot,
 				   const struct kvm_userspace_memory_region *mem,
 				   enum kvm_mr_change change)
 {
-	/* TODO: */
-	return 0;
+	hva_t hva = mem->userspace_addr;
+	hva_t reg_end = hva + mem->memory_size;
+	bool writable = !(mem->flags & KVM_MEM_READONLY);
+	int ret = 0;
+
+	if (change != KVM_MR_CREATE && change != KVM_MR_MOVE &&
+	    change != KVM_MR_FLAGS_ONLY)
+		return 0;
+
+	/*
+	 * Prevent userspace from creating a memory region outside of
+	 * the GPA space addressable by the KVM guest.
+	 */
+	if ((memslot->base_gfn + memslot->npages) >=
+	    (stage2_gpa_size >> PAGE_SHIFT))
+		return -EFAULT;
+
+	mmap_read_lock(current->mm);
+
+	/*
+	 * A memory region could potentially cover multiple VMAs, and
+	 * any holes between them, so iterate over all of them to find
+	 * out if we can map any of them right now.
+	 *
+	 *     +--------------------------------------------+
+	 * +---------------+----------------+   +----------------+
+	 * |   : VMA 1     |      VMA 2     |   |    VMA 3  :    |
+	 * +---------------+----------------+   +----------------+
+	 *     |               memory region                |
+	 *     +--------------------------------------------+
+	 */
+	do {
+		struct vm_area_struct *vma = find_vma(current->mm, hva);
+		hva_t vm_start, vm_end;
+
+		if (!vma || vma->vm_start >= reg_end)
+			break;
+
+		/*
+		 * Mapping a read-only VMA is only allowed if the
+		 * memory region is configured as read-only.
+		 */
+		if (writable && !(vma->vm_flags & VM_WRITE)) {
+			ret = -EPERM;
+			break;
+		}
+
+		/* Take the intersection of this VMA with the memory region */
+		vm_start = max(hva, vma->vm_start);
+		vm_end = min(reg_end, vma->vm_end);
+
+		if (vma->vm_flags & VM_PFNMAP) {
+			gpa_t gpa = mem->guest_phys_addr +
+				    (vm_start - mem->userspace_addr);
+			phys_addr_t pa;
+
+			pa = (phys_addr_t)vma->vm_pgoff << PAGE_SHIFT;
+			pa += vm_start - vma->vm_start;
+
+			/* IO region dirty page logging not allowed */
+			if (memslot->flags & KVM_MEM_LOG_DIRTY_PAGES) {
+				ret = -EINVAL;
+				goto out;
+			}
+
+			ret = stage2_ioremap(kvm, gpa, pa,
+					     vm_end - vm_start, writable);
+			if (ret)
+				break;
+		}
+		hva = vm_end;
+	} while (hva < reg_end);
+
+	if (change == KVM_MR_FLAGS_ONLY)
+		goto out;
+
+	spin_lock(&kvm->mmu_lock);
+	if (ret)
+		stage2_unmap_range(kvm, mem->guest_phys_addr,
+				   mem->memory_size);
+	spin_unlock(&kvm->mmu_lock);
+
+out:
+	mmap_read_unlock(current->mm);
+	return ret;
 }
 
 int kvm_riscv_stage2_map(struct kvm_vcpu *vcpu,
 			 struct kvm_memory_slot *memslot,
 			 gpa_t gpa, unsigned long hva, bool is_write)
 {
-	/* TODO: */
-	return 0;
+	int ret;
+	kvm_pfn_t hfn;
+	bool writeable;
+	short vma_pageshift;
+	gfn_t gfn = gpa >> PAGE_SHIFT;
+	struct vm_area_struct *vma;
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_mmu_page_cache *pcache = &vcpu->arch.mmu_page_cache;
+	bool logging = (memslot->dirty_bitmap &&
+			!(memslot->flags & KVM_MEM_READONLY)) ? true : false;
+	unsigned long vma_pagesize;
+
+	mmap_read_lock(current->mm);
+
+	vma = find_vma_intersection(current->mm, hva, hva + 1);
+	if (unlikely(!vma)) {
+		kvm_err("Failed to find VMA for hva 0x%lx\n", hva);
+		mmap_read_unlock(current->mm);
+		return -EFAULT;
+	}
+
+	if (is_vm_hugetlb_page(vma))
+		vma_pageshift = huge_page_shift(hstate_vma(vma));
+	else
+		vma_pageshift = PAGE_SHIFT;
+	vma_pagesize = 1ULL << vma_pageshift;
+	if (logging || (vma->vm_flags & VM_PFNMAP))
+		vma_pagesize = PAGE_SIZE;
+
+	if (vma_pagesize == PMD_SIZE || vma_pagesize == PGDIR_SIZE)
+		gfn = (gpa & huge_page_mask(hstate_vma(vma))) >> PAGE_SHIFT;
+
+	mmap_read_unlock(current->mm);
+
+	if (vma_pagesize != PGDIR_SIZE &&
+	    vma_pagesize != PMD_SIZE &&
+	    vma_pagesize != PAGE_SIZE) {
+		kvm_err("Invalid VMA page size 0x%lx\n", vma_pagesize);
+		return -EFAULT;
+	}
+
+	/* We need minimum second+third level pages */
+	ret = stage2_cache_topup(pcache, stage2_pgd_levels,
+				 KVM_MMU_PAGE_CACHE_NR_OBJS);
+	if (ret) {
+		kvm_err("Failed to topup stage2 cache\n");
+		return ret;
+	}
+
+	hfn = gfn_to_pfn_prot(kvm, gfn, is_write, &writeable);
+	if (hfn == KVM_PFN_ERR_HWPOISON) {
+		send_sig_mceerr(BUS_MCEERR_AR, (void __user *)hva,
+				vma_pageshift, current);
+		return 0;
+	}
+	if (is_error_noslot_pfn(hfn))
+		return -EFAULT;
+
+	/*
+	 * If logging is active then we allow writable pages only
+	 * for write faults.
+	 */
+	if (logging && !is_write)
+		writeable = false;
+
+	spin_lock(&kvm->mmu_lock);
+
+	if (writeable) {
+		kvm_set_pfn_dirty(hfn);
+		mark_page_dirty(kvm, gfn);
+		ret = stage2_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT,
+				      vma_pagesize, false, true);
+	} else {
+		ret = stage2_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT,
+				      vma_pagesize, true, true);
+	}
+
+	if (ret)
+		kvm_err("Failed to map in stage2\n");
+
+	spin_unlock(&kvm->mmu_lock);
+	kvm_set_pfn_accessed(hfn);
+	kvm_release_pfn_clean(hfn);
+	return ret;
 }
 
 void kvm_riscv_stage2_flush_cache(struct kvm_vcpu *vcpu)
 {
-	/* TODO: */
+	stage2_cache_flush(&vcpu->arch.mmu_page_cache);
 }
 
 int kvm_riscv_stage2_alloc_pgd(struct kvm *kvm)
 {
-	/* TODO: */
+	struct page *pgd_page;
+
+	if (kvm->arch.pgd != NULL) {
+		kvm_err("kvm_arch already initialized?\n");
+		return -EINVAL;
+	}
+
+	pgd_page = alloc_pages(GFP_KERNEL | __GFP_ZERO,
+				get_order(stage2_pgd_size));
+	if (!pgd_page)
+		return -ENOMEM;
+	kvm->arch.pgd = page_to_virt(pgd_page);
+	kvm->arch.pgd_phys = page_to_phys(pgd_page);
+
 	return 0;
 }
 
 void kvm_riscv_stage2_free_pgd(struct kvm *kvm)
 {
-	/* TODO: */
+	void *pgd = NULL;
+
+	spin_lock(&kvm->mmu_lock);
+	if (kvm->arch.pgd) {
+		stage2_unmap_range(kvm, 0UL, stage2_gpa_size);
+		pgd = READ_ONCE(kvm->arch.pgd);
+		kvm->arch.pgd = NULL;
+		kvm->arch.pgd_phys = 0;
+	}
+	spin_unlock(&kvm->mmu_lock);
+
+	if (pgd)
+		free_pages((unsigned long)pgd, get_order(stage2_pgd_size));
 }
 
 void kvm_riscv_stage2_update_hgatp(struct kvm_vcpu *vcpu)
 {
-	/* TODO: */
+	unsigned long hgatp = stage2_mode;
+	struct kvm_arch *k = &vcpu->kvm->arch;
+
+	hgatp |= (READ_ONCE(k->vmid.vmid) << HGATP_VMID_SHIFT) &
+		 HGATP_VMID_MASK;
+	hgatp |= (k->pgd_phys >> PAGE_SHIFT) & HGATP_PPN;
+
+	csr_write(CSR_HGATP, hgatp);
+
+	if (!kvm_riscv_stage2_vmid_bits())
+		__kvm_riscv_hfence_gvma_all();
+}
+
+void kvm_riscv_stage2_mode_detect(void)
+{
+#ifdef CONFIG_64BIT
+	/* Try Sv48x4 stage2 mode */
+	csr_write(CSR_HGATP, HGATP_MODE_SV48X4 << HGATP_MODE_SHIFT);
+	if ((csr_read(CSR_HGATP) >> HGATP_MODE_SHIFT) == HGATP_MODE_SV48X4) {
+		stage2_mode = (HGATP_MODE_SV48X4 << HGATP_MODE_SHIFT);
+		stage2_pgd_levels = 4;
+	}
+	csr_write(CSR_HGATP, 0);
+
+	__kvm_riscv_hfence_gvma_all();
+#endif
+}
+
+unsigned long kvm_riscv_stage2_mode(void)
+{
+	return stage2_mode >> HGATP_MODE_SHIFT;
 }
diff --git a/arch/riscv/kvm/vm.c b/arch/riscv/kvm/vm.c
index 282d67617229..6cde69a82252 100644
--- a/arch/riscv/kvm/vm.c
+++ b/arch/riscv/kvm/vm.c
@@ -12,12 +12,6 @@
 #include 
 #include 
 
-int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
-{
-	/* TODO: To be added later. */
-	return -EOPNOTSUPP;
-}
-
 int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 {
 	int r;
--
2.25.1
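
As a closing sanity check, the G-stage geometry encoded by the
stage2_* macros in mmu.c can be verified with plain arithmetic:
GPA bits = HGATP_PAGE_SHIFT (12) + levels * index_bits + 2 extra bits
contributed by the 4-page (16 KiB) root table. The snippet below is a
standalone user-space illustration, not kernel code and not part of
the patch:

	#include <stdio.h>

	/* Check the GPA width implied by each HGATP mode used above. */
	int main(void)
	{
		const struct {
			const char *name;
			int levels, index_bits;
		} modes[] = {
			{ "Sv32x4", 2, 10 },
			{ "Sv39x4", 3,  9 },
			{ "Sv48x4", 4,  9 },
		};

		for (int i = 0; i < 3; i++)
			printf("%s: %d-bit guest physical address space\n",
			       modes[i].name,
			       12 + modes[i].levels * modes[i].index_bits + 2);

		return 0;	/* prints 34, 41 and 50 */
	}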