From mboxrd@z Thu Jan 1 00:00:00 1970
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Alexandre Ghiti, Andrew Jones, Andrew Morton, Anup Patel,
    Atish Patra, Björn Töpel, Suzuki K Poulose, Will Deacon, Marc Zyngier,
    Sean Christopherson, linux-coco@lists.linux.dev, Dylan Reid,
    abrestic@rivosinc.com, Samuel Ortiz, Christoph Hellwig, Conor Dooley,
    Greg Kroah-Hartman, Guo Ren, Heiko Stuebner, Jiri Slaby,
    kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org,
    linux-riscv@lists.infradead.org,
    Mayuresh Chitale, Palmer Dabbelt, Paolo Bonzini, Paul Walmsley,
    Rajnesh Kanwal, Uladzislau Rezki
Subject: [RFC 12/48] RISC-V: KVM: Exit to the user space for trap redirection
Date: Wed, 19 Apr 2023 15:16:40 -0700
Message-Id: <20230419221716.3603068-13-atishp@rivosinc.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>
References: <20230419221716.3603068-1-atishp@rivosinc.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently, trap redirection to the guest happens in the following cases:

1. Illegal instruction trap
2. Virtual instruction trap
3. Unsuccessful unpriv read

Allowing the host to cause traps in the TVM directly is problematic, and
the TSM doesn't support trap redirection yet. Ideally, the host should
never end up in one of these situations where it has to redirect a trap.
If it does happen, exit to userspace with an error, as the trap can't be
forwarded to the TVM. If any use case arises in the future, it has to be
coordinated through the TSM.

Signed-off-by: Atish Patra
---
 arch/riscv/kvm/vcpu_exit.c |  9 ++++++++-
 arch/riscv/kvm/vcpu_insn.c | 17 +++++++++++++++++
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c
index 4ea101a..0d0c895 100644
--- a/arch/riscv/kvm/vcpu_exit.c
+++ b/arch/riscv/kvm/vcpu_exit.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include

 static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run,
			     struct kvm_cpu_trap *trap)
@@ -135,8 +136,14 @@ unsigned long kvm_riscv_vcpu_unpriv_read(struct kvm_vcpu *vcpu,
 void kvm_riscv_vcpu_trap_redirect(struct kvm_vcpu *vcpu,
				  struct kvm_cpu_trap *trap)
 {
-	unsigned long vsstatus = csr_read(CSR_VSSTATUS);
+	unsigned long vsstatus;

+	if (is_cove_vcpu(vcpu)) {
+		kvm_err("RISC-V KVM does not support redirect to CoVE guest yet\n");
+		return;
+	}
+
+	vsstatus = csr_read(CSR_VSSTATUS);
 	/* Change Guest SSTATUS.SPP bit */
 	vsstatus &= ~SR_SPP;
 	if (vcpu->arch.guest_context.sstatus & SR_SPP)
diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c
index 7a6abed..331489f 100644
--- a/arch/riscv/kvm/vcpu_insn.c
+++ b/arch/riscv/kvm/vcpu_insn.c
@@ -6,6 +6,7 @@

 #include
 #include
+#include

 #define INSN_OPCODE_MASK	0x007c
 #define INSN_OPCODE_SHIFT	2
@@ -153,6 +154,10 @@ static int truly_illegal_insn(struct kvm_vcpu *vcpu, struct kvm_run *run,
 {
	struct kvm_cpu_trap utrap = { 0 };

+	/* The host cannot redirect any illegal instruction trap to the TVM */
+	if (unlikely(is_cove_vcpu(vcpu)))
+		return -EPERM;
+
	/* Redirect trap to Guest VCPU */
	utrap.sepc = vcpu->arch.guest_context.sepc;
	utrap.scause = EXC_INST_ILLEGAL;
@@ -169,6 +174,10 @@ static int truly_virtual_insn(struct kvm_vcpu *vcpu, struct kvm_run *run,
 {
	struct kvm_cpu_trap utrap = { 0 };

+	/* The host cannot redirect any virtual instruction trap to the TVM */
+	if (unlikely(is_cove_vcpu(vcpu)))
+		return -EPERM;
+
	/* Redirect trap to Guest VCPU */
	utrap.sepc = vcpu->arch.guest_context.sepc;
	utrap.scause = EXC_VIRTUAL_INST_FAULT;
@@ -417,6 +426,10 @@ int kvm_riscv_vcpu_virtual_insn(struct kvm_vcpu *vcpu, struct kvm_run *run,
	if (unlikely(INSN_IS_16BIT(insn))) {
		if (insn == 0) {
			ct = &vcpu->arch.guest_context;
+
+			if (unlikely(is_cove_vcpu(vcpu)))
+				return -EPERM;
+
			insn = kvm_riscv_vcpu_unpriv_read(vcpu, true,
							  ct->sepc,
							  &utrap);
@@ -469,6 +482,8 @@ int kvm_riscv_vcpu_mmio_load(struct kvm_vcpu *vcpu, struct kvm_run *run,
		insn = htinst | INSN_16BIT_MASK;
		insn_len = (htinst & BIT(1)) ? INSN_LEN(insn) : 2;
	} else {
+		if (unlikely(is_cove_vcpu(vcpu)))
+			return -EFAULT;
		/*
		 * Bit[0] == 0 implies trapped instruction value is
		 * zero or special value.
@@ -595,6 +610,8 @@ int kvm_riscv_vcpu_mmio_store(struct kvm_vcpu *vcpu, struct kvm_run *run,
		insn = htinst | INSN_16BIT_MASK;
		insn_len = (htinst & BIT(1)) ? INSN_LEN(insn) : 2;
	} else {
+		if (unlikely(is_cove_vcpu(vcpu)))
+			return -EFAULT;
		/*
		 * Bit[0] == 0 implies trapped instruction value is
		 * zero or special value.
--
2.25.1
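
[Editor's note, not part of the patch] As a rough illustration of what this
change means for userspace, the sketch below assumes the usual KVM behaviour
that a negative return from the vcpu run path makes the KVM_RUN ioctl fail
with errno set accordingly (EPERM/EFAULT here). The helper name
tvm_vcpu_run_loop and the destroy-on-error policy are hypothetical, purely
for illustration; they are not defined by this series.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical VMM-side run loop for a CoVE TVM vCPU. */
static void tvm_vcpu_run_loop(int vcpu_fd, struct kvm_run *run)
{
	for (;;) {
		if (ioctl(vcpu_fd, KVM_RUN, 0) < 0) {
			if (errno == EINTR)
				continue;
			/*
			 * EPERM: illegal/virtual instruction trap that KVM
			 * cannot redirect into the TVM.
			 * EFAULT: MMIO access without a transformed htinst,
			 * which would require an unpriv read of TVM memory.
			 * Neither can be recovered, so tear the TVM down.
			 */
			if (errno == EPERM || errno == EFAULT) {
				fprintf(stderr,
					"unredirectable trap for TVM vCPU, destroying TVM\n");
				exit(EXIT_FAILURE);
			}
			perror("KVM_RUN");
			exit(EXIT_FAILURE);
		}

		/* Normal exits are still reported through kvm_run. */
		switch (run->exit_reason) {
		case KVM_EXIT_MMIO:
			/* emulate the MMIO access here */
			break;
		default:
			break;
		}
	}
}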