From: Nicholas Piggin <npiggin@gmail.com>
To: kvm-ppc@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org, Nicholas Piggin, Fabiano Rosas
Subject: [PATCH v6 31/48] KVM: PPC: Book3S HV P9: inline kvmhv_load_hv_regs_and_go into __kvmhv_vcpu_entry_p9
Date: Mon, 5 Apr 2021 11:19:31 +1000
Message-Id: <20210405011948.675354-32-npiggin@gmail.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20210405011948.675354-1-npiggin@gmail.com>
References: <20210405011948.675354-1-npiggin@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Now that the initial C implementation is done, inline more HV code to
make rearranging things easier. Also rename __kvmhv_vcpu_entry_p9 to
drop the leading underscores, as it is now C and is a more complete
vcpu entry.

Reviewed-by: Fabiano Rosas
Signed-off-by: Nicholas Piggin
---
 arch/powerpc/include/asm/kvm_book3s_64.h |   2 +-
 arch/powerpc/kvm/book3s_hv.c             | 192 +----------------------
 arch/powerpc/kvm/book3s_hv_interrupt.c   | 179 ++++++++++++++++++++-
 3 files changed, 180 insertions(+), 193 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
index c214bcffb441..eaf3a562bf1e 100644
--- a/arch/powerpc/include/asm/kvm_book3s_64.h
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -153,7 +153,7 @@ static inline bool kvmhv_vcpu_is_radix(struct kvm_vcpu *vcpu)
 	return radix;
 }
 
-int __kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu);
+int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr);
 
 #define KVM_DEFAULT_HPT_ORDER	24	/* 16MB HPT by default */
 #endif
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 9a1476daca8c..d6eecedaa5a5 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3498,194 +3498,6 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
 	trace_kvmppc_run_core(vc, 1);
 }
 
-static void switch_mmu_to_guest_radix(struct kvm *kvm, struct kvm_vcpu *vcpu, u64 lpcr)
-{
-	struct kvmppc_vcore *vc = vcpu->arch.vcore;
-	struct kvm_nested_guest *nested = vcpu->arch.nested;
-	u32 lpid;
-
-	lpid = nested ? nested->shadow_lpid : kvm->arch.lpid;
-
-	/*
-	 * All the isync()s are overkill but trivially follow the ISA
-	 * requirements. Some can likely be replaced with justification
-	 * comment for why they are not needed.
-	 */
-	isync();
-	mtspr(SPRN_LPID, lpid);
-	isync();
-	mtspr(SPRN_LPCR, lpcr);
-	isync();
-	mtspr(SPRN_PID, vcpu->arch.pid);
-	isync();
-
-	/* TLBIEL must have LPIDR set, so set guest LPID before flushing. */
-	kvmppc_check_need_tlb_flush(kvm, vc->pcpu, nested);
-}
-
-static void switch_mmu_to_host_radix(struct kvm *kvm, u32 pid)
-{
-	isync();
-	mtspr(SPRN_PID, pid);
-	isync();
-	mtspr(SPRN_LPID, kvm->arch.host_lpid);
-	isync();
-	mtspr(SPRN_LPCR, kvm->arch.host_lpcr);
-	isync();
-}
-
-/*
- * Load up hypervisor-mode registers on P9.
- */
-static int kvmhv_load_hv_regs_and_go(struct kvm_vcpu *vcpu, u64 time_limit,
-				     unsigned long lpcr)
-{
-	struct kvm *kvm = vcpu->kvm;
-	struct kvmppc_vcore *vc = vcpu->arch.vcore;
-	s64 hdec;
-	u64 tb, purr, spurr;
-	int trap;
-	unsigned long host_hfscr = mfspr(SPRN_HFSCR);
-	unsigned long host_ciabr = mfspr(SPRN_CIABR);
-	unsigned long host_dawr0 = mfspr(SPRN_DAWR0);
-	unsigned long host_dawrx0 = mfspr(SPRN_DAWRX0);
-	unsigned long host_psscr = mfspr(SPRN_PSSCR);
-	unsigned long host_pidr = mfspr(SPRN_PID);
-	unsigned long host_dawr1 = 0;
-	unsigned long host_dawrx1 = 0;
-
-	if (cpu_has_feature(CPU_FTR_DAWR1)) {
-		host_dawr1 = mfspr(SPRN_DAWR1);
-		host_dawrx1 = mfspr(SPRN_DAWRX1);
-	}
-
-	tb = mftb();
-	hdec = time_limit - tb;
-	if (hdec < 0)
-		return BOOK3S_INTERRUPT_HV_DECREMENTER;
-
-	if (vc->tb_offset) {
-		u64 new_tb = tb + vc->tb_offset;
-		mtspr(SPRN_TBU40, new_tb);
-		tb = mftb();
-		if ((tb & 0xffffff) < (new_tb & 0xffffff))
-			mtspr(SPRN_TBU40, new_tb + 0x1000000);
-		vc->tb_offset_applied = vc->tb_offset;
-	}
-
-	if (vc->pcr)
-		mtspr(SPRN_PCR, vc->pcr | PCR_MASK);
-	mtspr(SPRN_DPDES, vc->dpdes);
-	mtspr(SPRN_VTB, vc->vtb);
-
-	local_paca->kvm_hstate.host_purr = mfspr(SPRN_PURR);
-	local_paca->kvm_hstate.host_spurr = mfspr(SPRN_SPURR);
-	mtspr(SPRN_PURR, vcpu->arch.purr);
-	mtspr(SPRN_SPURR, vcpu->arch.spurr);
-
-	if (dawr_enabled()) {
-		mtspr(SPRN_DAWR0, vcpu->arch.dawr0);
-		mtspr(SPRN_DAWRX0, vcpu->arch.dawrx0);
-		if (cpu_has_feature(CPU_FTR_DAWR1)) {
-			mtspr(SPRN_DAWR1, vcpu->arch.dawr1);
-			mtspr(SPRN_DAWRX1, vcpu->arch.dawrx1);
-		}
-	}
-	mtspr(SPRN_CIABR, vcpu->arch.ciabr);
-	mtspr(SPRN_IC, vcpu->arch.ic);
-
-	mtspr(SPRN_PSSCR, vcpu->arch.psscr | PSSCR_EC |
-	      (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG));
-
-	mtspr(SPRN_HFSCR, vcpu->arch.hfscr);
-
-	mtspr(SPRN_SPRG0, vcpu->arch.shregs.sprg0);
-	mtspr(SPRN_SPRG1, vcpu->arch.shregs.sprg1);
-	mtspr(SPRN_SPRG2, vcpu->arch.shregs.sprg2);
-	mtspr(SPRN_SPRG3, vcpu->arch.shregs.sprg3);
-
-	mtspr(SPRN_AMOR, ~0UL);
-
-	switch_mmu_to_guest_radix(kvm, vcpu, lpcr);
-
-	/*
-	 * P9 suppresses the HDEC exception when LPCR[HDICE] = 0,
-	 * so set guest LPCR (with HDICE) before writing HDEC.
-	 */
-	mtspr(SPRN_HDEC, hdec);
-
-	mtspr(SPRN_SRR0, vcpu->arch.shregs.srr0);
-	mtspr(SPRN_SRR1, vcpu->arch.shregs.srr1);
-
-	trap = __kvmhv_vcpu_entry_p9(vcpu);
-
-	/* Advance host PURR/SPURR by the amount used by guest */
-	purr = mfspr(SPRN_PURR);
-	spurr = mfspr(SPRN_SPURR);
-	mtspr(SPRN_PURR, local_paca->kvm_hstate.host_purr +
-	      purr - vcpu->arch.purr);
-	mtspr(SPRN_SPURR, local_paca->kvm_hstate.host_spurr +
-	      spurr - vcpu->arch.spurr);
-	vcpu->arch.purr = purr;
-	vcpu->arch.spurr = spurr;
-
-	vcpu->arch.ic = mfspr(SPRN_IC);
-	vcpu->arch.pid = mfspr(SPRN_PID);
-	vcpu->arch.psscr = mfspr(SPRN_PSSCR) & PSSCR_GUEST_VIS;
-
-	vcpu->arch.shregs.sprg0 = mfspr(SPRN_SPRG0);
-	vcpu->arch.shregs.sprg1 = mfspr(SPRN_SPRG1);
-	vcpu->arch.shregs.sprg2 = mfspr(SPRN_SPRG2);
-	vcpu->arch.shregs.sprg3 = mfspr(SPRN_SPRG3);
-
-	/* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */
-	mtspr(SPRN_PSSCR, host_psscr |
-	      (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG));
-	mtspr(SPRN_HFSCR, host_hfscr);
-	mtspr(SPRN_CIABR, host_ciabr);
-	mtspr(SPRN_DAWR0, host_dawr0);
-	mtspr(SPRN_DAWRX0, host_dawrx0);
-	if (cpu_has_feature(CPU_FTR_DAWR1)) {
-		mtspr(SPRN_DAWR1, host_dawr1);
-		mtspr(SPRN_DAWRX1, host_dawrx1);
-	}
-
-	/*
-	 * Since this is radix, do a eieio; tlbsync; ptesync sequence in
-	 * case we interrupted the guest between a tlbie and a ptesync.
-	 */
-	asm volatile("eieio; tlbsync; ptesync");
-
-	/*
-	 * cp_abort is required if the processor supports local copy-paste
-	 * to clear the copy buffer that was under control of the guest.
-	 */
-	if (cpu_has_feature(CPU_FTR_ARCH_31))
-		asm volatile(PPC_CP_ABORT);
-
-	vc->dpdes = mfspr(SPRN_DPDES);
-	vc->vtb = mfspr(SPRN_VTB);
-	mtspr(SPRN_DPDES, 0);
-	if (vc->pcr)
-		mtspr(SPRN_PCR, PCR_MASK);
-
-	if (vc->tb_offset_applied) {
-		u64 new_tb = mftb() - vc->tb_offset_applied;
-		mtspr(SPRN_TBU40, new_tb);
-		tb = mftb();
-		if ((tb & 0xffffff) < (new_tb & 0xffffff))
-			mtspr(SPRN_TBU40, new_tb + 0x1000000);
-		vc->tb_offset_applied = 0;
-	}
-
-	/* HDEC must be at least as large as DEC, so decrementer_max fits */
-	mtspr(SPRN_HDEC, decrementer_max);
-
-	switch_mmu_to_host_radix(kvm, host_pidr);
-
-	return trap;
-}
-
 static inline bool hcall_is_xics(unsigned long req)
 {
 	return req == H_EOI || req == H_CPPR || req == H_IPI ||
@@ -3782,7 +3594,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
 	 * We need to save and restore the guest visible part of the
 	 * psscr (i.e. using SPRN_PSSCR_PR) since the hypervisor
 	 * doesn't do this for us. Note only required if pseries since
-	 * this is done in kvmhv_load_hv_regs_and_go() below otherwise.
+	 * this is done in kvmhv_vcpu_entry_p9() below otherwise.
	 */
	unsigned long host_psscr;
	/* call our hypervisor to load up HV regs and go */
@@ -3820,7 +3632,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
 		}
 	} else {
 		kvmppc_xive_push_vcpu(vcpu);
-		trap = kvmhv_load_hv_regs_and_go(vcpu, time_limit, lpcr);
+		trap = kvmhv_vcpu_entry_p9(vcpu, time_limit, lpcr);
 		if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested &&
 		    !(vcpu->arch.shregs.msr & MSR_PR)) {
 			unsigned long req = kvmppc_get_gpr(vcpu, 3);
diff --git a/arch/powerpc/kvm/book3s_hv_interrupt.c b/arch/powerpc/kvm/book3s_hv_interrupt.c
index 3c507daed064..6fdd93936e16 100644
--- a/arch/powerpc/kvm/book3s_hv_interrupt.c
+++ b/arch/powerpc/kvm/book3s_hv_interrupt.c
@@ -55,6 +55,42 @@ static void __accumulate_time(struct kvm_vcpu *vcpu, struct kvmhv_tb_accumulator
 #define accumulate_time(vcpu, next) do {} while (0)
 #endif
 
+static void switch_mmu_to_guest_radix(struct kvm *kvm, struct kvm_vcpu *vcpu, u64 lpcr)
+{
+	struct kvmppc_vcore *vc = vcpu->arch.vcore;
+	struct kvm_nested_guest *nested = vcpu->arch.nested;
+	u32 lpid;
+
+	lpid = nested ? nested->shadow_lpid : kvm->arch.lpid;
+
+	/*
+	 * All the isync()s are overkill but trivially follow the ISA
+	 * requirements. Some can likely be replaced with justification
+	 * comment for why they are not needed.
+	 */
+	isync();
+	mtspr(SPRN_LPID, lpid);
+	isync();
+	mtspr(SPRN_LPCR, lpcr);
+	isync();
+	mtspr(SPRN_PID, vcpu->arch.pid);
+	isync();
+
+	/* TLBIEL must have LPIDR set, so set guest LPID before flushing. */
+	kvmppc_check_need_tlb_flush(kvm, vc->pcpu, nested);
+}
+
+static void switch_mmu_to_host_radix(struct kvm *kvm, u32 pid)
+{
+	isync();
+	mtspr(SPRN_PID, pid);
+	isync();
+	mtspr(SPRN_LPID, kvm->arch.host_lpid);
+	isync();
+	mtspr(SPRN_LPCR, kvm->arch.host_lpcr);
+	isync();
+}
+
 static inline void mfslb(unsigned int idx, u64 *slbee, u64 *slbev)
 {
 	asm volatile("slbmfev %0,%1" : "=r" (*slbev) : "r" (idx));
@@ -89,11 +125,86 @@ static void radix_clear_slb(void)
 	}
 }
 
-int __kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu)
+int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr)
 {
+	struct kvm *kvm = vcpu->kvm;
+	struct kvmppc_vcore *vc = vcpu->arch.vcore;
+	s64 hdec;
+	u64 tb, purr, spurr;
 	u64 *exsave;
 	unsigned long msr = mfmsr();
 	int trap;
+	unsigned long host_hfscr = mfspr(SPRN_HFSCR);
+	unsigned long host_ciabr = mfspr(SPRN_CIABR);
+	unsigned long host_dawr0 = mfspr(SPRN_DAWR0);
+	unsigned long host_dawrx0 = mfspr(SPRN_DAWRX0);
+	unsigned long host_psscr = mfspr(SPRN_PSSCR);
+	unsigned long host_pidr = mfspr(SPRN_PID);
+	unsigned long host_dawr1 = 0;
+	unsigned long host_dawrx1 = 0;
+
+	if (cpu_has_feature(CPU_FTR_DAWR1)) {
+		host_dawr1 = mfspr(SPRN_DAWR1);
+		host_dawrx1 = mfspr(SPRN_DAWRX1);
+	}
+
+	tb = mftb();
+	hdec = time_limit - tb;
+	if (hdec < 0)
+		return BOOK3S_INTERRUPT_HV_DECREMENTER;
+
+	if (vc->tb_offset) {
+		u64 new_tb = tb + vc->tb_offset;
+		mtspr(SPRN_TBU40, new_tb);
+		tb = mftb();
+		if ((tb & 0xffffff) < (new_tb & 0xffffff))
+			mtspr(SPRN_TBU40, new_tb + 0x1000000);
+		vc->tb_offset_applied = vc->tb_offset;
+	}
+
+	if (vc->pcr)
+		mtspr(SPRN_PCR, vc->pcr | PCR_MASK);
+	mtspr(SPRN_DPDES, vc->dpdes);
+	mtspr(SPRN_VTB, vc->vtb);
+
+	local_paca->kvm_hstate.host_purr = mfspr(SPRN_PURR);
+	local_paca->kvm_hstate.host_spurr = mfspr(SPRN_SPURR);
+	mtspr(SPRN_PURR, vcpu->arch.purr);
+	mtspr(SPRN_SPURR, vcpu->arch.spurr);
+
+	if (dawr_enabled()) {
+		mtspr(SPRN_DAWR0, vcpu->arch.dawr0);
+		mtspr(SPRN_DAWRX0, vcpu->arch.dawrx0);
+		if (cpu_has_feature(CPU_FTR_DAWR1)) {
+			mtspr(SPRN_DAWR1, vcpu->arch.dawr1);
+			mtspr(SPRN_DAWRX1, vcpu->arch.dawrx1);
+		}
+	}
+	mtspr(SPRN_CIABR, vcpu->arch.ciabr);
+	mtspr(SPRN_IC, vcpu->arch.ic);
+
+	mtspr(SPRN_PSSCR, vcpu->arch.psscr | PSSCR_EC |
+	      (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG));
+
+	mtspr(SPRN_HFSCR, vcpu->arch.hfscr);
+
+	mtspr(SPRN_SPRG0, vcpu->arch.shregs.sprg0);
+	mtspr(SPRN_SPRG1, vcpu->arch.shregs.sprg1);
+	mtspr(SPRN_SPRG2, vcpu->arch.shregs.sprg2);
+	mtspr(SPRN_SPRG3, vcpu->arch.shregs.sprg3);
+
+	mtspr(SPRN_AMOR, ~0UL);
+
+	switch_mmu_to_guest_radix(kvm, vcpu, lpcr);
+
+	/*
+	 * P9 suppresses the HDEC exception when LPCR[HDICE] = 0,
+	 * so set guest LPCR (with HDICE) before writing HDEC.
+	 */
+	mtspr(SPRN_HDEC, hdec);
+
+	mtspr(SPRN_SRR0, vcpu->arch.shregs.srr0);
+	mtspr(SPRN_SRR1, vcpu->arch.shregs.srr1);
 
 	start_timing(vcpu, &vcpu->arch.rm_entry);
 
@@ -213,6 +324,70 @@ int __kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu)
 
 	end_timing(vcpu);
 
+	/* Advance host PURR/SPURR by the amount used by guest */
+	purr = mfspr(SPRN_PURR);
+	spurr = mfspr(SPRN_SPURR);
+	mtspr(SPRN_PURR, local_paca->kvm_hstate.host_purr +
+	      purr - vcpu->arch.purr);
+	mtspr(SPRN_SPURR, local_paca->kvm_hstate.host_spurr +
+	      spurr - vcpu->arch.spurr);
+	vcpu->arch.purr = purr;
+	vcpu->arch.spurr = spurr;
+
+	vcpu->arch.ic = mfspr(SPRN_IC);
+	vcpu->arch.pid = mfspr(SPRN_PID);
+	vcpu->arch.psscr = mfspr(SPRN_PSSCR) & PSSCR_GUEST_VIS;
+
+	vcpu->arch.shregs.sprg0 = mfspr(SPRN_SPRG0);
+	vcpu->arch.shregs.sprg1 = mfspr(SPRN_SPRG1);
+	vcpu->arch.shregs.sprg2 = mfspr(SPRN_SPRG2);
+	vcpu->arch.shregs.sprg3 = mfspr(SPRN_SPRG3);
+
+	/* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */
+	mtspr(SPRN_PSSCR, host_psscr |
+	      (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG));
+	mtspr(SPRN_HFSCR, host_hfscr);
+	mtspr(SPRN_CIABR, host_ciabr);
+	mtspr(SPRN_DAWR0, host_dawr0);
+	mtspr(SPRN_DAWRX0, host_dawrx0);
+	if (cpu_has_feature(CPU_FTR_DAWR1)) {
+		mtspr(SPRN_DAWR1, host_dawr1);
+		mtspr(SPRN_DAWRX1, host_dawrx1);
+	}
+
+	/*
+	 * Since this is radix, do a eieio; tlbsync; ptesync sequence in
+	 * case we interrupted the guest between a tlbie and a ptesync.
+	 */
+	asm volatile("eieio; tlbsync; ptesync");
+
+	/*
+	 * cp_abort is required if the processor supports local copy-paste
+	 * to clear the copy buffer that was under control of the guest.
+	 */
+	if (cpu_has_feature(CPU_FTR_ARCH_31))
+		asm volatile(PPC_CP_ABORT);
+
+	vc->dpdes = mfspr(SPRN_DPDES);
+	vc->vtb = mfspr(SPRN_VTB);
+	mtspr(SPRN_DPDES, 0);
+	if (vc->pcr)
+		mtspr(SPRN_PCR, PCR_MASK);
+
+	if (vc->tb_offset_applied) {
+		u64 new_tb = mftb() - vc->tb_offset_applied;
+		mtspr(SPRN_TBU40, new_tb);
+		tb = mftb();
+		if ((tb & 0xffffff) < (new_tb & 0xffffff))
+			mtspr(SPRN_TBU40, new_tb + 0x1000000);
+		vc->tb_offset_applied = 0;
+	}
+
+	/* HDEC must be at least as large as DEC, so decrementer_max fits */
+	mtspr(SPRN_HDEC, decrementer_max);
+
+	switch_mmu_to_host_radix(kvm, host_pidr);
+
 	return trap;
 }
-EXPORT_SYMBOL_GPL(__kvmhv_vcpu_entry_p9);
+EXPORT_SYMBOL_GPL(kvmhv_vcpu_entry_p9);
-- 
2.23.0
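
A note on the SPRN_TBU40 sequence that this patch moves, unchanged, into
kvmhv_vcpu_entry_p9(): TBU40 writes only the upper 40 bits of the timebase
while the low 24 bits keep ticking, and KVM keeps the vcore tb_offset
aligned to a multiple of 1 << 24, so the low bits of new_tb match the low
bits of the timebase as it was read. If the low 24 bits wrap between that
read and the mtspr, the wrap's carry into the upper bits is overwritten by
the write and has to be re-applied by hand, which is what the 0x1000000
fixup does. Below is a minimal standalone sketch of that carry handling;
simulated_tb, tick(), mtspr_tbu40() and apply_tb_offset() are illustrative
stand-ins, not kernel interfaces.

#include <stdint.h>
#include <stdio.h>

static uint64_t simulated_tb;

/* Stand-in for mftb(): read the full 64-bit timebase. */
static uint64_t sim_mftb(void)
{
	return simulated_tb;
}

/* Stand-in for mtspr(SPRN_TBU40, val): only bits 63..24 are written. */
static void mtspr_tbu40(uint64_t val)
{
	simulated_tb = (val & ~0xffffffULL) | (simulated_tb & 0xffffffULL);
}

/* Model the timebase ticking between instructions. */
static void tick(uint64_t n)
{
	simulated_tb += n;
}

static void apply_tb_offset(uint64_t tb_offset)
{
	uint64_t tb = sim_mftb();
	uint64_t new_tb = tb + tb_offset;

	tick(5);		/* time passes; the low 24 bits may wrap */
	mtspr_tbu40(new_tb);	/* overwrites upper bits, losing any wrap carry */
	tb = sim_mftb();
	if ((tb & 0xffffff) < (new_tb & 0xffffff))
		mtspr_tbu40(new_tb + 0x1000000);	/* restore the lost carry */
}

int main(void)
{
	/* Low 24 bits sit just below a wrap so the fixup path triggers. */
	simulated_tb = 0x123456fffffdULL;
	apply_tb_offset(0x1000000000ULL);	/* offset is a multiple of 1 << 24 */
	printf("tb = 0x%llx\n", (unsigned long long)sim_mftb());
	return 0;
}

Running the sketch ends with tb = 0x124457000002, i.e. the original value
plus the offset plus the five simulated ticks, which is what the fixup is
there to guarantee.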
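
The exit-time PURR/SPURR handling the patch moves is a plain accounting
identity: the host value is advanced by exactly what the guest consumed
(the current PURR minus the value loaded for the guest at entry), so host
utilisation accounting neither double-counts nor loses the guest interval.
A small self-contained sketch of that arithmetic, with illustrative
stand-in names rather than kernel interfaces:

#include <assert.h>
#include <stdint.h>

/* PURR value to restore for the host when the guest stops running. */
static uint64_t host_purr_at_exit(uint64_t host_purr_at_entry,
				  uint64_t guest_purr_at_entry,
				  uint64_t purr_now)
{
	/* purr_now - guest_purr_at_entry is what the guest consumed. */
	return host_purr_at_entry + (purr_now - guest_purr_at_entry);
}

int main(void)
{
	/* Guest PURR advanced from 400 to 460 while it ran: 60 units used. */
	assert(host_purr_at_exit(1000, 400, 460) == 1060);
	return 0;
}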