* [PATCH v3 0/2] Support for H_RPT_INVALIDATE in PowerPC KVM
@ 2021-01-05 9:05 Bharata B Rao
From: Bharata B Rao @ 2021-01-05 9:05 UTC (permalink / raw)
To: kvm-ppc, linuxppc-dev
Cc: farosas, aneesh.kumar, npiggin, Bharata B Rao, david
This patchset adds support for the new hcall H_RPT_INVALIDATE
and replaces the nested TLB flush calls with this new hcall
when support for it exists.
Changes in v3:
-------------
- Reused the tlb flush routines from radix_tlb.c
- Ensured fixups are taken care of after the above change.
v2: https://lore.kernel.org/linuxppc-dev/20201217033335.GD310465@yekko.fritz.box/T/#t
H_RPT_INVALIDATE
================
Syntax:
int64 /* H_Success: Return code on successful completion */
/* H_Busy - repeat the call with the same parameters */
/* H_Parameter, H_P2, H_P3, H_P4, H_P5 : Invalid parameters */
hcall(const uint64 H_RPT_INVALIDATE, /* Invalidate RPT translation lookaside information */
uint64 pid, /* PID/LPID to invalidate */
uint64 target, /* Invalidation target */
uint64 type, /* Type of lookaside information */
uint64 pageSizes, /* Page sizes */
uint64 start, /* Start of Effective Address (EA) range (inclusive) */
uint64 end) /* End of EA range (exclusive) */
Invalidation targets (target)
-----------------------------
Core MMU 0x01 /* All virtual processors in the partition */
Core local MMU 0x02 /* Current virtual processor */
Nest MMU 0x04 /* All nest/accelerator agents in use by the partition */
A combination of the above can be specified, except that core MMU and
core local MMU are mutually exclusive.
Type of translation to invalidate (type)
---------------------------------------
NESTED 0x0001 /* Invalidate nested guest partition-scope */
TLB 0x0002 /* Invalidate TLB */
PWC 0x0004 /* Invalidate Page Walk Cache */
PRT 0x0008 /* Invalidate Process Table Entries if NESTED is clear */
PAT 0x0008 /* Invalidate Partition Table Entries if NESTED is set */
A combination of the above can be specified.
Page size mask (pageSizes)
--------------------------
4K 0x01
64K 0x02
2M 0x04
1G 0x08
All sizes (-1UL)
A combination of the above can be specified;
all page sizes can be selected with -1.
Semantics: Invalidate radix tree lookaside information
matching the parameters given.
* Return H_P2, H_P3 or H_P4 if the target, type, or pageSizes parameter,
respectively, is outside the defined values.
* Return H_PARAMETER if NESTED is set and pid is not a valid nested
LPID allocated to this partition.
* Return H_P5 if (start, end) doesn't form a valid range. Start and end
should be valid quadrant addresses and end > start.
* Return H_NotSupported if the partition is not running in radix
translation mode.
* May invalidate more translation information than requested.
* If start = 0 and end = -1, set the range to cover all valid addresses.
Otherwise, start and end should be aligned to 4kB (lower 12 bits clear).
* If NESTED is clear, then invalidate process-scoped lookaside information.
Otherwise, pid specifies a nested LPID, and the invalidation is performed
on the nested guest partition table and nested guest partition-scope real
addresses.
* If pid = 0 and NESTED is clear, then valid addresses are quadrant 3 and
quadrant 0 spaces; otherwise valid addresses are quadrant 0.
* Only pages which are fully covered by the range are invalidated.
Those which are partially covered are considered outside the invalidation
range, which allows a caller to optimally invalidate ranges that may
contain mixed page sizes.
* Return H_SUCCESS on success.
Bharata B Rao (2):
KVM: PPC: Book3S HV: Add support for H_RPT_INVALIDATE
KVM: PPC: Book3S HV: Use H_RPT_INVALIDATE in nested KVM
Documentation/virt/kvm/api.rst | 17 +++
.../include/asm/book3s/64/tlbflush-radix.h | 18 +++
arch/powerpc/include/asm/kvm_book3s.h | 3 +
arch/powerpc/include/asm/mmu_context.h | 7 ++
arch/powerpc/kvm/book3s_64_mmu_radix.c | 27 ++++-
arch/powerpc/kvm/book3s_hv.c | 56 +++++++++
arch/powerpc/kvm/book3s_hv_nested.c | 108 +++++++++++++++++-
arch/powerpc/kvm/powerpc.c | 3 +
arch/powerpc/mm/book3s64/radix_tlb.c | 24 +++-
include/uapi/linux/kvm.h | 1 +
10 files changed, 253 insertions(+), 11 deletions(-)
--
2.26.2
* [PATCH v3 1/2] KVM: PPC: Book3S HV: Add support for H_RPT_INVALIDATE
From: Bharata B Rao @ 2021-01-05 9:05 UTC (permalink / raw)
To: kvm-ppc, linuxppc-dev
Cc: farosas, aneesh.kumar, npiggin, Bharata B Rao, david
Implement the H_RPT_INVALIDATE hcall and add the KVM capability
KVM_CAP_PPC_RPT_INVALIDATE to indicate its availability.
This hcall does two types of TLB invalidations:
1. Process-scoped invalidations for guests with LPCR[GTSE]=0.
This path is currently unused, as GTSE is not usually
disabled in KVM.
2. Partition-scoped invalidations that an L1 hypervisor does on
behalf of an L2 guest. This replaces the uses of the existing
hcall H_TLB_INVALIDATE.
Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
---
Documentation/virt/kvm/api.rst | 17 ++++
.../include/asm/book3s/64/tlbflush-radix.h | 18 ++++
arch/powerpc/include/asm/kvm_book3s.h | 3 +
arch/powerpc/include/asm/mmu_context.h | 7 ++
arch/powerpc/kvm/book3s_hv.c | 56 +++++++++++
arch/powerpc/kvm/book3s_hv_nested.c | 96 +++++++++++++++++++
arch/powerpc/kvm/powerpc.c | 3 +
arch/powerpc/mm/book3s64/radix_tlb.c | 24 ++++-
include/uapi/linux/kvm.h | 1 +
9 files changed, 221 insertions(+), 4 deletions(-)
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 70254eaa5229..acf1fb4a0e1d 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6032,6 +6032,23 @@ KVM_EXIT_X86_RDMSR and KVM_EXIT_X86_WRMSR exit notifications which user space
can then handle to implement model specific MSR handling and/or user notifications
to inform a user that an MSR was not handled.
+7.22 KVM_CAP_PPC_RPT_INVALIDATE
+-------------------------------
+
+:Capability: KVM_CAP_PPC_RPT_INVALIDATE
+:Architectures: ppc
+:Type: vm
+
+This capability indicates that the kernel is capable of handling
+the H_RPT_INVALIDATE hcall.
+
+In order to enable the use of H_RPT_INVALIDATE in the guest,
+user space might have to advertise it to the guest. For example,
+an IBM pSeries (sPAPR) guest starts using it if "hcall-rpt-invalidate" is
+present in the "ibm,hypertas-functions" device-tree property.
+
+This capability is always enabled.
+
8. Other capabilities.
======================
diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h b/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
index 94439e0cefc9..aace7e9b2397 100644
--- a/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
+++ b/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
@@ -4,6 +4,10 @@
#include <asm/hvcall.h>
+#define RIC_FLUSH_TLB 0
+#define RIC_FLUSH_PWC 1
+#define RIC_FLUSH_ALL 2
+
struct vm_area_struct;
struct mm_struct;
struct mmu_gather;
@@ -21,6 +25,20 @@ static inline u64 psize_to_rpti_pgsize(unsigned long psize)
return H_RPTI_PAGE_ALL;
}
+static inline int rpti_pgsize_to_psize(unsigned long page_size)
+{
+ if (page_size == H_RPTI_PAGE_4K)
+ return MMU_PAGE_4K;
+ if (page_size == H_RPTI_PAGE_64K)
+ return MMU_PAGE_64K;
+ if (page_size == H_RPTI_PAGE_2M)
+ return MMU_PAGE_2M;
+ if (page_size == H_RPTI_PAGE_1G)
+ return MMU_PAGE_1G;
+ else
+ return MMU_PAGE_64K; /* Default */
+}
+
static inline int mmu_get_ap(int psize)
{
return mmu_psize_defs[psize].ap;
diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index d32ec9ae73bd..0f1c5fa6e8ce 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -298,6 +298,9 @@ void kvmhv_set_ptbl_entry(unsigned int lpid, u64 dw0, u64 dw1);
void kvmhv_release_all_nested(struct kvm *kvm);
long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu);
long kvmhv_do_nested_tlbie(struct kvm_vcpu *vcpu);
+long kvmhv_h_rpti_nested(struct kvm_vcpu *vcpu, unsigned long lpid,
+ unsigned long type, unsigned long pg_sizes,
+ unsigned long start, unsigned long end);
int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu,
u64 time_limit, unsigned long lpcr);
void kvmhv_save_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr);
diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index d5821834dba9..17b2995a55ed 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -124,8 +124,15 @@ static inline bool need_extra_context(struct mm_struct *mm, unsigned long ea)
#if defined(CONFIG_KVM_BOOK3S_HV_POSSIBLE) && defined(CONFIG_PPC_RADIX_MMU)
extern void radix_kvm_prefetch_workaround(struct mm_struct *mm);
+void do_h_rpt_invalidate(unsigned long pid, unsigned long type,
+ unsigned long page_size, unsigned long psize,
+ unsigned long start, unsigned long end);
#else
static inline void radix_kvm_prefetch_workaround(struct mm_struct *mm) { }
+static inline void do_h_rpt_invalidate(unsigned long pid, unsigned long type,
+ unsigned long page_size,
+ unsigned long psize, unsigned long start,
+ unsigned long end) { }
#endif
extern void switch_cop(struct mm_struct *next);
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 6f612d240392..8df391b63330 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -904,6 +904,53 @@ static int kvmppc_get_yield_count(struct kvm_vcpu *vcpu)
return yield_count;
}
+static long kvmppc_h_rpt_invalidate(struct kvm_vcpu *vcpu,
+ unsigned long pid, unsigned long target,
+ unsigned long type, unsigned long pg_sizes,
+ unsigned long start, unsigned long end)
+{
+ unsigned long psize;
+
+ if (!kvm_is_radix(vcpu->kvm))
+ return H_UNSUPPORTED;
+
+ if (end < start)
+ return H_P5;
+
+ if (type & H_RPTI_TYPE_NESTED) {
+ if (!nesting_enabled(vcpu->kvm))
+ return H_FUNCTION;
+
+ /* Support only cores as target */
+ if (target != H_RPTI_TARGET_CMMU)
+ return H_P2;
+
+ return kvmhv_h_rpti_nested(vcpu, pid,
+ (type & ~H_RPTI_TYPE_NESTED),
+ pg_sizes, start, end);
+ }
+
+ if (pg_sizes & H_RPTI_PAGE_64K) {
+ psize = rpti_pgsize_to_psize(pg_sizes & H_RPTI_PAGE_64K);
+ do_h_rpt_invalidate(pid, type, (1UL << 16), psize,
+ start, end);
+ }
+
+ if (pg_sizes & H_RPTI_PAGE_2M) {
+ psize = rpti_pgsize_to_psize(pg_sizes & H_RPTI_PAGE_2M);
+ do_h_rpt_invalidate(pid, type, (1UL << 21), psize,
+ start, end);
+ }
+
+ if (pg_sizes & H_RPTI_PAGE_1G) {
+ psize = rpti_pgsize_to_psize(pg_sizes & H_RPTI_PAGE_1G);
+ do_h_rpt_invalidate(pid, type, (1UL << 30), psize,
+ start, end);
+ }
+
+ return H_SUCCESS;
+}
+
int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
{
unsigned long req = kvmppc_get_gpr(vcpu, 3);
@@ -1112,6 +1159,14 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
*/
ret = kvmppc_h_svm_init_abort(vcpu->kvm);
break;
+ case H_RPT_INVALIDATE:
+ ret = kvmppc_h_rpt_invalidate(vcpu, kvmppc_get_gpr(vcpu, 4),
+ kvmppc_get_gpr(vcpu, 5),
+ kvmppc_get_gpr(vcpu, 6),
+ kvmppc_get_gpr(vcpu, 7),
+ kvmppc_get_gpr(vcpu, 8),
+ kvmppc_get_gpr(vcpu, 9));
+ break;
default:
return RESUME_HOST;
@@ -1158,6 +1213,7 @@ static int kvmppc_hcall_impl_hv(unsigned long cmd)
case H_XIRR_X:
#endif
case H_PAGE_INIT:
+ case H_RPT_INVALIDATE:
return 1;
}
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 33b58549a9aa..40ed4eb80adb 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -1149,6 +1149,102 @@ long kvmhv_do_nested_tlbie(struct kvm_vcpu *vcpu)
return H_SUCCESS;
}
+static long do_tlb_invalidate_nested_tlb(struct kvm_vcpu *vcpu,
+ unsigned long lpid,
+ unsigned long page_size,
+ unsigned long ap,
+ unsigned long start,
+ unsigned long end)
+{
+ unsigned long addr = start;
+ int ret;
+
+ do {
+ ret = kvmhv_emulate_tlbie_tlb_addr(vcpu, lpid, ap,
+ get_epn(addr));
+ if (ret)
+ return ret;
+ addr += page_size;
+ } while (addr < end);
+
+ return ret;
+}
+
+static long do_tlb_invalidate_nested_all(struct kvm_vcpu *vcpu,
+ unsigned long lpid)
+{
+ struct kvm *kvm = vcpu->kvm;
+ struct kvm_nested_guest *gp;
+
+ gp = kvmhv_get_nested(kvm, lpid, false);
+ if (gp) {
+ kvmhv_emulate_tlbie_lpid(vcpu, gp, RIC_FLUSH_ALL);
+ kvmhv_put_nested(gp);
+ }
+ return H_SUCCESS;
+}
+
+long kvmhv_h_rpti_nested(struct kvm_vcpu *vcpu, unsigned long lpid,
+ unsigned long type, unsigned long pg_sizes,
+ unsigned long start, unsigned long end)
+{
+ struct kvm_nested_guest *gp;
+ long ret;
+ unsigned long psize, ap;
+
+ /*
+ * If L2 lpid isn't valid, we need to return H_PARAMETER.
+ *
+ * However, nested KVM issues a L2 lpid flush call when creating
+ * partition table entries for L2. This happens even before the
+ * corresponding shadow lpid is created in HV which happens in
+ * H_ENTER_NESTED call. Since we can't differentiate this case from
+ * the invalid case, we ignore such flush requests and return success.
+ */
+ gp = kvmhv_find_nested(vcpu->kvm, lpid);
+ if (!gp)
+ return H_SUCCESS;
+
+ if ((type & H_RPTI_TYPE_NESTED_ALL) == H_RPTI_TYPE_NESTED_ALL)
+ return do_tlb_invalidate_nested_all(vcpu, lpid);
+
+ if ((type & H_RPTI_TYPE_TLB) == H_RPTI_TYPE_TLB) {
+ if (pg_sizes & H_RPTI_PAGE_64K) {
+ psize = rpti_pgsize_to_psize(pg_sizes & H_RPTI_PAGE_64K);
+ ap = mmu_get_ap(psize);
+
+ ret = do_tlb_invalidate_nested_tlb(vcpu, lpid,
+ (1UL << 16),
+ ap, start, end);
+ if (ret)
+ return H_P4;
+ }
+
+ if (pg_sizes & H_RPTI_PAGE_2M) {
+ psize = rpti_pgsize_to_psize(pg_sizes & H_RPTI_PAGE_2M);
+ ap = mmu_get_ap(psize);
+
+ ret = do_tlb_invalidate_nested_tlb(vcpu, lpid,
+ (1UL << 21),
+ ap, start, end);
+ if (ret)
+ return H_P4;
+ }
+
+ if (pg_sizes & H_RPTI_PAGE_1G) {
+ psize = rpti_pgsize_to_psize(pg_sizes & H_RPTI_PAGE_1G);
+ ap = mmu_get_ap(psize);
+
+ ret = do_tlb_invalidate_nested_tlb(vcpu, lpid,
+ (1UL << 30),
+ ap, start, end);
+ if (ret)
+ return H_P4;
+ }
+ }
+ return H_SUCCESS;
+}
+
/* Used to convert a nested guest real address to a L1 guest real address */
static int kvmhv_translate_addr_nested(struct kvm_vcpu *vcpu,
struct kvm_nested_guest *gp,
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index cf52d26f49cd..5388cd4a206a 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -678,6 +678,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
r = hv_enabled && kvmppc_hv_ops->enable_svm &&
!kvmppc_hv_ops->enable_svm(NULL);
break;
+ case KVM_CAP_PPC_RPT_INVALIDATE:
+ r = 1;
+ break;
#endif
default:
r = 0;
diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c
index fb66d154b26c..c120a2ad4684 100644
--- a/arch/powerpc/mm/book3s64/radix_tlb.c
+++ b/arch/powerpc/mm/book3s64/radix_tlb.c
@@ -18,10 +18,6 @@
#include <asm/cputhreads.h>
#include <asm/plpar_wrappers.h>
-#define RIC_FLUSH_TLB 0
-#define RIC_FLUSH_PWC 1
-#define RIC_FLUSH_ALL 2
-
/*
* tlbiel instruction for radix, set invalidation
* i.e., r=1 and is=01 or is=10 or is=11
@@ -1286,4 +1282,24 @@ extern void radix_kvm_prefetch_workaround(struct mm_struct *mm)
}
}
EXPORT_SYMBOL_GPL(radix_kvm_prefetch_workaround);
+
+void do_h_rpt_invalidate(unsigned long pid, unsigned long type,
+ unsigned long page_size, unsigned long psize,
+ unsigned long start, unsigned long end)
+{
+ if ((type & H_RPTI_TYPE_ALL) == H_RPTI_TYPE_ALL) {
+ _tlbie_pid(pid, RIC_FLUSH_ALL);
+ return;
+ }
+
+ if (type & H_RPTI_TYPE_PWC)
+ _tlbie_pid(pid, RIC_FLUSH_PWC);
+
+ if (!start && end == -1) /* PID */
+ _tlbie_pid(pid, RIC_FLUSH_TLB);
+ else /* EA */
+ _tlbie_va_range(start, end, pid, page_size, psize, false);
+}
+EXPORT_SYMBOL_GPL(do_h_rpt_invalidate);
+
#endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 886802b8ffba..ae9854830dcc 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1056,6 +1056,7 @@ struct kvm_ppc_resize_hpt {
#define KVM_CAP_ENFORCE_PV_FEATURE_CPUID 190
#define KVM_CAP_SYS_HYPERV_CPUID 191
#define KVM_CAP_DIRTY_LOG_RING 192
+#define KVM_CAP_PPC_RPT_INVALIDATE 193
#ifdef KVM_CAP_IRQ_ROUTING
--
2.26.2
* [PATCH v3 2/2] KVM: PPC: Book3S HV: Use H_RPT_INVALIDATE in nested KVM
From: Bharata B Rao @ 2021-01-05 9:05 UTC (permalink / raw)
To: kvm-ppc, linuxppc-dev
Cc: farosas, aneesh.kumar, npiggin, Bharata B Rao, david
In the nested KVM case, replace H_TLB_INVALIDATE with the new hcall
H_RPT_INVALIDATE if available. The availability of this hcall
is determined from the presence of the "hcall-rpt-invalidate" string
in the "ibm,hypertas-functions" DT property.
Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
---
arch/powerpc/kvm/book3s_64_mmu_radix.c | 27 +++++++++++++++++++++-----
arch/powerpc/kvm/book3s_hv_nested.c | 12 ++++++++++--
2 files changed, 32 insertions(+), 7 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index bb35490400e9..7ea5459022cb 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -21,6 +21,7 @@
#include <asm/pte-walk.h>
#include <asm/ultravisor.h>
#include <asm/kvm_book3s_uvmem.h>
+#include <asm/plpar_wrappers.h>
/*
* Supported radix tree geometry.
@@ -318,9 +319,19 @@ void kvmppc_radix_tlbie_page(struct kvm *kvm, unsigned long addr,
}
psi = shift_to_mmu_psize(pshift);
- rb = addr | (mmu_get_ap(psi) << PPC_BITLSHIFT(58));
- rc = plpar_hcall_norets(H_TLB_INVALIDATE, H_TLBIE_P1_ENC(0, 0, 1),
- lpid, rb);
+
+ if (!firmware_has_feature(FW_FEATURE_RPT_INVALIDATE)) {
+ rb = addr | (mmu_get_ap(psi) << PPC_BITLSHIFT(58));
+ rc = plpar_hcall_norets(H_TLB_INVALIDATE, H_TLBIE_P1_ENC(0, 0, 1),
+ lpid, rb);
+ } else {
+ rc = pseries_rpt_invalidate(lpid, H_RPTI_TARGET_CMMU,
+ H_RPTI_TYPE_NESTED |
+ H_RPTI_TYPE_TLB,
+ psize_to_rpti_pgsize(psi),
+ addr, addr + psize);
+ }
+
if (rc)
pr_err("KVM: TLB page invalidation hcall failed, rc=%ld\n", rc);
}
@@ -334,8 +345,14 @@ static void kvmppc_radix_flush_pwc(struct kvm *kvm, unsigned int lpid)
return;
}
- rc = plpar_hcall_norets(H_TLB_INVALIDATE, H_TLBIE_P1_ENC(1, 0, 1),
- lpid, TLBIEL_INVAL_SET_LPID);
+ if (!firmware_has_feature(FW_FEATURE_RPT_INVALIDATE))
+ rc = plpar_hcall_norets(H_TLB_INVALIDATE, H_TLBIE_P1_ENC(1, 0, 1),
+ lpid, TLBIEL_INVAL_SET_LPID);
+ else
+ rc = pseries_rpt_invalidate(lpid, H_RPTI_TARGET_CMMU,
+ H_RPTI_TYPE_NESTED |
+ H_RPTI_TYPE_PWC, H_RPTI_PAGE_ALL,
+ 0, -1UL);
if (rc)
pr_err("KVM: TLB PWC invalidation hcall failed, rc=%ld\n", rc);
}
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 40ed4eb80adb..0ebddb615684 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -19,6 +19,7 @@
#include <asm/pgalloc.h>
#include <asm/pte-walk.h>
#include <asm/reg.h>
+#include <asm/plpar_wrappers.h>
static struct patb_entry *pseries_partition_tb;
@@ -402,8 +403,15 @@ static void kvmhv_flush_lpid(unsigned int lpid)
return;
}
- rc = plpar_hcall_norets(H_TLB_INVALIDATE, H_TLBIE_P1_ENC(2, 0, 1),
- lpid, TLBIEL_INVAL_SET_LPID);
+ if (!firmware_has_feature(FW_FEATURE_RPT_INVALIDATE))
+ rc = plpar_hcall_norets(H_TLB_INVALIDATE, H_TLBIE_P1_ENC(2, 0, 1),
+ lpid, TLBIEL_INVAL_SET_LPID);
+ else
+ rc = pseries_rpt_invalidate(lpid, H_RPTI_TARGET_CMMU,
+ H_RPTI_TYPE_NESTED |
+ H_RPTI_TYPE_TLB | H_RPTI_TYPE_PWC |
+ H_RPTI_TYPE_PAT,
+ H_RPTI_PAGE_ALL, 0, -1UL);
if (rc)
pr_err("KVM: TLB LPID invalidation hcall failed, rc=%ld\n", rc);
}
--
2.26.2
* Re: [PATCH v3 1/2] KVM: PPC: Book3S HV: Add support for H_RPT_INVALIDATE
From: Bharata B Rao @ 2021-01-07 4:08 UTC (permalink / raw)
To: Fabiano Rosas; +Cc: aneesh.kumar, npiggin, kvm-ppc, linuxppc-dev, david
On Wed, Jan 06, 2021 at 05:27:27PM -0300, Fabiano Rosas wrote:
> Bharata B Rao <bharata@linux.ibm.com> writes:
> > +
> > +long kvmhv_h_rpti_nested(struct kvm_vcpu *vcpu, unsigned long lpid,
> > + unsigned long type, unsigned long pg_sizes,
> > + unsigned long start, unsigned long end)
> > +{
> > + struct kvm_nested_guest *gp;
> > + long ret;
> > + unsigned long psize, ap;
> > +
> > + /*
> > + * If L2 lpid isn't valid, we need to return H_PARAMETER.
> > + *
> > + * However, nested KVM issues a L2 lpid flush call when creating
> > + * partition table entries for L2. This happens even before the
> > + * corresponding shadow lpid is created in HV which happens in
> > + * H_ENTER_NESTED call. Since we can't differentiate this case from
> > + * the invalid case, we ignore such flush requests and return success.
> > + */
>
> So for a nested lpid the H_TLB_INVALIDATE in:
>
> kvmppc_core_init_vm_hv -> kvmppc_setup_partition_table ->
> kvmhv_set_ptbl_entry -> kvmhv_flush_lpid
>
> has always been a noop? It seems that we could just skip
> kvmhv_flush_lpid in L1 during init_vm then.
Maybe, but I suppose that flush is required and could be fixed
eventually.
Regards,
Bharata.