* [PATCH 0/5] KVM RISC-V Svpbmt support
From: Anup Patel @ 2022-07-07 14:52 UTC (permalink / raw)
  To: Paolo Bonzini, Atish Patra
  Cc: Palmer Dabbelt, Paul Walmsley, Alistair Francis, Anup Patel, kvm,
	kvm-riscv, linux-riscv, linux-kernel, Anup Patel

This series extends KVM RISC-V to detect and use the Svpbmt extension for
both G-stage (hypervisor) and VS-stage (guest) page tables.

The corresponding KVMTOOL patches used for testing this series
can be found in riscv_svpbmt_sstc_v1 branch at:
https://github.com/avpatel/kvmtool.git

These patches can also be found in riscv_kvm_svpbmt_v1 branch at:
https://github.com/avpatel/linux.git

Alexandre Ghiti (1):
  riscv: Fix missing PAGE_PFN_MASK

Anup Patel (4):
  KVM: Add gfp_custom flag in struct kvm_mmu_memory_cache
  RISC-V: KVM: Add G-stage ioremap() and iounmap() functions
  RISC-V: KVM: Use PAGE_KERNEL_IO in kvm_riscv_gstage_ioremap()
  RISC-V: KVM: Add support for Svpbmt inside Guest/VM

 arch/riscv/include/asm/csr.h        | 16 ++++++++++++++++
 arch/riscv/include/asm/kvm_host.h   |  5 +++++
 arch/riscv/include/asm/pgtable-64.h | 12 ++++++------
 arch/riscv/include/asm/pgtable.h    |  6 +++---
 arch/riscv/include/uapi/asm/kvm.h   |  1 +
 arch/riscv/kvm/mmu.c                | 22 ++++++++++++++++------
 arch/riscv/kvm/vcpu.c               | 16 ++++++++++++++++
 include/linux/kvm_types.h           |  1 +
 virt/kvm/kvm_main.c                 |  4 +++-
 9 files changed, 67 insertions(+), 16 deletions(-)

-- 
2.34.1


* [PATCH 1/5] KVM: Add gfp_custom flag in struct kvm_mmu_memory_cache
From: Anup Patel @ 2022-07-07 14:52 UTC (permalink / raw)
  To: Paolo Bonzini, Atish Patra
  Cc: Palmer Dabbelt, Paul Walmsley, Alistair Francis, Anup Patel, kvm,
	kvm-riscv, linux-riscv, linux-kernel, Anup Patel

The kvm_mmu_topup_memory_cache() function always uses GFP_KERNEL_ACCOUNT
for memory allocation, which prevents its use in atomic context. To
address this limitation, we add a gfp_custom flag to struct
kvm_mmu_memory_cache. When gfp_custom is set to a non-zero GFP_xyz
value, kvm_mmu_topup_memory_cache() will use it instead of
GFP_KERNEL_ACCOUNT.
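
For illustration, a caller that must top up the cache from a context
that cannot sleep could look roughly like the sketch below. This is a
hypothetical usage example, not part of this patch (patch 3 of this
series does something very similar for its G-stage ioremap path):

  struct kvm_mmu_memory_cache pcache;
  int min_pages = 4;	/* illustrative; use the real page-table depth */

  memset(&pcache, 0, sizeof(pcache));
  pcache.gfp_zero = __GFP_ZERO;
  /* A non-zero gfp_custom overrides the default GFP_KERNEL_ACCOUNT. */
  pcache.gfp_custom = GFP_ATOMIC | __GFP_ACCOUNT;

  if (kvm_mmu_topup_memory_cache(&pcache, min_pages))
	  return -ENOMEM;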

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
 include/linux/kvm_types.h | 1 +
 virt/kvm/kvm_main.c       | 4 +++-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
index ac1ebb37a0ff..1dcfba68076a 100644
--- a/include/linux/kvm_types.h
+++ b/include/linux/kvm_types.h
@@ -87,6 +87,7 @@ struct gfn_to_pfn_cache {
 struct kvm_mmu_memory_cache {
 	int nobjs;
 	gfp_t gfp_zero;
+	gfp_t gfp_custom;
 	struct kmem_cache *kmem_cache;
 	void *objects[KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE];
 };
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a49df8988cd6..e3a6f7647474 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -386,7 +386,9 @@ int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min)
 	if (mc->nobjs >= min)
 		return 0;
 	while (mc->nobjs < ARRAY_SIZE(mc->objects)) {
-		obj = mmu_memory_cache_alloc_obj(mc, GFP_KERNEL_ACCOUNT);
+		obj = mmu_memory_cache_alloc_obj(mc, (mc->gfp_custom) ?
+						 mc->gfp_custom :
+						 GFP_KERNEL_ACCOUNT);
 		if (!obj)
 			return mc->nobjs >= min ? 0 : -ENOMEM;
 		mc->objects[mc->nobjs++] = obj;
-- 
2.34.1


* [PATCH 2/5] riscv: Fix missing PAGE_PFN_MASK
From: Anup Patel @ 2022-07-07 14:52 UTC (permalink / raw)
  To: Paolo Bonzini, Atish Patra
  Cc: Palmer Dabbelt, Paul Walmsley, Alistair Francis, Anup Patel, kvm,
	kvm-riscv, linux-riscv, linux-kernel, Alexandre Ghiti

From: Alexandre Ghiti <alexandre.ghiti@canonical.com>

A number of functions that extract the PFN from a page table entry end
up keeping the Svpbmt upper bits because they are missing the newly
introduced PAGE_PFN_MASK, which leads to wrong address conversions and
then crashes: fix this by applying the mask.
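
To make the failure mode concrete, here is an illustrative sketch (not
kernel code; the macro names and values below are simplified and assume
the Sv39/Sv48 PTE layout with the PFN in bits 53:10 and the Svpbmt PBMT
field in bits 62:61 -- the kernel's own names and definitions live in
the riscv pgtable headers):

  #define PFN_SHIFT	10
  #define PFN_MASK	(((1ULL << 44) - 1) << PFN_SHIFT)	/* bits 53:10 */
  #define PBMT_IO	(2ULL << 61)			/* Svpbmt "IO" encoding */

  static unsigned long long pte_to_pfn(unsigned long long pteval)
  {
	  /* Correct: mask off the attribute bits before shifting. */
	  return (pteval & PFN_MASK) >> PFN_SHIFT;
  }

  /*
   * A plain (pteval >> PFN_SHIFT) keeps the PBMT bits in the result, so
   * pfn_to_virt()/pfn_to_page() are handed a bogus PFN and crash.
   */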

Fixes: 100631b48ded ("riscv: Fix accessing pfn bits in PTEs for non-32bit variants")
Signed-off-by: Alexandre Ghiti <alexandre.ghiti@canonical.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
---
 arch/riscv/include/asm/pgtable-64.h | 12 ++++++------
 arch/riscv/include/asm/pgtable.h    |  6 +++---
 arch/riscv/kvm/mmu.c                |  2 +-
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
index 5c2aba5efbd0..dc42375c2357 100644
--- a/arch/riscv/include/asm/pgtable-64.h
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -175,7 +175,7 @@ static inline pud_t pfn_pud(unsigned long pfn, pgprot_t prot)
 
 static inline unsigned long _pud_pfn(pud_t pud)
 {
-	return pud_val(pud) >> _PAGE_PFN_SHIFT;
+	return __page_val_to_pfn(pud_val(pud));
 }
 
 static inline pmd_t *pud_pgtable(pud_t pud)
@@ -278,13 +278,13 @@ static inline p4d_t pfn_p4d(unsigned long pfn, pgprot_t prot)
 
 static inline unsigned long _p4d_pfn(p4d_t p4d)
 {
-	return p4d_val(p4d) >> _PAGE_PFN_SHIFT;
+	return __page_val_to_pfn(p4d_val(p4d));
 }
 
 static inline pud_t *p4d_pgtable(p4d_t p4d)
 {
 	if (pgtable_l4_enabled)
-		return (pud_t *)pfn_to_virt(p4d_val(p4d) >> _PAGE_PFN_SHIFT);
+		return (pud_t *)pfn_to_virt(__page_val_to_pfn(p4d_val(p4d)));
 
 	return (pud_t *)pud_pgtable((pud_t) { p4d_val(p4d) });
 }
@@ -292,7 +292,7 @@ static inline pud_t *p4d_pgtable(p4d_t p4d)
 
 static inline struct page *p4d_page(p4d_t p4d)
 {
-	return pfn_to_page(p4d_val(p4d) >> _PAGE_PFN_SHIFT);
+	return pfn_to_page(__page_val_to_pfn(p4d_val(p4d)));
 }
 
 #define pud_index(addr) (((addr) >> PUD_SHIFT) & (PTRS_PER_PUD - 1))
@@ -347,7 +347,7 @@ static inline void pgd_clear(pgd_t *pgd)
 static inline p4d_t *pgd_pgtable(pgd_t pgd)
 {
 	if (pgtable_l5_enabled)
-		return (p4d_t *)pfn_to_virt(pgd_val(pgd) >> _PAGE_PFN_SHIFT);
+		return (p4d_t *)pfn_to_virt(__page_val_to_pfn(pgd_val(pgd)));
 
 	return (p4d_t *)p4d_pgtable((p4d_t) { pgd_val(pgd) });
 }
@@ -355,7 +355,7 @@ static inline p4d_t *pgd_pgtable(pgd_t pgd)
 
 static inline struct page *pgd_page(pgd_t pgd)
 {
-	return pfn_to_page(pgd_val(pgd) >> _PAGE_PFN_SHIFT);
+	return pfn_to_page(__page_val_to_pfn(pgd_val(pgd)));
 }
 #define pgd_page(pgd)	pgd_page(pgd)
 
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 1d1be9d9419c..5dbd6610729b 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -261,7 +261,7 @@ static inline pgd_t pfn_pgd(unsigned long pfn, pgprot_t prot)
 
 static inline unsigned long _pgd_pfn(pgd_t pgd)
 {
-	return pgd_val(pgd) >> _PAGE_PFN_SHIFT;
+	return __page_val_to_pfn(pgd_val(pgd));
 }
 
 static inline struct page *pmd_page(pmd_t pmd)
@@ -590,14 +590,14 @@ static inline pmd_t pmd_mkinvalid(pmd_t pmd)
 	return __pmd(pmd_val(pmd) & ~(_PAGE_PRESENT|_PAGE_PROT_NONE));
 }
 
-#define __pmd_to_phys(pmd)  (pmd_val(pmd) >> _PAGE_PFN_SHIFT << PAGE_SHIFT)
+#define __pmd_to_phys(pmd)  (__page_val_to_pfn(pmd_val(pmd)) << PAGE_SHIFT)
 
 static inline unsigned long pmd_pfn(pmd_t pmd)
 {
 	return ((__pmd_to_phys(pmd) & PMD_MASK) >> PAGE_SHIFT);
 }
 
-#define __pud_to_phys(pud)  (pud_val(pud) >> _PAGE_PFN_SHIFT << PAGE_SHIFT)
+#define __pud_to_phys(pud)  (__page_val_to_pfn(pud_val(pud)) << PAGE_SHIFT)
 
 static inline unsigned long pud_pfn(pud_t pud)
 {
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 2965284a490d..b75d4e200064 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -54,7 +54,7 @@ static inline unsigned long gstage_pte_index(gpa_t addr, u32 level)
 
 static inline unsigned long gstage_pte_page_vaddr(pte_t pte)
 {
-	return (unsigned long)pfn_to_virt(pte_val(pte) >> _PAGE_PFN_SHIFT);
+	return (unsigned long)pfn_to_virt(__page_val_to_pfn(pte_val(pte)));
 }
 
 static int gstage_page_size_to_level(unsigned long page_size, u32 *out_level)
-- 
2.34.1


* [PATCH 3/5] RISC-V: KVM: Add G-stage ioremap() and iounmap() functions
From: Anup Patel @ 2022-07-07 14:52 UTC (permalink / raw)
  To: Paolo Bonzini, Atish Patra
  Cc: Palmer Dabbelt, Paul Walmsley, Alistair Francis, Anup Patel, kvm,
	kvm-riscv, linux-riscv, linux-kernel, Anup Patel

The in-kernel AIA IMSIC support requires on-demand mapping / unmapping
of Guest IMSIC addresses to Host IMSIC guest files. To help achieve
this, we add the kvm_riscv_gstage_ioremap() and kvm_riscv_gstage_iounmap()
functions. These new functions for updating G-stage page table mappings
will be called in atomic context, so we add a special "in_atomic"
parameter for this purpose.
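
As a rough usage sketch (hypothetical caller and variable names, not
part of this series; the future in-kernel AIA IMSIC code is expected to
do something along these lines):

  /* Map one guest IMSIC page onto a host IMSIC guest file, no sleeping. */
  ret = kvm_riscv_gstage_ioremap(kvm, guest_imsic_gpa, host_vsfile_hpa,
				 PAGE_SIZE, true /* writable */,
				 true /* in_atomic */);
  if (ret)
	  return ret;

  /* ... and tear the mapping down when the guest file is reclaimed. */
  kvm_riscv_gstage_iounmap(kvm, guest_imsic_gpa, PAGE_SIZE);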

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
 arch/riscv/include/asm/kvm_host.h |  5 +++++
 arch/riscv/kvm/mmu.c              | 18 ++++++++++++++----
 2 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 59a0cf2ca7b9..60c517e4d576 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -284,6 +284,11 @@ void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
 void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
 			       unsigned long hbase, unsigned long hmask);
 
+int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
+			     phys_addr_t hpa, unsigned long size,
+			     bool writable, bool in_atomic);
+void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa,
+			      unsigned long size);
 int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 			 struct kvm_memory_slot *memslot,
 			 gpa_t gpa, unsigned long hva, bool is_write);
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index b75d4e200064..f7862ca4c4c6 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -343,8 +343,9 @@ static void gstage_wp_memory_region(struct kvm *kvm, int slot)
 	kvm_flush_remote_tlbs(kvm);
 }
 
-static int gstage_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa,
-			  unsigned long size, bool writable)
+int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
+			     phys_addr_t hpa, unsigned long size,
+			     bool writable, bool in_atomic)
 {
 	pte_t pte;
 	int ret = 0;
@@ -353,6 +354,7 @@ static int gstage_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa,
 	struct kvm_mmu_memory_cache pcache;
 
 	memset(&pcache, 0, sizeof(pcache));
+	pcache.gfp_custom = (in_atomic) ? GFP_ATOMIC | __GFP_ACCOUNT : 0;
 	pcache.gfp_zero = __GFP_ZERO;
 
 	end = (gpa + size + PAGE_SIZE - 1) & PAGE_MASK;
@@ -382,6 +384,13 @@ static int gstage_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa,
 	return ret;
 }
 
+void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa, unsigned long size)
+{
+	spin_lock(&kvm->mmu_lock);
+	gstage_unmap_range(kvm, gpa, size, false);
+	spin_unlock(&kvm->mmu_lock);
+}
+
 void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 					     struct kvm_memory_slot *slot,
 					     gfn_t gfn_offset,
@@ -517,8 +526,9 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				goto out;
 			}
 
-			ret = gstage_ioremap(kvm, gpa, pa,
-					     vm_end - vm_start, writable);
+			ret = kvm_riscv_gstage_ioremap(kvm, gpa, pa,
+						       vm_end - vm_start,
+						       writable, false);
 			if (ret)
 				break;
 		}
-- 
2.34.1


* [PATCH 4/5] RISC-V: KVM: Use PAGE_KERNEL_IO in kvm_riscv_gstage_ioremap()
From: Anup Patel @ 2022-07-07 14:52 UTC (permalink / raw)
  To: Paolo Bonzini, Atish Patra
  Cc: Palmer Dabbelt, Paul Walmsley, Alistair Francis, Anup Patel, kvm,
	kvm-riscv, linux-riscv, linux-kernel, Anup Patel

When the host has the Svpbmt extension, we should use page-based memory
type 2 (i.e. IO) for IO mappings in the G-stage page table.

To achieve this, we replace the use of PAGE_KERNEL with PAGE_KERNEL_IO
in kvm_riscv_gstage_ioremap().
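
For reference, the Svpbmt PBMT field occupies PTE bits 62:61 with the
following encodings (taken from the Svpbmt specification; this change
relies on PAGE_KERNEL_IO selecting encoding 2 on Svpbmt hosts):

  /*
   *   0 = PMA - use the platform's default physical memory attributes
   *   1 = NC  - non-cacheable, idempotent main memory
   *   2 = IO  - non-cacheable, non-idempotent, strongly-ordered I/O
   *   3 = reserved
   */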

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
 arch/riscv/kvm/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index f7862ca4c4c6..bc545aef6034 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -361,7 +361,7 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
 	pfn = __phys_to_pfn(hpa);
 
 	for (addr = gpa; addr < end; addr += PAGE_SIZE) {
-		pte = pfn_pte(pfn, PAGE_KERNEL);
+		pte = pfn_pte(pfn, PAGE_KERNEL_IO);
 
 		if (!writable)
 			pte = pte_wrprotect(pte);
-- 
2.34.1


* [PATCH 5/5] RISC-V: KVM: Add support for Svpbmt inside Guest/VM
From: Anup Patel @ 2022-07-07 14:52 UTC (permalink / raw)
  To: Paolo Bonzini, Atish Patra
  Cc: Palmer Dabbelt, Paul Walmsley, Alistair Francis, Anup Patel, kvm,
	kvm-riscv, linux-riscv, linux-kernel, Anup Patel

The Guest/VM can use Svpbmt in VS-stage page tables when the Hypervisor
allows it via the henvcfg.PBMTE bit.

We add Svpbmt support for the KVM Guest/VM, which can be enabled/disabled
by the KVM user space (QEMU/KVMTOOL) using the ISA extension ONE_REG
interface.
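
A hedged sketch of how user space could flip this extension on is shown
below (illustrative only; the exact register-id encoding should be
checked against the kvm uapi headers, and "vcpu_fd" is an assumed vCPU
file descriptor):

  unsigned long isa_ext_val = 1;	/* 1 = enable, 0 = disable */
  struct kvm_one_reg reg = {
	  .id   = KVM_REG_RISCV | KVM_REG_SIZE_ULONG |
		  KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_SVPBMT,
	  .addr = (unsigned long)&isa_ext_val,
  };

  if (ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg) < 0)
	  /* Older kernels, or hosts without Svpbmt, will reject this. */
	  perror("KVM_SET_ONE_REG(SVPBMT)");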

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
 arch/riscv/include/asm/csr.h      | 16 ++++++++++++++++
 arch/riscv/include/uapi/asm/kvm.h |  1 +
 arch/riscv/kvm/vcpu.c             | 16 ++++++++++++++++
 3 files changed, 33 insertions(+)

diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
index 6d85655e7edf..17516afc389a 100644
--- a/arch/riscv/include/asm/csr.h
+++ b/arch/riscv/include/asm/csr.h
@@ -156,6 +156,18 @@
 				 (_AC(1, UL) << IRQ_S_TIMER) | \
 				 (_AC(1, UL) << IRQ_S_EXT))
 
+/* xENVCFG flags */
+#define ENVCFG_STCE			(_AC(1, ULL) << 63)
+#define ENVCFG_PBMTE			(_AC(1, ULL) << 62)
+#define ENVCFG_CBZE			(_AC(1, UL) << 7)
+#define ENVCFG_CBCFE			(_AC(1, UL) << 6)
+#define ENVCFG_CBIE_SHIFT		4
+#define ENVCFG_CBIE			(_AC(0x3, UL) << ENVCFG_CBIE_SHIFT)
+#define ENVCFG_CBIE_ILL			_AC(0x0, UL)
+#define ENVCFG_CBIE_FLUSH		_AC(0x1, UL)
+#define ENVCFG_CBIE_INV			_AC(0x3, UL)
+#define ENVCFG_FIOM			_AC(0x1, UL)
+
 /* symbolic CSR names: */
 #define CSR_CYCLE		0xc00
 #define CSR_TIME		0xc01
@@ -252,7 +264,9 @@
 #define CSR_HTIMEDELTA		0x605
 #define CSR_HCOUNTEREN		0x606
 #define CSR_HGEIE		0x607
+#define CSR_HENVCFG		0x60a
 #define CSR_HTIMEDELTAH		0x615
+#define CSR_HENVCFGH		0x61a
 #define CSR_HTVAL		0x643
 #define CSR_HIP			0x644
 #define CSR_HVIP		0x645
@@ -264,6 +278,8 @@
 #define CSR_MISA		0x301
 #define CSR_MIE			0x304
 #define CSR_MTVEC		0x305
+#define CSR_MENVCFG		0x30a
+#define CSR_MENVCFGH		0x31a
 #define CSR_MSCRATCH		0x340
 #define CSR_MEPC		0x341
 #define CSR_MCAUSE		0x342
diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
index 6119368ba6d5..24b2a6e27698 100644
--- a/arch/riscv/include/uapi/asm/kvm.h
+++ b/arch/riscv/include/uapi/asm/kvm.h
@@ -96,6 +96,7 @@ enum KVM_RISCV_ISA_EXT_ID {
 	KVM_RISCV_ISA_EXT_H,
 	KVM_RISCV_ISA_EXT_I,
 	KVM_RISCV_ISA_EXT_M,
+	KVM_RISCV_ISA_EXT_SVPBMT,
 	KVM_RISCV_ISA_EXT_MAX,
 };
 
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 6dd9cf729614..b7a433c54d0f 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -51,6 +51,7 @@ static const unsigned long kvm_isa_ext_arr[] = {
 	RISCV_ISA_EXT_h,
 	RISCV_ISA_EXT_i,
 	RISCV_ISA_EXT_m,
+	RISCV_ISA_EXT_SVPBMT,
 };
 
 static unsigned long kvm_riscv_vcpu_base2isa_ext(unsigned long base_ext)
@@ -777,6 +778,19 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
 	return -EINVAL;
 }
 
+static void kvm_riscv_vcpu_update_config(const unsigned long *isa)
+{
+	u64 henvcfg = 0;
+
+	if (__riscv_isa_extension_available(isa, RISCV_ISA_EXT_SVPBMT))
+		henvcfg |= ENVCFG_PBMTE;
+
+	csr_write(CSR_HENVCFG, henvcfg);
+#ifdef CONFIG_32BIT
+	csr_write(CSR_HENVCFGH, henvcfg >> 32);
+#endif
+}
+
 void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
 	struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
@@ -791,6 +805,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	csr_write(CSR_HVIP, csr->hvip);
 	csr_write(CSR_VSATP, csr->vsatp);
 
+	kvm_riscv_vcpu_update_config(vcpu->arch.isa);
+
 	kvm_riscv_gstage_update_hgatp(vcpu);
 
 	kvm_riscv_vcpu_timer_restore(vcpu);
-- 
2.34.1


* Re: [PATCH 2/5] riscv: Fix missing PAGE_PFN_MASK
From: Anup Patel @ 2022-07-11  4:17 UTC (permalink / raw)
  To: Paolo Bonzini, Atish Patra
  Cc: Palmer Dabbelt, Paul Walmsley, Alistair Francis, Anup Patel,
	KVM General,
	open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv),
	linux-riscv, linux-kernel@vger.kernel.org List, Alexandre Ghiti

On Thu, Jul 7, 2022 at 8:23 PM Anup Patel <apatel@ventanamicro.com> wrote:
>
> From: Alexandre Ghiti <alexandre.ghiti@canonical.com>
>
> There are a bunch of functions that use the PFN from a page table entry
> that end up with the svpbmt upper-bits because they are missing the newly
> introduced PAGE_PFN_MASK which leads to wrong addresses conversions and
> then crash: fix this by adding this mask.
>
> Fixes: 100631b48ded ("riscv: Fix accessing pfn bits in PTEs for non-32bit variants")
> Signed-off-by: Alexandre Ghiti <alexandre.ghiti@canonical.com>
> Reviewed-by: Anup Patel <anup@brainfault.org>

I have queued this patch for the 5.19-rcX fixes, which I will send out this week.

Thanks,
Anup

> ---
>  arch/riscv/include/asm/pgtable-64.h | 12 ++++++------
>  arch/riscv/include/asm/pgtable.h    |  6 +++---
>  arch/riscv/kvm/mmu.c                |  2 +-
>  3 files changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
> index 5c2aba5efbd0..dc42375c2357 100644
> --- a/arch/riscv/include/asm/pgtable-64.h
> +++ b/arch/riscv/include/asm/pgtable-64.h
> @@ -175,7 +175,7 @@ static inline pud_t pfn_pud(unsigned long pfn, pgprot_t prot)
>
>  static inline unsigned long _pud_pfn(pud_t pud)
>  {
> -       return pud_val(pud) >> _PAGE_PFN_SHIFT;
> +       return __page_val_to_pfn(pud_val(pud));
>  }
>
>  static inline pmd_t *pud_pgtable(pud_t pud)
> @@ -278,13 +278,13 @@ static inline p4d_t pfn_p4d(unsigned long pfn, pgprot_t prot)
>
>  static inline unsigned long _p4d_pfn(p4d_t p4d)
>  {
> -       return p4d_val(p4d) >> _PAGE_PFN_SHIFT;
> +       return __page_val_to_pfn(p4d_val(p4d));
>  }
>
>  static inline pud_t *p4d_pgtable(p4d_t p4d)
>  {
>         if (pgtable_l4_enabled)
> -               return (pud_t *)pfn_to_virt(p4d_val(p4d) >> _PAGE_PFN_SHIFT);
> +               return (pud_t *)pfn_to_virt(__page_val_to_pfn(p4d_val(p4d)));
>
>         return (pud_t *)pud_pgtable((pud_t) { p4d_val(p4d) });
>  }
> @@ -292,7 +292,7 @@ static inline pud_t *p4d_pgtable(p4d_t p4d)
>
>  static inline struct page *p4d_page(p4d_t p4d)
>  {
> -       return pfn_to_page(p4d_val(p4d) >> _PAGE_PFN_SHIFT);
> +       return pfn_to_page(__page_val_to_pfn(p4d_val(p4d)));
>  }
>
>  #define pud_index(addr) (((addr) >> PUD_SHIFT) & (PTRS_PER_PUD - 1))
> @@ -347,7 +347,7 @@ static inline void pgd_clear(pgd_t *pgd)
>  static inline p4d_t *pgd_pgtable(pgd_t pgd)
>  {
>         if (pgtable_l5_enabled)
> -               return (p4d_t *)pfn_to_virt(pgd_val(pgd) >> _PAGE_PFN_SHIFT);
> +               return (p4d_t *)pfn_to_virt(__page_val_to_pfn(pgd_val(pgd)));
>
>         return (p4d_t *)p4d_pgtable((p4d_t) { pgd_val(pgd) });
>  }
> @@ -355,7 +355,7 @@ static inline p4d_t *pgd_pgtable(pgd_t pgd)
>
>  static inline struct page *pgd_page(pgd_t pgd)
>  {
> -       return pfn_to_page(pgd_val(pgd) >> _PAGE_PFN_SHIFT);
> +       return pfn_to_page(__page_val_to_pfn(pgd_val(pgd)));
>  }
>  #define pgd_page(pgd)  pgd_page(pgd)
>
> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> index 1d1be9d9419c..5dbd6610729b 100644
> --- a/arch/riscv/include/asm/pgtable.h
> +++ b/arch/riscv/include/asm/pgtable.h
> @@ -261,7 +261,7 @@ static inline pgd_t pfn_pgd(unsigned long pfn, pgprot_t prot)
>
>  static inline unsigned long _pgd_pfn(pgd_t pgd)
>  {
> -       return pgd_val(pgd) >> _PAGE_PFN_SHIFT;
> +       return __page_val_to_pfn(pgd_val(pgd));
>  }
>
>  static inline struct page *pmd_page(pmd_t pmd)
> @@ -590,14 +590,14 @@ static inline pmd_t pmd_mkinvalid(pmd_t pmd)
>         return __pmd(pmd_val(pmd) & ~(_PAGE_PRESENT|_PAGE_PROT_NONE));
>  }
>
> -#define __pmd_to_phys(pmd)  (pmd_val(pmd) >> _PAGE_PFN_SHIFT << PAGE_SHIFT)
> +#define __pmd_to_phys(pmd)  (__page_val_to_pfn(pmd_val(pmd)) << PAGE_SHIFT)
>
>  static inline unsigned long pmd_pfn(pmd_t pmd)
>  {
>         return ((__pmd_to_phys(pmd) & PMD_MASK) >> PAGE_SHIFT);
>  }
>
> -#define __pud_to_phys(pud)  (pud_val(pud) >> _PAGE_PFN_SHIFT << PAGE_SHIFT)
> +#define __pud_to_phys(pud)  (__page_val_to_pfn(pud_val(pud)) << PAGE_SHIFT)
>
>  static inline unsigned long pud_pfn(pud_t pud)
>  {
> diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
> index 2965284a490d..b75d4e200064 100644
> --- a/arch/riscv/kvm/mmu.c
> +++ b/arch/riscv/kvm/mmu.c
> @@ -54,7 +54,7 @@ static inline unsigned long gstage_pte_index(gpa_t addr, u32 level)
>
>  static inline unsigned long gstage_pte_page_vaddr(pte_t pte)
>  {
> -       return (unsigned long)pfn_to_virt(pte_val(pte) >> _PAGE_PFN_SHIFT);
> +       return (unsigned long)pfn_to_virt(__page_val_to_pfn(pte_val(pte)));
>  }
>
>  static int gstage_page_size_to_level(unsigned long page_size, u32 *out_level)
> --
> 2.34.1
>

* Re: [PATCH 4/5] RISC-V: KVM: Use PAGE_KERNEL_IO in kvm_riscv_gstage_ioremap()
From: Atish Patra @ 2022-07-13  1:23 UTC (permalink / raw)
  To: Anup Patel
  Cc: Paolo Bonzini, Palmer Dabbelt, Paul Walmsley, Alistair Francis,
	Anup Patel, KVM General,
	open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv),
	linux-riscv, linux-kernel@vger.kernel.org List

On Thu, Jul 7, 2022 at 7:53 AM Anup Patel <apatel@ventanamicro.com> wrote:
>
> When the host has Svpbmt extension, we should use page based memory
> type 2 (i.e. IO) for IO mappings in the G-stage page table.
>
> To achieve this, we replace use of PAGE_KERNEL with PAGE_KERNEL_IO
> in the kvm_riscv_gstage_ioremap().
>
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> ---
>  arch/riscv/kvm/mmu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
> index f7862ca4c4c6..bc545aef6034 100644
> --- a/arch/riscv/kvm/mmu.c
> +++ b/arch/riscv/kvm/mmu.c
> @@ -361,7 +361,7 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
>         pfn = __phys_to_pfn(hpa);
>
>         for (addr = gpa; addr < end; addr += PAGE_SIZE) {
> -               pte = pfn_pte(pfn, PAGE_KERNEL);
> +               pte = pfn_pte(pfn, PAGE_KERNEL_IO);
>
>                 if (!writable)
>                         pte = pte_wrprotect(pte);
> --
> 2.34.1
>

LGTM.

Reviewed-by: Atish Patra <atishp@rivosinc.com>

-- 
Regards,
Atish

* Re: [PATCH 5/5] RISC-V: KVM: Add support for Svpbmt inside Guest/VM
From: Atish Patra @ 2022-07-13  1:23 UTC (permalink / raw)
  To: Anup Patel
  Cc: Paolo Bonzini, Palmer Dabbelt, Paul Walmsley, Alistair Francis,
	Anup Patel, KVM General,
	open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv),
	linux-riscv, linux-kernel@vger.kernel.org List

On Thu, Jul 7, 2022 at 7:53 AM Anup Patel <apatel@ventanamicro.com> wrote:
>
> The Guest/VM can use Svpbmt in VS-stage page tables when allowed by the
> Hypervisor using the henvcfg.PBMTE bit.
>
> We add Svpbmt support for the KVM Guest/VM which can be enabled/disabled
> by the KVM user-space (QEMU/KVMTOOL) using the ISA extension ONE_REG
> interface.
>
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> ---
>  arch/riscv/include/asm/csr.h      | 16 ++++++++++++++++
>  arch/riscv/include/uapi/asm/kvm.h |  1 +
>  arch/riscv/kvm/vcpu.c             | 16 ++++++++++++++++
>  3 files changed, 33 insertions(+)
>
> diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
> index 6d85655e7edf..17516afc389a 100644
> --- a/arch/riscv/include/asm/csr.h
> +++ b/arch/riscv/include/asm/csr.h
> @@ -156,6 +156,18 @@
>                                  (_AC(1, UL) << IRQ_S_TIMER) | \
>                                  (_AC(1, UL) << IRQ_S_EXT))
>
> +/* xENVCFG flags */
> +#define ENVCFG_STCE                    (_AC(1, ULL) << 63)
> +#define ENVCFG_PBMTE                   (_AC(1, ULL) << 62)
> +#define ENVCFG_CBZE                    (_AC(1, UL) << 7)
> +#define ENVCFG_CBCFE                   (_AC(1, UL) << 6)
> +#define ENVCFG_CBIE_SHIFT              4
> +#define ENVCFG_CBIE                    (_AC(0x3, UL) << ENVCFG_CBIE_SHIFT)
> +#define ENVCFG_CBIE_ILL                        _AC(0x0, UL)
> +#define ENVCFG_CBIE_FLUSH              _AC(0x1, UL)
> +#define ENVCFG_CBIE_INV                        _AC(0x3, UL)
> +#define ENVCFG_FIOM                    _AC(0x1, UL)
> +
>  /* symbolic CSR names: */
>  #define CSR_CYCLE              0xc00
>  #define CSR_TIME               0xc01
> @@ -252,7 +264,9 @@
>  #define CSR_HTIMEDELTA         0x605
>  #define CSR_HCOUNTEREN         0x606
>  #define CSR_HGEIE              0x607
> +#define CSR_HENVCFG            0x60a
>  #define CSR_HTIMEDELTAH                0x615
> +#define CSR_HENVCFGH           0x61a
>  #define CSR_HTVAL              0x643
>  #define CSR_HIP                        0x644
>  #define CSR_HVIP               0x645
> @@ -264,6 +278,8 @@
>  #define CSR_MISA               0x301
>  #define CSR_MIE                        0x304
>  #define CSR_MTVEC              0x305
> +#define CSR_MENVCFG            0x30a
> +#define CSR_MENVCFGH           0x31a
>  #define CSR_MSCRATCH           0x340
>  #define CSR_MEPC               0x341
>  #define CSR_MCAUSE             0x342
> diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
> index 6119368ba6d5..24b2a6e27698 100644
> --- a/arch/riscv/include/uapi/asm/kvm.h
> +++ b/arch/riscv/include/uapi/asm/kvm.h
> @@ -96,6 +96,7 @@ enum KVM_RISCV_ISA_EXT_ID {
>         KVM_RISCV_ISA_EXT_H,
>         KVM_RISCV_ISA_EXT_I,
>         KVM_RISCV_ISA_EXT_M,
> +       KVM_RISCV_ISA_EXT_SVPBMT,
>         KVM_RISCV_ISA_EXT_MAX,
>  };
>
> diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
> index 6dd9cf729614..b7a433c54d0f 100644
> --- a/arch/riscv/kvm/vcpu.c
> +++ b/arch/riscv/kvm/vcpu.c
> @@ -51,6 +51,7 @@ static const unsigned long kvm_isa_ext_arr[] = {
>         RISCV_ISA_EXT_h,
>         RISCV_ISA_EXT_i,
>         RISCV_ISA_EXT_m,
> +       RISCV_ISA_EXT_SVPBMT,
>  };
>
>  static unsigned long kvm_riscv_vcpu_base2isa_ext(unsigned long base_ext)
> @@ -777,6 +778,19 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
>         return -EINVAL;
>  }
>
> +static void kvm_riscv_vcpu_update_config(const unsigned long *isa)
> +{
> +       u64 henvcfg = 0;
> +
> +       if (__riscv_isa_extension_available(isa, RISCV_ISA_EXT_SVPBMT))
> +               henvcfg |= ENVCFG_PBMTE;
> +
> +       csr_write(CSR_HENVCFG, henvcfg);
> +#ifdef CONFIG_32BIT
> +       csr_write(CSR_HENVCFGH, henvcfg >> 32);
> +#endif
> +}
> +
>  void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  {
>         struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
> @@ -791,6 +805,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>         csr_write(CSR_HVIP, csr->hvip);
>         csr_write(CSR_VSATP, csr->vsatp);
>
> +       kvm_riscv_vcpu_update_config(vcpu->arch.isa);
> +
>         kvm_riscv_gstage_update_hgatp(vcpu);
>
>         kvm_riscv_vcpu_timer_restore(vcpu);
> --
> 2.34.1
>

LGTM.


Reviewed-by: Atish Patra <atishp@rivosinc.com>


-- 
Regards,
Atish

* Re: [PATCH 3/5] RISC-V: KVM: Add G-stage ioremap() and iounmap() functions
  2022-07-07 14:52   ` Anup Patel
@ 2022-07-13  1:26     ` Atish Patra
  -1 siblings, 0 replies; 30+ messages in thread
From: Atish Patra @ 2022-07-13  1:26 UTC (permalink / raw)
  To: Anup Patel
  Cc: Paolo Bonzini, Palmer Dabbelt, Paul Walmsley, Alistair Francis,
	Anup Patel, KVM General,
	open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv),
	linux-riscv, linux-kernel@vger.kernel.org List

On Thu, Jul 7, 2022 at 7:53 AM Anup Patel <apatel@ventanamicro.com> wrote:
>
> The in-kernel AIA IMSIC support requires on-demand mapping / unmapping
> of Guest IMSIC addresses to Host IMSIC guest files. To help achieve this,
> we add the kvm_riscv_gstage_ioremap() and kvm_riscv_gstage_iounmap()
> functions. These new functions for updating G-stage page table mappings
> will be called in atomic context, so we add a special "in_atomic"
> parameter for this purpose.
>
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> ---
>  arch/riscv/include/asm/kvm_host.h |  5 +++++
>  arch/riscv/kvm/mmu.c              | 18 ++++++++++++++----
>  2 files changed, 19 insertions(+), 4 deletions(-)
>
> diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
> index 59a0cf2ca7b9..60c517e4d576 100644
> --- a/arch/riscv/include/asm/kvm_host.h
> +++ b/arch/riscv/include/asm/kvm_host.h
> @@ -284,6 +284,11 @@ void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
>  void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
>                                unsigned long hbase, unsigned long hmask);
>
> +int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
> +                            phys_addr_t hpa, unsigned long size,
> +                            bool writable, bool in_atomic);
> +void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa,
> +                             unsigned long size);
>  int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
>                          struct kvm_memory_slot *memslot,
>                          gpa_t gpa, unsigned long hva, bool is_write);
> diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
> index b75d4e200064..f7862ca4c4c6 100644
> --- a/arch/riscv/kvm/mmu.c
> +++ b/arch/riscv/kvm/mmu.c
> @@ -343,8 +343,9 @@ static void gstage_wp_memory_region(struct kvm *kvm, int slot)
>         kvm_flush_remote_tlbs(kvm);
>  }
>
> -static int gstage_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa,
> -                         unsigned long size, bool writable)
> +int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
> +                            phys_addr_t hpa, unsigned long size,
> +                            bool writable, bool in_atomic)
>  {
>         pte_t pte;
>         int ret = 0;
> @@ -353,6 +354,7 @@ static int gstage_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa,
>         struct kvm_mmu_memory_cache pcache;
>
>         memset(&pcache, 0, sizeof(pcache));
> +       pcache.gfp_custom = (in_atomic) ? GFP_ATOMIC | __GFP_ACCOUNT : 0;
>         pcache.gfp_zero = __GFP_ZERO;
>
>         end = (gpa + size + PAGE_SIZE - 1) & PAGE_MASK;
> @@ -382,6 +384,13 @@ static int gstage_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa,
>         return ret;
>  }
>
> +void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa, unsigned long size)
> +{
> +       spin_lock(&kvm->mmu_lock);
> +       gstage_unmap_range(kvm, gpa, size, false);
> +       spin_unlock(&kvm->mmu_lock);
> +}
> +
>  void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
>                                              struct kvm_memory_slot *slot,
>                                              gfn_t gfn_offset,
> @@ -517,8 +526,9 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>                                 goto out;
>                         }
>
> -                       ret = gstage_ioremap(kvm, gpa, pa,
> -                                            vm_end - vm_start, writable);
> +                       ret = kvm_riscv_gstage_ioremap(kvm, gpa, pa,
> +                                                      vm_end - vm_start,
> +                                                      writable, false);
>                         if (ret)
>                                 break;
>                 }
> --
> 2.34.1
>

Reviewed-by: Atish Patra <atishp@rivosinc.com>

-- 
Regards,
Atish

^ permalink raw reply	[flat|nested] 30+ messages in thread
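
As a rough illustration of how these new helpers are meant to be used, here
is a minimal caller sketch for the in-kernel AIA IMSIC case mentioned above.
The wrapper function, the addresses and the locking context are hypothetical;
only the kvm_riscv_gstage_ioremap()/kvm_riscv_gstage_iounmap() signatures are
taken from the patch.

/*
 * Hypothetical caller: map one guest IMSIC page onto a host interrupt
 * file while a non-sleeping lock is held, hence in_atomic = true.
 */
static int imsic_map_guest_file(struct kvm *kvm, gpa_t guest_gpa,
                                phys_addr_t host_hpa)
{
        int ret;

        /* Caller holds a spinlock, so the allocation must not sleep. */
        ret = kvm_riscv_gstage_ioremap(kvm, guest_gpa, host_hpa,
                                       PAGE_SIZE, true /* writable */,
                                       true /* in_atomic */);
        if (ret)
                return ret;

        /* ... later, when the guest file moves, undo the mapping ... */
        kvm_riscv_gstage_iounmap(kvm, guest_gpa, PAGE_SIZE);
        return 0;
}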

* Re: [PATCH 1/5] KVM: Add gfp_custom flag in struct kvm_mmu_memory_cache
  2022-07-07 14:52   ` Anup Patel
@ 2022-07-13  1:27     ` Atish Patra
  -1 siblings, 0 replies; 30+ messages in thread
From: Atish Patra @ 2022-07-13  1:27 UTC (permalink / raw)
  To: Anup Patel
  Cc: Paolo Bonzini, Palmer Dabbelt, Paul Walmsley, Alistair Francis,
	Anup Patel, KVM General,
	open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv),
	linux-riscv, linux-kernel@vger.kernel.org List

On Thu, Jul 7, 2022 at 7:53 AM Anup Patel <apatel@ventanamicro.com> wrote:
>
> The kvm_mmu_topup_memory_cache() always uses GFP_KERNEL_ACCOUNT for
> memory allocation, which prevents its use in atomic context. To address
> this limitation of kvm_mmu_topup_memory_cache(), we add a gfp_custom flag
> in struct kvm_mmu_memory_cache. When the gfp_custom flag is set to some
> GFP_xyz flags, kvm_mmu_topup_memory_cache() will use that instead of
> GFP_KERNEL_ACCOUNT.
>
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> ---
>  include/linux/kvm_types.h | 1 +
>  virt/kvm/kvm_main.c       | 4 +++-
>  2 files changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
> index ac1ebb37a0ff..1dcfba68076a 100644
> --- a/include/linux/kvm_types.h
> +++ b/include/linux/kvm_types.h
> @@ -87,6 +87,7 @@ struct gfn_to_pfn_cache {
>  struct kvm_mmu_memory_cache {
>         int nobjs;
>         gfp_t gfp_zero;
> +       gfp_t gfp_custom;
>         struct kmem_cache *kmem_cache;
>         void *objects[KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE];
>  };
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index a49df8988cd6..e3a6f7647474 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -386,7 +386,9 @@ int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min)
>         if (mc->nobjs >= min)
>                 return 0;
>         while (mc->nobjs < ARRAY_SIZE(mc->objects)) {
> -               obj = mmu_memory_cache_alloc_obj(mc, GFP_KERNEL_ACCOUNT);
> +               obj = mmu_memory_cache_alloc_obj(mc, (mc->gfp_custom) ?
> +                                                mc->gfp_custom :
> +                                                GFP_KERNEL_ACCOUNT);
>                 if (!obj)
>                         return mc->nobjs >= min ? 0 : -ENOMEM;
>                 mc->objects[mc->nobjs++] = obj;
> --
> 2.34.1
>

Acked-by: Atish Patra <atishp@rivosinc.com>

-- 
Regards,
Atish

^ permalink raw reply	[flat|nested] 30+ messages in thread
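
To make the new field concrete, here is a minimal sketch of an arch-side
caller that opts into atomic allocations; the function and the object count
are illustrative, but the pattern mirrors what patch 3/5 does in
kvm_riscv_gstage_ioremap():

/*
 * Sketch: top up a cache from a context that must not sleep. When
 * gfp_custom is left at 0, kvm_mmu_topup_memory_cache() keeps using
 * GFP_KERNEL_ACCOUNT as before.
 */
static int example_atomic_topup(void)
{
        struct kvm_mmu_memory_cache pcache;
        int ret;

        memset(&pcache, 0, sizeof(pcache));
        pcache.gfp_custom = GFP_ATOMIC | __GFP_ACCOUNT;
        pcache.gfp_zero = __GFP_ZERO;

        /* Pre-allocate enough objects for a worst-case page-table walk. */
        ret = kvm_mmu_topup_memory_cache(&pcache, 4 /* illustrative */);
        if (ret)
                return ret;

        /* ... objects are then taken via kvm_mmu_memory_cache_alloc() ... */
        kvm_mmu_free_memory_cache(&pcache);
        return 0;
}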

* Re: [PATCH 1/5] KVM: Add gfp_custom flag in struct kvm_mmu_memory_cache
  2022-07-13  1:27     ` Atish Patra
@ 2022-07-18  4:04       ` Anup Patel
  -1 siblings, 0 replies; 30+ messages in thread
From: Anup Patel @ 2022-07-18  4:04 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Anup Patel, Palmer Dabbelt, Paul Walmsley, Alistair Francis,
	KVM General,
	open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv),
	linux-riscv, linux-kernel@vger.kernel.org List, Atish Patra

Hi Paolo,

On Wed, Jul 13, 2022 at 6:57 AM Atish Patra <atishp@atishpatra.org> wrote:
>
> On Thu, Jul 7, 2022 at 7:53 AM Anup Patel <apatel@ventanamicro.com> wrote:
> >
> > The kvm_mmu_topup_memory_cache() always uses GFP_KERNEL_ACCOUNT for
> > memory allocation, which prevents its use in atomic context. To address
> > this limitation of kvm_mmu_topup_memory_cache(), we add a gfp_custom flag
> > in struct kvm_mmu_memory_cache. When the gfp_custom flag is set to some
> > GFP_xyz flags, kvm_mmu_topup_memory_cache() will use that instead of
> > GFP_KERNEL_ACCOUNT.
> >
> > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > ---
> >  include/linux/kvm_types.h | 1 +
> >  virt/kvm/kvm_main.c       | 4 +++-
> >  2 files changed, 4 insertions(+), 1 deletion(-)
> >
> > diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
> > index ac1ebb37a0ff..1dcfba68076a 100644
> > --- a/include/linux/kvm_types.h
> > +++ b/include/linux/kvm_types.h
> > @@ -87,6 +87,7 @@ struct gfn_to_pfn_cache {
> >  struct kvm_mmu_memory_cache {
> >         int nobjs;
> >         gfp_t gfp_zero;
> > +       gfp_t gfp_custom;
> >         struct kmem_cache *kmem_cache;
> >         void *objects[KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE];
> >  };
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index a49df8988cd6..e3a6f7647474 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -386,7 +386,9 @@ int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min)
> >         if (mc->nobjs >= min)
> >                 return 0;
> >         while (mc->nobjs < ARRAY_SIZE(mc->objects)) {
> > -               obj = mmu_memory_cache_alloc_obj(mc, GFP_KERNEL_ACCOUNT);
> > +               obj = mmu_memory_cache_alloc_obj(mc, (mc->gfp_custom) ?
> > +                                                mc->gfp_custom :
> > +                                                GFP_KERNEL_ACCOUNT);
> >                 if (!obj)
> >                         return mc->nobjs >= min ? 0 : -ENOMEM;
> >                 mc->objects[mc->nobjs++] = obj;
> > --
> > 2.34.1
> >
>
> Acked-by: Atish Patra <atishp@rivosinc.com>

I have queued this patch for 5.20.

Please let me know if you are not okay or need changes in this patch.

Thanks,
Anup

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 3/5] RISC-V: KVM: Add G-stage ioremap() and iounmap() functions
  2022-07-13  1:26     ` Atish Patra
@ 2022-07-18  4:06       ` Anup Patel
  -1 siblings, 0 replies; 30+ messages in thread
From: Anup Patel @ 2022-07-18  4:06 UTC (permalink / raw)
  To: Atish Patra
  Cc: Anup Patel, Paolo Bonzini, Palmer Dabbelt, Paul Walmsley,
	Alistair Francis, KVM General,
	open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv),
	linux-riscv, linux-kernel@vger.kernel.org List

On Wed, Jul 13, 2022 at 6:56 AM Atish Patra <atishp@atishpatra.org> wrote:
>
> On Thu, Jul 7, 2022 at 7:53 AM Anup Patel <apatel@ventanamicro.com> wrote:
> >
> > The in-kernel AIA IMSIC support requires on-demand mapping / unmapping
> > of Guest IMSIC addresses to Host IMSIC guest files. To help achieve this,
> > we add the kvm_riscv_gstage_ioremap() and kvm_riscv_gstage_iounmap()
> > functions. These new functions for updating G-stage page table mappings
> > will be called in atomic context, so we add a special "in_atomic"
> > parameter for this purpose.
> >
> > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > ---
> >  arch/riscv/include/asm/kvm_host.h |  5 +++++
> >  arch/riscv/kvm/mmu.c              | 18 ++++++++++++++----
> >  2 files changed, 19 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
> > index 59a0cf2ca7b9..60c517e4d576 100644
> > --- a/arch/riscv/include/asm/kvm_host.h
> > +++ b/arch/riscv/include/asm/kvm_host.h
> > @@ -284,6 +284,11 @@ void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
> >  void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
> >                                unsigned long hbase, unsigned long hmask);
> >
> > +int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
> > +                            phys_addr_t hpa, unsigned long size,
> > +                            bool writable, bool in_atomic);
> > +void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa,
> > +                             unsigned long size);
> >  int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
> >                          struct kvm_memory_slot *memslot,
> >                          gpa_t gpa, unsigned long hva, bool is_write);
> > diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
> > index b75d4e200064..f7862ca4c4c6 100644
> > --- a/arch/riscv/kvm/mmu.c
> > +++ b/arch/riscv/kvm/mmu.c
> > @@ -343,8 +343,9 @@ static void gstage_wp_memory_region(struct kvm *kvm, int slot)
> >         kvm_flush_remote_tlbs(kvm);
> >  }
> >
> > -static int gstage_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa,
> > -                         unsigned long size, bool writable)
> > +int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
> > +                            phys_addr_t hpa, unsigned long size,
> > +                            bool writable, bool in_atomic)
> >  {
> >         pte_t pte;
> >         int ret = 0;
> > @@ -353,6 +354,7 @@ static int gstage_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa,
> >         struct kvm_mmu_memory_cache pcache;
> >
> >         memset(&pcache, 0, sizeof(pcache));
> > +       pcache.gfp_custom = (in_atomic) ? GFP_ATOMIC | __GFP_ACCOUNT : 0;
> >         pcache.gfp_zero = __GFP_ZERO;
> >
> >         end = (gpa + size + PAGE_SIZE - 1) & PAGE_MASK;
> > @@ -382,6 +384,13 @@ static int gstage_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa,
> >         return ret;
> >  }
> >
> > +void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa, unsigned long size)
> > +{
> > +       spin_lock(&kvm->mmu_lock);
> > +       gstage_unmap_range(kvm, gpa, size, false);
> > +       spin_unlock(&kvm->mmu_lock);
> > +}
> > +
> >  void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
> >                                              struct kvm_memory_slot *slot,
> >                                              gfn_t gfn_offset,
> > @@ -517,8 +526,9 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
> >                                 goto out;
> >                         }
> >
> > -                       ret = gstage_ioremap(kvm, gpa, pa,
> > -                                            vm_end - vm_start, writable);
> > +                       ret = kvm_riscv_gstage_ioremap(kvm, gpa, pa,
> > +                                                      vm_end - vm_start,
> > +                                                      writable, false);
> >                         if (ret)
> >                                 break;
> >                 }
> > --
> > 2.34.1
> >
>
> Reviewed-by: Atish Patra <atishp@rivosinc.com>

Queued this patch for 5.20.

Thanks,
Anup

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 4/5] RISC-V: KVM: Use PAGE_KERNEL_IO in kvm_riscv_gstage_ioremap()
  2022-07-13  1:23     ` Atish Patra
@ 2022-07-18  4:07       ` Anup Patel
  -1 siblings, 0 replies; 30+ messages in thread
From: Anup Patel @ 2022-07-18  4:07 UTC (permalink / raw)
  To: Atish Patra
  Cc: Anup Patel, Paolo Bonzini, Palmer Dabbelt, Paul Walmsley,
	Alistair Francis, KVM General,
	open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv),
	linux-riscv, linux-kernel@vger.kernel.org List

On Wed, Jul 13, 2022 at 6:53 AM Atish Patra <atishp@atishpatra.org> wrote:
>
> On Thu, Jul 7, 2022 at 7:53 AM Anup Patel <apatel@ventanamicro.com> wrote:
> >
> > When the host has the Svpbmt extension, we should use page-based memory
> > type 2 (i.e. IO) for IO mappings in the G-stage page table.
> >
> > To achieve this, we replace the use of PAGE_KERNEL with PAGE_KERNEL_IO
> > in kvm_riscv_gstage_ioremap().
> >
> > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > ---
> >  arch/riscv/kvm/mmu.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
> > index f7862ca4c4c6..bc545aef6034 100644
> > --- a/arch/riscv/kvm/mmu.c
> > +++ b/arch/riscv/kvm/mmu.c
> > @@ -361,7 +361,7 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
> >         pfn = __phys_to_pfn(hpa);
> >
> >         for (addr = gpa; addr < end; addr += PAGE_SIZE) {
> > -               pte = pfn_pte(pfn, PAGE_KERNEL);
> > +               pte = pfn_pte(pfn, PAGE_KERNEL_IO);
> >
> >                 if (!writable)
> >                         pte = pte_wrprotect(pte);
> > --
> > 2.34.1
> >
>
> LGTM.
>
> Reviewed-by: Atish Patra <atishp@rivosinc.com>

Queued this patch for 5.20.

Thanks,
Anup

^ permalink raw reply	[flat|nested] 30+ messages in thread
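
For readers who have not looked at the riscv pgtable headers: the whole
change boils down to picking a different pgprot for the G-stage leaf PTEs.
A minimal sketch of the intent, assuming the standard Svpbmt encoding in
PTE bits 62:61 (PMA = 0, NC = 1, IO = 2):

/*
 * Sketch: PAGE_KERNEL keeps the default PMA memory type (cacheable RAM),
 * while PAGE_KERNEL_IO selects the IO memory type (non-cacheable,
 * strongly-ordered device access).
 */
static pte_t example_gstage_pte(unsigned long pfn, bool is_io_mapping)
{
        return pfn_pte(pfn, is_io_mapping ? PAGE_KERNEL_IO : PAGE_KERNEL);
}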

* Re: [PATCH 5/5] RISC-V: KVM: Add support for Svpbmt inside Guest/VM
  2022-07-13  1:23     ` Atish Patra
@ 2022-07-18  4:07       ` Anup Patel
  -1 siblings, 0 replies; 30+ messages in thread
From: Anup Patel @ 2022-07-18  4:07 UTC (permalink / raw)
  To: Atish Patra
  Cc: Anup Patel, Paolo Bonzini, Palmer Dabbelt, Paul Walmsley,
	Alistair Francis, KVM General,
	open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv),
	linux-riscv, linux-kernel@vger.kernel.org List

On Wed, Jul 13, 2022 at 6:54 AM Atish Patra <atishp@atishpatra.org> wrote:
>
> On Thu, Jul 7, 2022 at 7:53 AM Anup Patel <apatel@ventanamicro.com> wrote:
> >
> > The Guest/VM can use Svpbmt in VS-stage page tables when allowed by the
> > Hypervisor using the henvcfg.PBMTE bit.
> >
> > We add Svpbmt support for the KVM Guest/VM, which can be enabled/disabled
> > by the KVM user-space (QEMU/KVMTOOL) using the ISA extension ONE_REG
> > interface.
> >
> > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > ---
> >  arch/riscv/include/asm/csr.h      | 16 ++++++++++++++++
> >  arch/riscv/include/uapi/asm/kvm.h |  1 +
> >  arch/riscv/kvm/vcpu.c             | 16 ++++++++++++++++
> >  3 files changed, 33 insertions(+)
> >
> > diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
> > index 6d85655e7edf..17516afc389a 100644
> > --- a/arch/riscv/include/asm/csr.h
> > +++ b/arch/riscv/include/asm/csr.h
> > @@ -156,6 +156,18 @@
> >                                  (_AC(1, UL) << IRQ_S_TIMER) | \
> >                                  (_AC(1, UL) << IRQ_S_EXT))
> >
> > +/* xENVCFG flags */
> > +#define ENVCFG_STCE                    (_AC(1, ULL) << 63)
> > +#define ENVCFG_PBMTE                   (_AC(1, ULL) << 62)
> > +#define ENVCFG_CBZE                    (_AC(1, UL) << 7)
> > +#define ENVCFG_CBCFE                   (_AC(1, UL) << 6)
> > +#define ENVCFG_CBIE_SHIFT              4
> > +#define ENVCFG_CBIE                    (_AC(0x3, UL) << ENVCFG_CBIE_SHIFT)
> > +#define ENVCFG_CBIE_ILL                        _AC(0x0, UL)
> > +#define ENVCFG_CBIE_FLUSH              _AC(0x1, UL)
> > +#define ENVCFG_CBIE_INV                        _AC(0x3, UL)
> > +#define ENVCFG_FIOM                    _AC(0x1, UL)
> > +
> >  /* symbolic CSR names: */
> >  #define CSR_CYCLE              0xc00
> >  #define CSR_TIME               0xc01
> > @@ -252,7 +264,9 @@
> >  #define CSR_HTIMEDELTA         0x605
> >  #define CSR_HCOUNTEREN         0x606
> >  #define CSR_HGEIE              0x607
> > +#define CSR_HENVCFG            0x60a
> >  #define CSR_HTIMEDELTAH                0x615
> > +#define CSR_HENVCFGH           0x61a
> >  #define CSR_HTVAL              0x643
> >  #define CSR_HIP                        0x644
> >  #define CSR_HVIP               0x645
> > @@ -264,6 +278,8 @@
> >  #define CSR_MISA               0x301
> >  #define CSR_MIE                        0x304
> >  #define CSR_MTVEC              0x305
> > +#define CSR_MENVCFG            0x30a
> > +#define CSR_MENVCFGH           0x31a
> >  #define CSR_MSCRATCH           0x340
> >  #define CSR_MEPC               0x341
> >  #define CSR_MCAUSE             0x342
> > diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
> > index 6119368ba6d5..24b2a6e27698 100644
> > --- a/arch/riscv/include/uapi/asm/kvm.h
> > +++ b/arch/riscv/include/uapi/asm/kvm.h
> > @@ -96,6 +96,7 @@ enum KVM_RISCV_ISA_EXT_ID {
> >         KVM_RISCV_ISA_EXT_H,
> >         KVM_RISCV_ISA_EXT_I,
> >         KVM_RISCV_ISA_EXT_M,
> > +       KVM_RISCV_ISA_EXT_SVPBMT,
> >         KVM_RISCV_ISA_EXT_MAX,
> >  };
> >
> > diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
> > index 6dd9cf729614..b7a433c54d0f 100644
> > --- a/arch/riscv/kvm/vcpu.c
> > +++ b/arch/riscv/kvm/vcpu.c
> > @@ -51,6 +51,7 @@ static const unsigned long kvm_isa_ext_arr[] = {
> >         RISCV_ISA_EXT_h,
> >         RISCV_ISA_EXT_i,
> >         RISCV_ISA_EXT_m,
> > +       RISCV_ISA_EXT_SVPBMT,
> >  };
> >
> >  static unsigned long kvm_riscv_vcpu_base2isa_ext(unsigned long base_ext)
> > @@ -777,6 +778,19 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
> >         return -EINVAL;
> >  }
> >
> > +static void kvm_riscv_vcpu_update_config(const unsigned long *isa)
> > +{
> > +       u64 henvcfg = 0;
> > +
> > +       if (__riscv_isa_extension_available(isa, RISCV_ISA_EXT_SVPBMT))
> > +               henvcfg |= ENVCFG_PBMTE;
> > +
> > +       csr_write(CSR_HENVCFG, henvcfg);
> > +#ifdef CONFIG_32BIT
> > +       csr_write(CSR_HENVCFGH, henvcfg >> 32);
> > +#endif
> > +}
> > +
> >  void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> >  {
> >         struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
> > @@ -791,6 +805,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> >         csr_write(CSR_HVIP, csr->hvip);
> >         csr_write(CSR_VSATP, csr->vsatp);
> >
> > +       kvm_riscv_vcpu_update_config(vcpu->arch.isa);
> > +
> >         kvm_riscv_gstage_update_hgatp(vcpu);
> >
> >         kvm_riscv_vcpu_timer_restore(vcpu);
> > --
> > 2.34.1
> >
>
> LGTM.
>
>
> Reviewed-by: Atish Patra <atishp@rivosinc.com>
>

Queued this patch for 5.20.

Thanks,
Anup

^ permalink raw reply	[flat|nested] 30+ messages in thread
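
For completeness, a rough user-space sketch (KVMTOOL/QEMU style) of turning
the extension on through the ONE_REG interface. It assumes a 64-bit host and
the existing KVM_REG_RISCV_ISA_EXT register encoding; vcpu_fd is whatever
KVM_CREATE_VCPU returned, and error handling is left out.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch: enable Svpbmt for one vCPU via KVM_SET_ONE_REG. */
static int enable_guest_svpbmt(int vcpu_fd)
{
        uint64_t enable = 1;
        struct kvm_one_reg reg = {
                .id   = KVM_REG_RISCV | KVM_REG_SIZE_U64 |
                        KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_SVPBMT,
                .addr = (unsigned long)&enable,
        };

        /* Expected to fail if the host itself does not implement Svpbmt. */
        return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}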

end of thread, other threads:[~2022-07-18  4:07 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-07-07 14:52 [PATCH 0/5] KVM RISC-V Svpbmt support Anup Patel
2022-07-07 14:52 ` Anup Patel
2022-07-07 14:52 ` [PATCH 1/5] KVM: Add gfp_custom flag in struct kvm_mmu_memory_cache Anup Patel
2022-07-07 14:52   ` Anup Patel
2022-07-13  1:27   ` Atish Patra
2022-07-13  1:27     ` Atish Patra
2022-07-18  4:04     ` Anup Patel
2022-07-18  4:04       ` Anup Patel
2022-07-07 14:52 ` [PATCH 2/5] riscv: Fix missing PAGE_PFN_MASK Anup Patel
2022-07-07 14:52   ` Anup Patel
2022-07-11  4:17   ` Anup Patel
2022-07-11  4:17     ` Anup Patel
2022-07-07 14:52 ` [PATCH 3/5] RISC-V: KVM: Add G-stage ioremap() and iounmap() functions Anup Patel
2022-07-07 14:52   ` Anup Patel
2022-07-13  1:26   ` Atish Patra
2022-07-13  1:26     ` Atish Patra
2022-07-18  4:06     ` Anup Patel
2022-07-18  4:06       ` Anup Patel
2022-07-07 14:52 ` [PATCH 4/5] RISC-V: KVM: Use PAGE_KERNEL_IO in kvm_riscv_gstage_ioremap() Anup Patel
2022-07-07 14:52   ` Anup Patel
2022-07-13  1:23   ` Atish Patra
2022-07-13  1:23     ` Atish Patra
2022-07-18  4:07     ` Anup Patel
2022-07-18  4:07       ` Anup Patel
2022-07-07 14:52 ` [PATCH 5/5] RISC-V: KVM: Add support for Svpbmt inside Guest/VM Anup Patel
2022-07-07 14:52   ` Anup Patel
2022-07-13  1:23   ` Atish Patra
2022-07-13  1:23     ` Atish Patra
2022-07-18  4:07     ` Anup Patel
2022-07-18  4:07       ` Anup Patel
