* [PATCH v6 0/4] Fix some PMP implementations
@ 2020-07-28  8:26 Zong Li
  2020-07-28  8:26   ` Zong Li
                   ` (3 more replies)
  0 siblings, 4 replies; 16+ messages in thread
From: Zong Li @ 2020-07-28  8:26 UTC (permalink / raw)
  To: palmer, Alistair.Francis, bmeng.cn, sagark, kbastian, qemu-riscv,
	qemu-devel
  Cc: Zong Li

This patch set contains fixes for the wrong index of the pmpcfg CSR on
RV64 and for the PMP range in the CSR function table. Since the third
version of this series, it also fixes further PMP issues: wrong physical
address translation and PMP checks being wrongly skipped.

Changed in v6:
 - Mask the low 12 bits of the return value of riscv_cpu_get_phys_page_debug.
   Suggested by Alistair Francis.

Changed in v5:
 - Pick up the suggestion that was lost in the last version.

Changed in v4:
 - Refine the implementation. Suggested by Bin Meng.
 - Add a fix for PMP checking being ignored.

Changed in v3:
 - Refine the implementation. Suggested by Bin Meng.
 - Add fix for wrong physical address translation.

Changed in v2:
 - Move out the shifting operation from loop. Suggested by Bin Meng.

Zong Li (4):
  target/riscv: Fix the range of pmpcfg of CSR function table
  target/riscv/pmp.c: Fix the index offset on RV64
  target/riscv: Fix the translation of physical address
  target/riscv: Change the TLB page size depends on PMP entries.

 target/riscv/cpu_helper.c | 15 +++++++---
 target/riscv/csr.c        |  2 +-
 target/riscv/pmp.c        | 63 ++++++++++++++++++++++++++++++++++++++-
 target/riscv/pmp.h        |  2 ++
 4 files changed, 76 insertions(+), 6 deletions(-)

-- 
2.27.0



^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v6 1/4] target/riscv: Fix the range of pmpcfg of CSR function table
  2020-07-28  8:26 [PATCH v6 0/4] Fix some PMP implementations Zong Li
@ 2020-07-28  8:26   ` Zong Li
  2020-07-28  8:26   ` Zong Li
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 16+ messages in thread
From: Zong Li @ 2020-07-28  8:26 UTC (permalink / raw)
  To: palmer, Alistair.Francis, bmeng.cn, sagark, kbastian, qemu-riscv,
	qemu-devel
  Cc: Bin Meng, Alistair Francis, Zong Li

The range of Physical Memory Protection should be from CSR_PMPCFG0
to CSR_PMPCFG3, not to CSR_PMPADDR9.

Signed-off-by: Zong Li <zong.li@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bin.meng@windriver.com>
---
 target/riscv/csr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index ac01c835e1..6a96a01b1c 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -1353,7 +1353,7 @@ static riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
     [CSR_MTINST] =              { hmode,   read_mtinst,      write_mtinst     },
 
     /* Physical Memory Protection */
-    [CSR_PMPCFG0  ... CSR_PMPADDR9] =  { pmp,   read_pmpcfg,  write_pmpcfg   },
+    [CSR_PMPCFG0  ... CSR_PMPCFG3]   = { pmp,   read_pmpcfg,  write_pmpcfg   },
     [CSR_PMPADDR0 ... CSR_PMPADDR15] = { pmp,   read_pmpaddr, write_pmpaddr  },
 
     /* Performance Counters */
-- 
2.27.0





* [PATCH v6 2/4] target/riscv/pmp.c: Fix the index offset on RV64
  2020-07-28  8:26 [PATCH v6 0/4] Fix some PMP implementations Zong Li
@ 2020-07-28  8:26   ` Zong Li
  2020-07-28  8:26   ` Zong Li
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 16+ messages in thread
From: Zong Li @ 2020-07-28  8:26 UTC (permalink / raw)
  To: palmer, Alistair.Francis, bmeng.cn, sagark, kbastian, qemu-riscv,
	qemu-devel
  Cc: Bin Meng, Alistair Francis, Zong Li

On RV64, reg_index is 2 (the pmpcfg2 CSR) for the upper eight PMP
entries, not 1 (the pmpcfg1 CSR) as on RV32. In the original
implementation, the second parameter of pmp_write_cfg is
"reg_index * sizeof(target_ulong)", so the config index starts at 16
when reg_index is 2, but it should start at 8. Separate the
implementations for RV32 and RV64.

Signed-off-by: Zong Li <zong.li@sifive.com>
Reviewed-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/pmp.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index 2a2b9f5363..aeba796484 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -318,6 +318,10 @@ void pmpcfg_csr_write(CPURISCVState *env, uint32_t reg_index,
         return;
     }
 
+#if defined(TARGET_RISCV64)
+    reg_index >>= 1;
+#endif
+
     for (i = 0; i < sizeof(target_ulong); i++) {
         cfg_val = (val >> 8 * i)  & 0xff;
         pmp_write_cfg(env, (reg_index * sizeof(target_ulong)) + i,
@@ -335,11 +339,16 @@ target_ulong pmpcfg_csr_read(CPURISCVState *env, uint32_t reg_index)
     target_ulong cfg_val = 0;
     target_ulong val = 0;
 
+    trace_pmpcfg_csr_read(env->mhartid, reg_index, cfg_val);
+
+#if defined(TARGET_RISCV64)
+    reg_index >>= 1;
+#endif
+
     for (i = 0; i < sizeof(target_ulong); i++) {
         val = pmp_read_cfg(env, (reg_index * sizeof(target_ulong)) + i);
         cfg_val |= (val << (i * 8));
     }
-    trace_pmpcfg_csr_read(env->mhartid, reg_index, cfg_val);
 
     return cfg_val;
 }
-- 
2.27.0





* [PATCH v6 3/4] target/riscv: Fix the translation of physical address
  2020-07-28  8:26 [PATCH v6 0/4] Fix some PMP implementations Zong Li
  2020-07-28  8:26   ` Zong Li
  2020-07-28  8:26   ` Zong Li
@ 2020-07-28  8:26 ` Zong Li
  2020-07-28 23:55     ` Alistair Francis
  2020-07-28  8:26 ` [PATCH v6 4/4] target/riscv: Change the TLB page size depends on PMP entries Zong Li
  3 siblings, 1 reply; 16+ messages in thread
From: Zong Li @ 2020-07-28  8:26 UTC (permalink / raw)
  To: palmer, Alistair.Francis, bmeng.cn, sagark, kbastian, qemu-riscv,
	qemu-devel
  Cc: Zong Li

The real physical address should include the 12-bit page offset. The
missing offset also breaks PMP checking: the minimum granularity of PMP
is 4 bytes, but the translated physical address was always 4KB-aligned,
which means the start address of the page was used to check PMP for
every address within that page.

Signed-off-by: Zong Li <zong.li@sifive.com>
---
 target/riscv/cpu_helper.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index 75d2ae3434..2f337e418c 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -543,7 +543,8 @@ restart:
             /* for superpage mappings, make a fake leaf PTE for the TLB's
                benefit. */
             target_ulong vpn = addr >> PGSHIFT;
-            *physical = (ppn | (vpn & ((1L << ptshift) - 1))) << PGSHIFT;
+            *physical = ((ppn | (vpn & ((1L << ptshift) - 1))) << PGSHIFT) |
+                        (addr & ~TARGET_PAGE_MASK);
 
             /* set permissions on the TLB entry */
             if ((pte & PTE_R) || ((pte & PTE_X) && mxr)) {
@@ -630,7 +631,7 @@ hwaddr riscv_cpu_get_phys_page_debug(CPUState *cs, vaddr addr)
         }
     }
 
-    return phys_addr;
+    return phys_addr & TARGET_PAGE_MASK;
 }
 
 void riscv_cpu_do_transaction_failed(CPUState *cs, hwaddr physaddr,
-- 
2.27.0




* [PATCH v6 4/4] target/riscv: Change the TLB page size depends on PMP entries.
  2020-07-28  8:26 [PATCH v6 0/4] Fix some PMP implementations Zong Li
                   ` (2 preceding siblings ...)
  2020-07-28  8:26 ` [PATCH v6 3/4] target/riscv: Fix the translation of physical address Zong Li
@ 2020-07-28  8:26 ` Zong Li
  2020-08-05  2:43   ` Zong Li
  2020-08-12 15:11     ` Alistair Francis
  3 siblings, 2 replies; 16+ messages in thread
From: Zong Li @ 2020-07-28  8:26 UTC (permalink / raw)
  To: palmer, Alistair.Francis, bmeng.cn, sagark, kbastian, qemu-riscv,
	qemu-devel
  Cc: Zong Li

The minimum granularity of PMP is 4 bytes, which is smaller than the 4KB
page size; as a result, PMP checking can be wrongly skipped when a PMP
range does not start at a page boundary. This patch inspects the PMP
entries and installs a smaller TLB page size when a PMP entry overlaps
the page.

Signed-off-by: Zong Li <zong.li@sifive.com>
---
 target/riscv/cpu_helper.c | 10 ++++++--
 target/riscv/pmp.c        | 52 +++++++++++++++++++++++++++++++++++++++
 target/riscv/pmp.h        |  2 ++
 3 files changed, 62 insertions(+), 2 deletions(-)

diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index 2f337e418c..fd1d373b6f 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -693,6 +693,7 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     bool first_stage_error = true;
     int ret = TRANSLATE_FAIL;
     int mode = mmu_idx;
+    target_ulong tlb_size = 0;
 
     env->guest_phys_fault_addr = 0;
 
@@ -784,8 +785,13 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     }
 
     if (ret == TRANSLATE_SUCCESS) {
-        tlb_set_page(cs, address & TARGET_PAGE_MASK, pa & TARGET_PAGE_MASK,
-                     prot, mmu_idx, TARGET_PAGE_SIZE);
+        if (pmp_is_range_in_tlb(env, pa & TARGET_PAGE_MASK, &tlb_size)) {
+            tlb_set_page(cs, address & ~(tlb_size - 1), pa & ~(tlb_size - 1),
+                         prot, mmu_idx, tlb_size);
+        } else {
+            tlb_set_page(cs, address & TARGET_PAGE_MASK, pa & TARGET_PAGE_MASK,
+                         prot, mmu_idx, TARGET_PAGE_SIZE);
+        }
         return true;
     } else if (probe) {
         return false;
diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index aeba796484..adadf6e9ba 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -393,3 +393,55 @@ target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index)
 
     return val;
 }
+
+/*
+ * Calculate the TLB size if the start address or the end address of
+ * PMP entry is presented in thie TLB page.
+ */
+static target_ulong pmp_get_tlb_size(CPURISCVState *env, int pmp_index,
+    target_ulong tlb_sa, target_ulong tlb_ea)
+{
+    target_ulong pmp_sa = env->pmp_state.addr[pmp_index].sa;
+    target_ulong pmp_ea = env->pmp_state.addr[pmp_index].ea;
+
+    if (pmp_sa >= tlb_sa && pmp_ea <= tlb_ea) {
+        return pmp_ea - pmp_sa + 1;
+    }
+
+    if (pmp_sa >= tlb_sa && pmp_sa <= tlb_ea && pmp_ea >= tlb_ea) {
+        return tlb_ea - pmp_sa + 1;
+    }
+
+    if (pmp_ea <= tlb_ea && pmp_ea >= tlb_sa && pmp_sa <= tlb_sa) {
+        return pmp_ea - tlb_sa + 1;
+    }
+
+    return 0;
+}
+
+/*
+ * Check is there a PMP entry whcih range covers this page. If so,
+ * try to find the minimum granularity for the TLB size.
+ */
+bool pmp_is_range_in_tlb(CPURISCVState *env, hwaddr tlb_sa,
+    target_ulong *tlb_size)
+{
+    int i;
+    target_ulong val;
+    target_ulong tlb_ea = (tlb_sa + TARGET_PAGE_SIZE - 1);
+
+    for (i = 0; i < MAX_RISCV_PMPS; i++) {
+        val = pmp_get_tlb_size(env, i, tlb_sa, tlb_ea);
+        if (val) {
+            if (*tlb_size == 0 || *tlb_size > val) {
+                *tlb_size = val;
+            }
+        }
+    }
+
+    if (*tlb_size != 0) {
+        return true;
+    }
+
+    return false;
+}
diff --git a/target/riscv/pmp.h b/target/riscv/pmp.h
index 8e19793132..c70f2ea4c4 100644
--- a/target/riscv/pmp.h
+++ b/target/riscv/pmp.h
@@ -60,5 +60,7 @@ void pmpaddr_csr_write(CPURISCVState *env, uint32_t addr_index,
 target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index);
 bool pmp_hart_has_privs(CPURISCVState *env, target_ulong addr,
     target_ulong size, pmp_priv_t priv, target_ulong mode);
+bool pmp_is_range_in_tlb(CPURISCVState *env, hwaddr tlb_sa,
+    target_ulong *tlb_size);
 
 #endif
-- 
2.27.0




* Re: [PATCH v6 3/4] target/riscv: Fix the translation of physical address
  2020-07-28  8:26 ` [PATCH v6 3/4] target/riscv: Fix the translation of physical address Zong Li
@ 2020-07-28 23:55     ` Alistair Francis
  0 siblings, 0 replies; 16+ messages in thread
From: Alistair Francis @ 2020-07-28 23:55 UTC (permalink / raw)
  To: Zong Li
  Cc: open list:RISC-V, Sagar Karandikar, Bastian Koppelmann,
	qemu-devel@nongnu.org Developers, Alistair Francis,
	Palmer Dabbelt, Bin Meng

On Tue, Jul 28, 2020 at 1:26 AM Zong Li <zong.li@sifive.com> wrote:
>
> The real physical address should add the 12 bits page offset. It also
> causes the PMP wrong checking due to the minimum granularity of PMP is
> 4 byte, but we always get the physical address which is 4KB alignment,
> that means, we always use the start address of the page to check PMP for
> all addresses which in the same page.
>
> Signed-off-by: Zong Li <zong.li@sifive.com>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  target/riscv/cpu_helper.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
> index 75d2ae3434..2f337e418c 100644
> --- a/target/riscv/cpu_helper.c
> +++ b/target/riscv/cpu_helper.c
> @@ -543,7 +543,8 @@ restart:
>              /* for superpage mappings, make a fake leaf PTE for the TLB's
>                 benefit. */
>              target_ulong vpn = addr >> PGSHIFT;
> -            *physical = (ppn | (vpn & ((1L << ptshift) - 1))) << PGSHIFT;
> +            *physical = ((ppn | (vpn & ((1L << ptshift) - 1))) << PGSHIFT) |
> +                        (addr & ~TARGET_PAGE_MASK);
>
>              /* set permissions on the TLB entry */
>              if ((pte & PTE_R) || ((pte & PTE_X) && mxr)) {
> @@ -630,7 +631,7 @@ hwaddr riscv_cpu_get_phys_page_debug(CPUState *cs, vaddr addr)
>          }
>      }
>
> -    return phys_addr;
> +    return phys_addr & TARGET_PAGE_MASK;
>  }
>
>  void riscv_cpu_do_transaction_failed(CPUState *cs, hwaddr physaddr,
> --
> 2.27.0
>
>




* Re: [PATCH v6 4/4] target/riscv: Change the TLB page size depends on PMP entries.
  2020-07-28  8:26 ` [PATCH v6 4/4] target/riscv: Change the TLB page size depends on PMP entries Zong Li
@ 2020-08-05  2:43   ` Zong Li
  2020-08-12 15:11     ` Alistair Francis
  1 sibling, 0 replies; 16+ messages in thread
From: Zong Li @ 2020-08-05  2:43 UTC (permalink / raw)
  To: Palmer Dabbelt, Alistair Francis, Bin Meng, Sagar Karandikar,
	Bastian Koppelmann, open list:RISC-V,
	qemu-devel@nongnu.org Developers

On Tue, Jul 28, 2020 at 4:26 PM Zong Li <zong.li@sifive.com> wrote:
>
> The minimum granularity of PMP is 4 bytes, it is small than 4KB page
> size, therefore, the pmp checking would be ignored if its range doesn't
> start from the alignment of one page. This patch detects the pmp entries
> and sets the small page size to TLB if there is a PMP entry which cover
> the page size.
>
> Signed-off-by: Zong Li <zong.li@sifive.com>
> ---
>  target/riscv/cpu_helper.c | 10 ++++++--
>  target/riscv/pmp.c        | 52 +++++++++++++++++++++++++++++++++++++++
>  target/riscv/pmp.h        |  2 ++
>  3 files changed, 62 insertions(+), 2 deletions(-)
>
> diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
> index 2f337e418c..fd1d373b6f 100644
> --- a/target/riscv/cpu_helper.c
> +++ b/target/riscv/cpu_helper.c
> @@ -693,6 +693,7 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
>      bool first_stage_error = true;
>      int ret = TRANSLATE_FAIL;
>      int mode = mmu_idx;
> +    target_ulong tlb_size = 0;
>
>      env->guest_phys_fault_addr = 0;
>
> @@ -784,8 +785,13 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
>      }
>
>      if (ret == TRANSLATE_SUCCESS) {
> -        tlb_set_page(cs, address & TARGET_PAGE_MASK, pa & TARGET_PAGE_MASK,
> -                     prot, mmu_idx, TARGET_PAGE_SIZE);
> +        if (pmp_is_range_in_tlb(env, pa & TARGET_PAGE_MASK, &tlb_size)) {
> +            tlb_set_page(cs, address & ~(tlb_size - 1), pa & ~(tlb_size - 1),
> +                         prot, mmu_idx, tlb_size);
> +        } else {
> +            tlb_set_page(cs, address & TARGET_PAGE_MASK, pa & TARGET_PAGE_MASK,
> +                         prot, mmu_idx, TARGET_PAGE_SIZE);
> +        }
>          return true;
>      } else if (probe) {
>          return false;
> diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
> index aeba796484..adadf6e9ba 100644
> --- a/target/riscv/pmp.c
> +++ b/target/riscv/pmp.c
> @@ -393,3 +393,55 @@ target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index)
>
>      return val;
>  }
> +
> +/*
> + * Calculate the TLB size if the start address or the end address of
> + * PMP entry is presented in thie TLB page.
> + */
> +static target_ulong pmp_get_tlb_size(CPURISCVState *env, int pmp_index,
> +    target_ulong tlb_sa, target_ulong tlb_ea)
> +{
> +    target_ulong pmp_sa = env->pmp_state.addr[pmp_index].sa;
> +    target_ulong pmp_ea = env->pmp_state.addr[pmp_index].ea;
> +
> +    if (pmp_sa >= tlb_sa && pmp_ea <= tlb_ea) {
> +        return pmp_ea - pmp_sa + 1;
> +    }
> +
> +    if (pmp_sa >= tlb_sa && pmp_sa <= tlb_ea && pmp_ea >= tlb_ea) {
> +        return tlb_ea - pmp_sa + 1;
> +    }
> +
> +    if (pmp_ea <= tlb_ea && pmp_ea >= tlb_sa && pmp_sa <= tlb_sa) {
> +        return pmp_ea - tlb_sa + 1;
> +    }
> +
> +    return 0;
> +}
> +
> +/*
> + * Check is there a PMP entry whcih range covers this page. If so,
> + * try to find the minimum granularity for the TLB size.
> + */
> +bool pmp_is_range_in_tlb(CPURISCVState *env, hwaddr tlb_sa,
> +    target_ulong *tlb_size)
> +{
> +    int i;
> +    target_ulong val;
> +    target_ulong tlb_ea = (tlb_sa + TARGET_PAGE_SIZE - 1);
> +
> +    for (i = 0; i < MAX_RISCV_PMPS; i++) {
> +        val = pmp_get_tlb_size(env, i, tlb_sa, tlb_ea);
> +        if (val) {
> +            if (*tlb_size == 0 || *tlb_size > val) {
> +                *tlb_size = val;
> +            }
> +        }
> +    }
> +
> +    if (*tlb_size != 0) {
> +        return true;
> +    }
> +
> +    return false;
> +}
> diff --git a/target/riscv/pmp.h b/target/riscv/pmp.h
> index 8e19793132..c70f2ea4c4 100644
> --- a/target/riscv/pmp.h
> +++ b/target/riscv/pmp.h
> @@ -60,5 +60,7 @@ void pmpaddr_csr_write(CPURISCVState *env, uint32_t addr_index,
>  target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index);
>  bool pmp_hart_has_privs(CPURISCVState *env, target_ulong addr,
>      target_ulong size, pmp_priv_t priv, target_ulong mode);
> +bool pmp_is_range_in_tlb(CPURISCVState *env, hwaddr tlb_sa,
> +    target_ulong *tlb_size);
>
>  #endif
> --
> 2.27.0
>

ping



* Re: [PATCH v6 4/4] target/riscv: Change the TLB page size depends on PMP entries.
  2020-07-28  8:26 ` [PATCH v6 4/4] target/riscv: Change the TLB page size depends on PMP entries Zong Li
@ 2020-08-12 15:11     ` Alistair Francis
  2020-08-12 15:11     ` Alistair Francis
  1 sibling, 0 replies; 16+ messages in thread
From: Alistair Francis @ 2020-08-12 15:11 UTC (permalink / raw)
  To: Zong Li
  Cc: open list:RISC-V, Sagar Karandikar, Bastian Koppelmann,
	qemu-devel@nongnu.org Developers, Alistair Francis,
	Palmer Dabbelt, Bin Meng

On Tue, Jul 28, 2020 at 1:29 AM Zong Li <zong.li@sifive.com> wrote:
>
> The minimum granularity of PMP is 4 bytes, it is small than 4KB page
> size, therefore, the pmp checking would be ignored if its range doesn't
> start from the alignment of one page. This patch detects the pmp entries
> and sets the small page size to TLB if there is a PMP entry which cover
> the page size.
>
> Signed-off-by: Zong Li <zong.li@sifive.com>
> ---
>  target/riscv/cpu_helper.c | 10 ++++++--
>  target/riscv/pmp.c        | 52 +++++++++++++++++++++++++++++++++++++++
>  target/riscv/pmp.h        |  2 ++
>  3 files changed, 62 insertions(+), 2 deletions(-)
>
> diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
> index 2f337e418c..fd1d373b6f 100644
> --- a/target/riscv/cpu_helper.c
> +++ b/target/riscv/cpu_helper.c
> @@ -693,6 +693,7 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
>      bool first_stage_error = true;
>      int ret = TRANSLATE_FAIL;
>      int mode = mmu_idx;
> +    target_ulong tlb_size = 0;
>
>      env->guest_phys_fault_addr = 0;
>
> @@ -784,8 +785,13 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
>      }
>
>      if (ret == TRANSLATE_SUCCESS) {
> -        tlb_set_page(cs, address & TARGET_PAGE_MASK, pa & TARGET_PAGE_MASK,
> -                     prot, mmu_idx, TARGET_PAGE_SIZE);
> +        if (pmp_is_range_in_tlb(env, pa & TARGET_PAGE_MASK, &tlb_size)) {
> +            tlb_set_page(cs, address & ~(tlb_size - 1), pa & ~(tlb_size - 1),
> +                         prot, mmu_idx, tlb_size);
> +        } else {
> +            tlb_set_page(cs, address & TARGET_PAGE_MASK, pa & TARGET_PAGE_MASK,
> +                         prot, mmu_idx, TARGET_PAGE_SIZE);
> +        }
>          return true;
>      } else if (probe) {
>          return false;
> diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
> index aeba796484..adadf6e9ba 100644
> --- a/target/riscv/pmp.c
> +++ b/target/riscv/pmp.c
> @@ -393,3 +393,55 @@ target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index)
>
>      return val;
>  }
> +
> +/*
> + * Calculate the TLB size if the start address or the end address of
> + * PMP entry is presented in thie TLB page.
> + */
> +static target_ulong pmp_get_tlb_size(CPURISCVState *env, int pmp_index,
> +    target_ulong tlb_sa, target_ulong tlb_ea)
> +{
> +    target_ulong pmp_sa = env->pmp_state.addr[pmp_index].sa;
> +    target_ulong pmp_ea = env->pmp_state.addr[pmp_index].ea;
> +
> +    if (pmp_sa >= tlb_sa && pmp_ea <= tlb_ea) {
> +        return pmp_ea - pmp_sa + 1;
> +    }
> +
> +    if (pmp_sa >= tlb_sa && pmp_sa <= tlb_ea && pmp_ea >= tlb_ea) {
> +        return tlb_ea - pmp_sa + 1;
> +    }
> +
> +    if (pmp_ea <= tlb_ea && pmp_ea >= tlb_sa && pmp_sa <= tlb_sa) {
> +        return pmp_ea - tlb_sa + 1;
> +    }
> +
> +    return 0;
> +}
> +
> +/*
> + * Check is there a PMP entry whcih range covers this page. If so,

s/whcih/which/g

I fixed this when applying it.

> + * try to find the minimum granularity for the TLB size.
> + */
> +bool pmp_is_range_in_tlb(CPURISCVState *env, hwaddr tlb_sa,
> +    target_ulong *tlb_size)
> +{
> +    int i;
> +    target_ulong val;
> +    target_ulong tlb_ea = (tlb_sa + TARGET_PAGE_SIZE - 1);
> +
> +    for (i = 0; i < MAX_RISCV_PMPS; i++) {
> +        val = pmp_get_tlb_size(env, i, tlb_sa, tlb_ea);
> +        if (val) {
> +            if (*tlb_size == 0 || *tlb_size > val) {
> +                *tlb_size = val;
> +            }
> +        }
> +    }
> +
> +    if (*tlb_size != 0) {
> +        return true;
> +    }
> +
> +    return false;
> +}
> diff --git a/target/riscv/pmp.h b/target/riscv/pmp.h
> index 8e19793132..c70f2ea4c4 100644
> --- a/target/riscv/pmp.h
> +++ b/target/riscv/pmp.h
> @@ -60,5 +60,7 @@ void pmpaddr_csr_write(CPURISCVState *env, uint32_t addr_index,
>  target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index);
>  bool pmp_hart_has_privs(CPURISCVState *env, target_ulong addr,
>      target_ulong size, pmp_priv_t priv, target_ulong mode);
> +bool pmp_is_range_in_tlb(CPURISCVState *env, hwaddr tlb_sa,
> +    target_ulong *tlb_size);

The indentation is wrong here (as it is in the rest of the file). I
just fixed this up as well as the others when I applied it.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

I have applied patches 3 and 4 of this series. Patch 1 has already been
applied, and patch 2 no longer applies due to a different fix; sorry
about that.

Alistair

>
>  #endif
> --
> 2.27.0
>
>



* Re: [PATCH v6 4/4] target/riscv: Change the TLB page size depends on PMP entries.
@ 2020-08-12 15:11     ` Alistair Francis
  0 siblings, 0 replies; 16+ messages in thread
From: Alistair Francis @ 2020-08-12 15:11 UTC (permalink / raw)
  To: Zong Li
  Cc: Palmer Dabbelt, Alistair Francis, Bin Meng, Sagar Karandikar,
	Bastian Koppelmann, open list:RISC-V,
	qemu-devel@nongnu.org Developers

On Tue, Jul 28, 2020 at 1:29 AM Zong Li <zong.li@sifive.com> wrote:
>
> The minimum granularity of PMP is 4 bytes, it is small than 4KB page
> size, therefore, the pmp checking would be ignored if its range doesn't
> start from the alignment of one page. This patch detects the pmp entries
> and sets the small page size to TLB if there is a PMP entry which cover
> the page size.
>
> Signed-off-by: Zong Li <zong.li@sifive.com>
> ---
>  target/riscv/cpu_helper.c | 10 ++++++--
>  target/riscv/pmp.c        | 52 +++++++++++++++++++++++++++++++++++++++
>  target/riscv/pmp.h        |  2 ++
>  3 files changed, 62 insertions(+), 2 deletions(-)
>
> diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
> index 2f337e418c..fd1d373b6f 100644
> --- a/target/riscv/cpu_helper.c
> +++ b/target/riscv/cpu_helper.c
> @@ -693,6 +693,7 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
>      bool first_stage_error = true;
>      int ret = TRANSLATE_FAIL;
>      int mode = mmu_idx;
> +    target_ulong tlb_size = 0;
>
>      env->guest_phys_fault_addr = 0;
>
> @@ -784,8 +785,13 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
>      }
>
>      if (ret == TRANSLATE_SUCCESS) {
> -        tlb_set_page(cs, address & TARGET_PAGE_MASK, pa & TARGET_PAGE_MASK,
> -                     prot, mmu_idx, TARGET_PAGE_SIZE);
> +        if (pmp_is_range_in_tlb(env, pa & TARGET_PAGE_MASK, &tlb_size)) {
> +            tlb_set_page(cs, address & ~(tlb_size - 1), pa & ~(tlb_size - 1),
> +                         prot, mmu_idx, tlb_size);
> +        } else {
> +            tlb_set_page(cs, address & TARGET_PAGE_MASK, pa & TARGET_PAGE_MASK,
> +                         prot, mmu_idx, TARGET_PAGE_SIZE);
> +        }
>          return true;
>      } else if (probe) {
>          return false;
> diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
> index aeba796484..adadf6e9ba 100644
> --- a/target/riscv/pmp.c
> +++ b/target/riscv/pmp.c
> @@ -393,3 +393,55 @@ target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index)
>
>      return val;
>  }
> +
> +/*
> + * Calculate the TLB size if the start address or the end address of
> + * PMP entry is presented in thie TLB page.
> + */
> +static target_ulong pmp_get_tlb_size(CPURISCVState *env, int pmp_index,
> +    target_ulong tlb_sa, target_ulong tlb_ea)
> +{
> +    target_ulong pmp_sa = env->pmp_state.addr[pmp_index].sa;
> +    target_ulong pmp_ea = env->pmp_state.addr[pmp_index].ea;
> +
> +    if (pmp_sa >= tlb_sa && pmp_ea <= tlb_ea) {
> +        return pmp_ea - pmp_sa + 1;
> +    }
> +
> +    if (pmp_sa >= tlb_sa && pmp_sa <= tlb_ea && pmp_ea >= tlb_ea) {
> +        return tlb_ea - pmp_sa + 1;
> +    }
> +
> +    if (pmp_ea <= tlb_ea && pmp_ea >= tlb_sa && pmp_sa <= tlb_sa) {
> +        return pmp_ea - tlb_sa + 1;
> +    }
> +
> +    return 0;
> +}
> +
> +/*
> + * Check is there a PMP entry whcih range covers this page. If so,

s/whcih/which/g

I fixed this when applying it.

> + * try to find the minimum granularity for the TLB size.
> + */
> +bool pmp_is_range_in_tlb(CPURISCVState *env, hwaddr tlb_sa,
> +    target_ulong *tlb_size)
> +{
> +    int i;
> +    target_ulong val;
> +    target_ulong tlb_ea = (tlb_sa + TARGET_PAGE_SIZE - 1);
> +
> +    for (i = 0; i < MAX_RISCV_PMPS; i++) {
> +        val = pmp_get_tlb_size(env, i, tlb_sa, tlb_ea);
> +        if (val) {
> +            if (*tlb_size == 0 || *tlb_size > val) {
> +                *tlb_size = val;
> +            }
> +        }
> +    }
> +
> +    if (*tlb_size != 0) {
> +        return true;
> +    }
> +
> +    return false;
> +}
> diff --git a/target/riscv/pmp.h b/target/riscv/pmp.h
> index 8e19793132..c70f2ea4c4 100644
> --- a/target/riscv/pmp.h
> +++ b/target/riscv/pmp.h
> @@ -60,5 +60,7 @@ void pmpaddr_csr_write(CPURISCVState *env, uint32_t addr_index,
>  target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index);
>  bool pmp_hart_has_privs(CPURISCVState *env, target_ulong addr,
>      target_ulong size, pmp_priv_t priv, target_ulong mode);
> +bool pmp_is_range_in_tlb(CPURISCVState *env, hwaddr tlb_sa,
> +    target_ulong *tlb_size);

The indentation is wrong here (as it is in the rest of the file). I
just fixed this up as well as the others when I applied it.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

I have applied patch 3 and 4 of this series. Patch 1 has already been
applied and patch 2 no longer applies due to a different fix, sorry
about that.

Alistair

>
>  #endif
> --
> 2.27.0
>
>


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v6 4/4] target/riscv: Change the TLB page size depends on PMP entries.
  2020-08-12 15:11     ` Alistair Francis
@ 2020-08-13  3:04       ` Zong Li
  -1 siblings, 0 replies; 16+ messages in thread
From: Zong Li @ 2020-08-13  3:04 UTC (permalink / raw)
  To: Alistair Francis
  Cc: open list:RISC-V, Sagar Karandikar, Bastian Koppelmann,
	qemu-devel@nongnu.org Developers, Alistair Francis,
	Palmer Dabbelt, Bin Meng

On Wed, Aug 12, 2020 at 11:21 PM Alistair Francis <alistair23@gmail.com> wrote:
>
> On Tue, Jul 28, 2020 at 1:29 AM Zong Li <zong.li@sifive.com> wrote:
> >
> > The minimum granularity of PMP is 4 bytes, it is small than 4KB page
> > size, therefore, the pmp checking would be ignored if its range doesn't
> > start from the alignment of one page. This patch detects the pmp entries
> > and sets the small page size to TLB if there is a PMP entry which cover
> > the page size.
> >
> > Signed-off-by: Zong Li <zong.li@sifive.com>
> > ---
> >  target/riscv/cpu_helper.c | 10 ++++++--
> >  target/riscv/pmp.c        | 52 +++++++++++++++++++++++++++++++++++++++
> >  target/riscv/pmp.h        |  2 ++
> >  3 files changed, 62 insertions(+), 2 deletions(-)
> >
> > diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
> > index 2f337e418c..fd1d373b6f 100644
> > --- a/target/riscv/cpu_helper.c
> > +++ b/target/riscv/cpu_helper.c
> > @@ -693,6 +693,7 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
> >      bool first_stage_error = true;
> >      int ret = TRANSLATE_FAIL;
> >      int mode = mmu_idx;
> > +    target_ulong tlb_size = 0;
> >
> >      env->guest_phys_fault_addr = 0;
> >
> > @@ -784,8 +785,13 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
> >      }
> >
> >      if (ret == TRANSLATE_SUCCESS) {
> > -        tlb_set_page(cs, address & TARGET_PAGE_MASK, pa & TARGET_PAGE_MASK,
> > -                     prot, mmu_idx, TARGET_PAGE_SIZE);
> > +        if (pmp_is_range_in_tlb(env, pa & TARGET_PAGE_MASK, &tlb_size)) {
> > +            tlb_set_page(cs, address & ~(tlb_size - 1), pa & ~(tlb_size - 1),
> > +                         prot, mmu_idx, tlb_size);
> > +        } else {
> > +            tlb_set_page(cs, address & TARGET_PAGE_MASK, pa & TARGET_PAGE_MASK,
> > +                         prot, mmu_idx, TARGET_PAGE_SIZE);
> > +        }
> >          return true;
> >      } else if (probe) {
> >          return false;
> > diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
> > index aeba796484..adadf6e9ba 100644
> > --- a/target/riscv/pmp.c
> > +++ b/target/riscv/pmp.c
> > @@ -393,3 +393,55 @@ target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index)
> >
> >      return val;
> >  }
> > +
> > +/*
> > + * Calculate the TLB size if the start address or the end address of
> > + * PMP entry is presented in thie TLB page.
> > + */
> > +static target_ulong pmp_get_tlb_size(CPURISCVState *env, int pmp_index,
> > +    target_ulong tlb_sa, target_ulong tlb_ea)
> > +{
> > +    target_ulong pmp_sa = env->pmp_state.addr[pmp_index].sa;
> > +    target_ulong pmp_ea = env->pmp_state.addr[pmp_index].ea;
> > +
> > +    if (pmp_sa >= tlb_sa && pmp_ea <= tlb_ea) {
> > +        return pmp_ea - pmp_sa + 1;
> > +    }
> > +
> > +    if (pmp_sa >= tlb_sa && pmp_sa <= tlb_ea && pmp_ea >= tlb_ea) {
> > +        return tlb_ea - pmp_sa + 1;
> > +    }
> > +
> > +    if (pmp_ea <= tlb_ea && pmp_ea >= tlb_sa && pmp_sa <= tlb_sa) {
> > +        return pmp_ea - tlb_sa + 1;
> > +    }
> > +
> > +    return 0;
> > +}
> > +
> > +/*
> > + * Check is there a PMP entry whcih range covers this page. If so,
>
> s/whcih/which/g
>
> I fixed this when apply it.
>
> > + * try to find the minimum granularity for the TLB size.
> > + */
> > +bool pmp_is_range_in_tlb(CPURISCVState *env, hwaddr tlb_sa,
> > +    target_ulong *tlb_size)
> > +{
> > +    int i;
> > +    target_ulong val;
> > +    target_ulong tlb_ea = (tlb_sa + TARGET_PAGE_SIZE - 1);
> > +
> > +    for (i = 0; i < MAX_RISCV_PMPS; i++) {
> > +        val = pmp_get_tlb_size(env, i, tlb_sa, tlb_ea);
> > +        if (val) {
> > +            if (*tlb_size == 0 || *tlb_size > val) {
> > +                *tlb_size = val;
> > +            }
> > +        }
> > +    }
> > +
> > +    if (*tlb_size != 0) {
> > +        return true;
> > +    }
> > +
> > +    return false;
> > +}
> > diff --git a/target/riscv/pmp.h b/target/riscv/pmp.h
> > index 8e19793132..c70f2ea4c4 100644
> > --- a/target/riscv/pmp.h
> > +++ b/target/riscv/pmp.h
> > @@ -60,5 +60,7 @@ void pmpaddr_csr_write(CPURISCVState *env, uint32_t addr_index,
> >  target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index);
> >  bool pmp_hart_has_privs(CPURISCVState *env, target_ulong addr,
> >      target_ulong size, pmp_priv_t priv, target_ulong mode);
> > +bool pmp_is_range_in_tlb(CPURISCVState *env, hwaddr tlb_sa,
> > +    target_ulong *tlb_size);
>
> The indentation is wrong here (as it is in the rest of the file). I
> just fixed this up as well as the others when I applied it.
>
> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
>
> I have applied patch 3 and 4 of this series. Patch 1 has already been
> applied and patch 2 no longer applies due to a different fix, sorry
> about that.
>

Thanks for reviewing and applying these. I don't quite follow regarding
patch 2, though; could you please forward the information or the fix to
me?

> Alistair
>
> >
> >  #endif
> > --
> > 2.27.0
> >
> >


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v6 4/4] target/riscv: Change the TLB page size depends on PMP entries.
  2020-08-13  3:04       ` Zong Li
@ 2020-08-13 15:02         ` Alistair Francis
  -1 siblings, 0 replies; 16+ messages in thread
From: Alistair Francis @ 2020-08-13 15:02 UTC (permalink / raw)
  To: Zong Li
  Cc: open list:RISC-V, Sagar Karandikar, Bastian Koppelmann,
	qemu-devel@nongnu.org Developers, Alistair Francis,
	Palmer Dabbelt, Bin Meng

On Wed, Aug 12, 2020 at 8:04 PM Zong Li <zong.li@sifive.com> wrote:
>
> On Wed, Aug 12, 2020 at 11:21 PM Alistair Francis <alistair23@gmail.com> wrote:
> >
> > On Tue, Jul 28, 2020 at 1:29 AM Zong Li <zong.li@sifive.com> wrote:
> > >
> > > The minimum granularity of PMP is 4 bytes, it is small than 4KB page
> > > size, therefore, the pmp checking would be ignored if its range doesn't
> > > start from the alignment of one page. This patch detects the pmp entries
> > > and sets the small page size to TLB if there is a PMP entry which cover
> > > the page size.
> > >
> > > Signed-off-by: Zong Li <zong.li@sifive.com>
> > > ---
> > >  target/riscv/cpu_helper.c | 10 ++++++--
> > >  target/riscv/pmp.c        | 52 +++++++++++++++++++++++++++++++++++++++
> > >  target/riscv/pmp.h        |  2 ++
> > >  3 files changed, 62 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
> > > index 2f337e418c..fd1d373b6f 100644
> > > --- a/target/riscv/cpu_helper.c
> > > +++ b/target/riscv/cpu_helper.c
> > > @@ -693,6 +693,7 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
> > >      bool first_stage_error = true;
> > >      int ret = TRANSLATE_FAIL;
> > >      int mode = mmu_idx;
> > > +    target_ulong tlb_size = 0;
> > >
> > >      env->guest_phys_fault_addr = 0;
> > >
> > > @@ -784,8 +785,13 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
> > >      }
> > >
> > >      if (ret == TRANSLATE_SUCCESS) {
> > > -        tlb_set_page(cs, address & TARGET_PAGE_MASK, pa & TARGET_PAGE_MASK,
> > > -                     prot, mmu_idx, TARGET_PAGE_SIZE);
> > > +        if (pmp_is_range_in_tlb(env, pa & TARGET_PAGE_MASK, &tlb_size)) {
> > > +            tlb_set_page(cs, address & ~(tlb_size - 1), pa & ~(tlb_size - 1),
> > > +                         prot, mmu_idx, tlb_size);
> > > +        } else {
> > > +            tlb_set_page(cs, address & TARGET_PAGE_MASK, pa & TARGET_PAGE_MASK,
> > > +                         prot, mmu_idx, TARGET_PAGE_SIZE);
> > > +        }
> > >          return true;
> > >      } else if (probe) {
> > >          return false;
> > > diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
> > > index aeba796484..adadf6e9ba 100644
> > > --- a/target/riscv/pmp.c
> > > +++ b/target/riscv/pmp.c
> > > @@ -393,3 +393,55 @@ target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index)
> > >
> > >      return val;
> > >  }
> > > +
> > > +/*
> > > + * Calculate the TLB size if the start address or the end address of
> > > + * PMP entry is presented in thie TLB page.
> > > + */
> > > +static target_ulong pmp_get_tlb_size(CPURISCVState *env, int pmp_index,
> > > +    target_ulong tlb_sa, target_ulong tlb_ea)
> > > +{
> > > +    target_ulong pmp_sa = env->pmp_state.addr[pmp_index].sa;
> > > +    target_ulong pmp_ea = env->pmp_state.addr[pmp_index].ea;
> > > +
> > > +    if (pmp_sa >= tlb_sa && pmp_ea <= tlb_ea) {
> > > +        return pmp_ea - pmp_sa + 1;
> > > +    }
> > > +
> > > +    if (pmp_sa >= tlb_sa && pmp_sa <= tlb_ea && pmp_ea >= tlb_ea) {
> > > +        return tlb_ea - pmp_sa + 1;
> > > +    }
> > > +
> > > +    if (pmp_ea <= tlb_ea && pmp_ea >= tlb_sa && pmp_sa <= tlb_sa) {
> > > +        return pmp_ea - tlb_sa + 1;
> > > +    }
> > > +
> > > +    return 0;
> > > +}
> > > +
> > > +/*
> > > + * Check is there a PMP entry whcih range covers this page. If so,
> >
> > s/whcih/which/g
> >
> > I fixed this when apply it.
> >
> > > + * try to find the minimum granularity for the TLB size.
> > > + */
> > > +bool pmp_is_range_in_tlb(CPURISCVState *env, hwaddr tlb_sa,
> > > +    target_ulong *tlb_size)
> > > +{
> > > +    int i;
> > > +    target_ulong val;
> > > +    target_ulong tlb_ea = (tlb_sa + TARGET_PAGE_SIZE - 1);
> > > +
> > > +    for (i = 0; i < MAX_RISCV_PMPS; i++) {
> > > +        val = pmp_get_tlb_size(env, i, tlb_sa, tlb_ea);
> > > +        if (val) {
> > > +            if (*tlb_size == 0 || *tlb_size > val) {
> > > +                *tlb_size = val;
> > > +            }
> > > +        }
> > > +    }
> > > +
> > > +    if (*tlb_size != 0) {
> > > +        return true;
> > > +    }
> > > +
> > > +    return false;
> > > +}
> > > diff --git a/target/riscv/pmp.h b/target/riscv/pmp.h
> > > index 8e19793132..c70f2ea4c4 100644
> > > --- a/target/riscv/pmp.h
> > > +++ b/target/riscv/pmp.h
> > > @@ -60,5 +60,7 @@ void pmpaddr_csr_write(CPURISCVState *env, uint32_t addr_index,
> > >  target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index);
> > >  bool pmp_hart_has_privs(CPURISCVState *env, target_ulong addr,
> > >      target_ulong size, pmp_priv_t priv, target_ulong mode);
> > > +bool pmp_is_range_in_tlb(CPURISCVState *env, hwaddr tlb_sa,
> > > +    target_ulong *tlb_size);
> >
> > The indentation is wrong here (as it is in the rest of the file). I
> > just fixed this up as well as the others when I applied it.
> >
> > Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> >
> > I have applied patch 3 and 4 of this series. Patch 1 has already been
> > applied and patch 2 no longer applies due to a different fix, sorry
> > about that.
> >
>
> Thanks for your reviewing and applying. I don't follow with you about
> patch 2, could you please also forward the information or the fix to
> me?

This patch fixes the same problem and I find it a little simpler:
https://patchew.org/QEMU/20200812223045.96803-1-alistair.francis@wdc.com/20200812223045.96803-10-alistair.francis@wdc.com/

Alistair

>
> > Alistair
> >
> > >
> > >  #endif
> > > --
> > > 2.27.0
> > >
> > >


^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2020-08-13 15:13 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-07-28  8:26 [PATCH v6 0/4] Fix some PMP implementations Zong Li
2020-07-28  8:26 ` [PATCH v6 1/4] target/riscv: Fix the range of pmpcfg of CSR funcion table Zong Li
2020-07-28  8:26   ` Zong Li
2020-07-28  8:26 ` [PATCH v6 2/4] target/riscv/pmp.c: Fix the index offset on RV64 Zong Li
2020-07-28  8:26   ` Zong Li
2020-07-28  8:26 ` [PATCH v6 3/4] target/riscv: Fix the translation of physical address Zong Li
2020-07-28 23:55   ` Alistair Francis
2020-07-28 23:55     ` Alistair Francis
2020-07-28  8:26 ` [PATCH v6 4/4] target/riscv: Change the TLB page size depends on PMP entries Zong Li
2020-08-05  2:43   ` Zong Li
2020-08-12 15:11   ` Alistair Francis
2020-08-12 15:11     ` Alistair Francis
2020-08-13  3:04     ` Zong Li
2020-08-13  3:04       ` Zong Li
2020-08-13 15:02       ` Alistair Francis
2020-08-13 15:02         ` Alistair Francis
