* [PATCH v5 0/4] Fix some PMP implementations
@ 2020-07-25 15:03 Zong Li
  2020-07-25 15:03   ` Zong Li
                   ` (3 more replies)
  0 siblings, 4 replies; 14+ messages in thread
From: Zong Li @ 2020-07-25 15:03 UTC (permalink / raw)
  To: palmer, Alistair.Francis, bmeng.cn, sagark, kbastian, qemu-riscv,
	qemu-devel
  Cc: Zong Li

This patch set contains fixes for the wrong index of the pmpcfg CSR on RV64
and for the PMP range in the CSR function table. Since the third version of
this series, it also fixes PMP issues such as incorrect physical address
translation and ignored PMP checking.

Changed in v5:
 - Pick up the suggestion that was lost in the last version.

Changed in v4:
 - Refine the implementation. Suggested by Bin Meng.
 - Add a fix for PMP checking being ignored.

Changed in v3:
 - Refine the implementation. Suggested by Bin Meng.
 - Add a fix for wrong physical address translation.

Changed in v2:
 - Move out the shifting operation from loop. Suggested by Bin Meng.

Zong Li (4):
  target/riscv: Fix the range of pmpcfg of CSR function table
  target/riscv/pmp.c: Fix the index offset on RV64
  target/riscv: Fix the translation of physical address
  target/riscv: Change the TLB page size depends on PMP entries.

 target/riscv/cpu_helper.c | 13 ++++++--
 target/riscv/csr.c        |  2 +-
 target/riscv/pmp.c        | 63 ++++++++++++++++++++++++++++++++++++++-
 target/riscv/pmp.h        |  2 ++
 4 files changed, 75 insertions(+), 5 deletions(-)

-- 
2.27.0



^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v5 1/4] target/riscv: Fix the range of pmpcfg of CSR function table
  2020-07-25 15:03 [PATCH v5 0/4] Fix some PMP implementations Zong Li
@ 2020-07-25 15:03   ` Zong Li
  2020-07-25 15:03 ` [PATCH v5 2/4] target/riscv/pmp.c: Fix the index offset on RV64 Zong Li
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 14+ messages in thread
From: Zong Li @ 2020-07-25 15:03 UTC (permalink / raw)
  To: palmer, Alistair.Francis, bmeng.cn, sagark, kbastian, qemu-riscv,
	qemu-devel
  Cc: Bin Meng, Alistair Francis, Zong Li

The range of Physical Memory Protection should be from CSR_PMPCFG0
to CSR_PMPCFG3, not to CSR_PMPADDR9.

Signed-off-by: Zong Li <zong.li@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bin.meng@windriver.com>
---
 target/riscv/csr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index ac01c835e1..6a96a01b1c 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -1353,7 +1353,7 @@ static riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
     [CSR_MTINST] =              { hmode,   read_mtinst,      write_mtinst     },
 
     /* Physical Memory Protection */
-    [CSR_PMPCFG0  ... CSR_PMPADDR9] =  { pmp,   read_pmpcfg,  write_pmpcfg   },
+    [CSR_PMPCFG0  ... CSR_PMPCFG3]   = { pmp,   read_pmpcfg,  write_pmpcfg   },
     [CSR_PMPADDR0 ... CSR_PMPADDR15] = { pmp,   read_pmpaddr, write_pmpaddr  },
 
     /* Performance Counters */
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 14+ messages in thread


* [PATCH v5 2/4] target/riscv/pmp.c: Fix the index offset on RV64
  2020-07-25 15:03 [PATCH v5 0/4] Fix some PMP implementations Zong Li
  2020-07-25 15:03   ` Zong Li
@ 2020-07-25 15:03 ` Zong Li
  2020-07-27  7:06     ` Bin Meng
  2020-07-27 22:26     ` Alistair Francis
  2020-07-25 15:03 ` [PATCH v5 3/4] target/riscv: Fix the translation of physical address Zong Li
  2020-07-25 15:03 ` [PATCH v5 4/4] target/riscv: Change the TLB page size depends on PMP entries Zong Li
  3 siblings, 2 replies; 14+ messages in thread
From: Zong Li @ 2020-07-25 15:03 UTC (permalink / raw)
  To: palmer, Alistair.Francis, bmeng.cn, sagark, kbastian, qemu-riscv,
	qemu-devel
  Cc: Zong Li

On RV64, reg_index is 2 (the pmpcfg2 CSR) after the seventh PMP
entry; it is not 1 (the pmpcfg1 CSR) as on RV32. In the original
implementation, the second parameter of pmp_write_cfg is
"reg_index * sizeof(target_ulong)", so for reg_index 2 the result
starts from 16, but we expect it to start from 8. Separate the
implementations for RV32 and RV64 respectively.

Signed-off-by: Zong Li <zong.li@sifive.com>
---
 target/riscv/pmp.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index 2a2b9f5363..aeba796484 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -318,6 +318,10 @@ void pmpcfg_csr_write(CPURISCVState *env, uint32_t reg_index,
         return;
     }
 
+#if defined(TARGET_RISCV64)
+    reg_index >>= 1;
+#endif
+
     for (i = 0; i < sizeof(target_ulong); i++) {
         cfg_val = (val >> 8 * i)  & 0xff;
         pmp_write_cfg(env, (reg_index * sizeof(target_ulong)) + i,
@@ -335,11 +339,16 @@ target_ulong pmpcfg_csr_read(CPURISCVState *env, uint32_t reg_index)
     target_ulong cfg_val = 0;
     target_ulong val = 0;
 
+    trace_pmpcfg_csr_read(env->mhartid, reg_index, cfg_val);
+
+#if defined(TARGET_RISCV64)
+    reg_index >>= 1;
+#endif
+
     for (i = 0; i < sizeof(target_ulong); i++) {
         val = pmp_read_cfg(env, (reg_index * sizeof(target_ulong)) + i);
         cfg_val |= (val << (i * 8));
     }
-    trace_pmpcfg_csr_read(env->mhartid, reg_index, cfg_val);
 
     return cfg_val;
 }
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v5 3/4] target/riscv: Fix the translation of physical address
  2020-07-25 15:03 [PATCH v5 0/4] Fix some PMP implementations Zong Li
  2020-07-25 15:03   ` Zong Li
  2020-07-25 15:03 ` [PATCH v5 2/4] target/riscv/pmp.c: Fix the index offset on RV64 Zong Li
@ 2020-07-25 15:03 ` Zong Li
  2020-07-27 22:39     ` Alistair Francis
  2020-07-25 15:03 ` [PATCH v5 4/4] target/riscv: Change the TLB page size depends on PMP entries Zong Li
  3 siblings, 1 reply; 14+ messages in thread
From: Zong Li @ 2020-07-25 15:03 UTC (permalink / raw)
  To: palmer, Alistair.Francis, bmeng.cn, sagark, kbastian, qemu-riscv,
	qemu-devel
  Cc: Zong Li

The real physical address should include the 12-bit page offset. The
missing offset also causes wrong PMP checking: the minimum granularity
of PMP is 4 bytes, but we always get a physical address that is 4 KiB
aligned, which means we always use the start address of the page to
check PMP for all addresses within the same page.

Signed-off-by: Zong Li <zong.li@sifive.com>
---
 target/riscv/cpu_helper.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index 75d2ae3434..08b069f0c9 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -543,7 +543,8 @@ restart:
             /* for superpage mappings, make a fake leaf PTE for the TLB's
                benefit. */
             target_ulong vpn = addr >> PGSHIFT;
-            *physical = (ppn | (vpn & ((1L << ptshift) - 1))) << PGSHIFT;
+            *physical = ((ppn | (vpn & ((1L << ptshift) - 1))) << PGSHIFT) |
+                        (addr & ~TARGET_PAGE_MASK);
 
             /* set permissions on the TLB entry */
             if ((pte & PTE_R) || ((pte & PTE_X) && mxr)) {
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v5 4/4] target/riscv: Change the TLB page size depends on PMP entries.
  2020-07-25 15:03 [PATCH v5 0/4] Fix some PMP implementations Zong Li
                   ` (2 preceding siblings ...)
  2020-07-25 15:03 ` [PATCH v5 3/4] target/riscv: Fix the translation of physical address Zong Li
@ 2020-07-25 15:03 ` Zong Li
  3 siblings, 0 replies; 14+ messages in thread
From: Zong Li @ 2020-07-25 15:03 UTC (permalink / raw)
  To: palmer, Alistair.Francis, bmeng.cn, sagark, kbastian, qemu-riscv,
	qemu-devel
  Cc: Zong Li

The minimum granularity of PMP is 4 bytes, which is smaller than the
4 KiB page size; therefore, PMP checking would be ignored if a PMP
range doesn't start at a page-aligned address. This patch examines the
PMP entries and sets a smaller TLB page size if a PMP entry covers
part of the page.

Signed-off-by: Zong Li <zong.li@sifive.com>
---
 target/riscv/cpu_helper.c | 10 ++++++--
 target/riscv/pmp.c        | 52 +++++++++++++++++++++++++++++++++++++++
 target/riscv/pmp.h        |  2 ++
 3 files changed, 62 insertions(+), 2 deletions(-)

diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index 08b069f0c9..b3013bc91e 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -693,6 +693,7 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     bool first_stage_error = true;
     int ret = TRANSLATE_FAIL;
     int mode = mmu_idx;
+    target_ulong tlb_size = 0;
 
     env->guest_phys_fault_addr = 0;
 
@@ -784,8 +785,13 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     }
 
     if (ret == TRANSLATE_SUCCESS) {
-        tlb_set_page(cs, address & TARGET_PAGE_MASK, pa & TARGET_PAGE_MASK,
-                     prot, mmu_idx, TARGET_PAGE_SIZE);
+        if (pmp_is_range_in_tlb(env, pa & TARGET_PAGE_MASK, &tlb_size)) {
+            tlb_set_page(cs, address & ~(tlb_size - 1), pa & ~(tlb_size - 1),
+                         prot, mmu_idx, tlb_size);
+        } else {
+            tlb_set_page(cs, address & TARGET_PAGE_MASK, pa & TARGET_PAGE_MASK,
+                         prot, mmu_idx, TARGET_PAGE_SIZE);
+        }
         return true;
     } else if (probe) {
         return false;
diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index aeba796484..adadf6e9ba 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -393,3 +393,55 @@ target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index)
 
     return val;
 }
+
+/*
+ * Calculate the TLB size if the start address or the end address of
+ * the PMP entry is present in this TLB page.
+ */
+static target_ulong pmp_get_tlb_size(CPURISCVState *env, int pmp_index,
+    target_ulong tlb_sa, target_ulong tlb_ea)
+{
+    target_ulong pmp_sa = env->pmp_state.addr[pmp_index].sa;
+    target_ulong pmp_ea = env->pmp_state.addr[pmp_index].ea;
+
+    if (pmp_sa >= tlb_sa && pmp_ea <= tlb_ea) {
+        return pmp_ea - pmp_sa + 1;
+    }
+
+    if (pmp_sa >= tlb_sa && pmp_sa <= tlb_ea && pmp_ea >= tlb_ea) {
+        return tlb_ea - pmp_sa + 1;
+    }
+
+    if (pmp_ea <= tlb_ea && pmp_ea >= tlb_sa && pmp_sa <= tlb_sa) {
+        return pmp_ea - tlb_sa + 1;
+    }
+
+    return 0;
+}
+
+/*
+ * Check whether there is a PMP entry whose range covers this page. If so,
+ * try to find the minimum granularity for the TLB size.
+ */
+bool pmp_is_range_in_tlb(CPURISCVState *env, hwaddr tlb_sa,
+    target_ulong *tlb_size)
+{
+    int i;
+    target_ulong val;
+    target_ulong tlb_ea = (tlb_sa + TARGET_PAGE_SIZE - 1);
+
+    for (i = 0; i < MAX_RISCV_PMPS; i++) {
+        val = pmp_get_tlb_size(env, i, tlb_sa, tlb_ea);
+        if (val) {
+            if (*tlb_size == 0 || *tlb_size > val) {
+                *tlb_size = val;
+            }
+        }
+    }
+
+    if (*tlb_size != 0) {
+        return true;
+    }
+
+    return false;
+}
diff --git a/target/riscv/pmp.h b/target/riscv/pmp.h
index 8e19793132..c70f2ea4c4 100644
--- a/target/riscv/pmp.h
+++ b/target/riscv/pmp.h
@@ -60,5 +60,7 @@ void pmpaddr_csr_write(CPURISCVState *env, uint32_t addr_index,
 target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index);
 bool pmp_hart_has_privs(CPURISCVState *env, target_ulong addr,
     target_ulong size, pmp_priv_t priv, target_ulong mode);
+bool pmp_is_range_in_tlb(CPURISCVState *env, hwaddr tlb_sa,
+    target_ulong *tlb_size);
 
 #endif
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH v5 2/4] target/riscv/pmp.c: Fix the index offset on RV64
  2020-07-25 15:03 ` [PATCH v5 2/4] target/riscv/pmp.c: Fix the index offset on RV64 Zong Li
@ 2020-07-27  7:06     ` Bin Meng
  2020-07-27 22:26     ` Alistair Francis
  1 sibling, 0 replies; 14+ messages in thread
From: Bin Meng @ 2020-07-27  7:06 UTC (permalink / raw)
  To: Zong Li
  Cc: open list:RISC-V, Sagar Karandikar, Bastian Koppelmann,
	qemu-devel@nongnu.org Developers, Alistair Francis,
	Palmer Dabbelt

On Sat, Jul 25, 2020 at 11:03 PM Zong Li <zong.li@sifive.com> wrote:
>
> On RV64, reg_index is 2 (the pmpcfg2 CSR) after the seventh PMP
> entry; it is not 1 (the pmpcfg1 CSR) as on RV32. In the original
> implementation, the second parameter of pmp_write_cfg is
> "reg_index * sizeof(target_ulong)", so for reg_index 2 the result
> starts from 16, but we expect it to start from 8. Separate the
> implementations for RV32 and RV64 respectively.
>
> Signed-off-by: Zong Li <zong.li@sifive.com>
> ---
>  target/riscv/pmp.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
>

Reviewed-by: Bin Meng <bin.meng@windriver.com>


^ permalink raw reply	[flat|nested] 14+ messages in thread


* Re: [PATCH v5 2/4] target/riscv/pmp.c: Fix the index offset on RV64
  2020-07-25 15:03 ` [PATCH v5 2/4] target/riscv/pmp.c: Fix the index offset on RV64 Zong Li
@ 2020-07-27 22:26     ` Alistair Francis
  2020-07-27 22:26     ` Alistair Francis
  1 sibling, 0 replies; 14+ messages in thread
From: Alistair Francis @ 2020-07-27 22:26 UTC (permalink / raw)
  To: Zong Li
  Cc: open list:RISC-V, Sagar Karandikar, Bastian Koppelmann,
	qemu-devel@nongnu.org Developers, Alistair Francis,
	Palmer Dabbelt, Bin Meng

On Sat, Jul 25, 2020 at 8:04 AM Zong Li <zong.li@sifive.com> wrote:
>
> On RV64, reg_index is 2 (the pmpcfg2 CSR) after the seventh PMP
> entry; it is not 1 (the pmpcfg1 CSR) as on RV32. In the original
> implementation, the second parameter of pmp_write_cfg is
> "reg_index * sizeof(target_ulong)", so for reg_index 2 the result
> starts from 16, but we expect it to start from 8. Separate the
> implementations for RV32 and RV64 respectively.
>
> Signed-off-by: Zong Li <zong.li@sifive.com>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  target/riscv/pmp.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
> index 2a2b9f5363..aeba796484 100644
> --- a/target/riscv/pmp.c
> +++ b/target/riscv/pmp.c
> @@ -318,6 +318,10 @@ void pmpcfg_csr_write(CPURISCVState *env, uint32_t reg_index,
>          return;
>      }
>
> +#if defined(TARGET_RISCV64)
> +    reg_index >>= 1;
> +#endif
> +
>      for (i = 0; i < sizeof(target_ulong); i++) {
>          cfg_val = (val >> 8 * i)  & 0xff;
>          pmp_write_cfg(env, (reg_index * sizeof(target_ulong)) + i,
> @@ -335,11 +339,16 @@ target_ulong pmpcfg_csr_read(CPURISCVState *env, uint32_t reg_index)
>      target_ulong cfg_val = 0;
>      target_ulong val = 0;
>
> +    trace_pmpcfg_csr_read(env->mhartid, reg_index, cfg_val);
> +
> +#if defined(TARGET_RISCV64)
> +    reg_index >>= 1;
> +#endif
> +
>      for (i = 0; i < sizeof(target_ulong); i++) {
>          val = pmp_read_cfg(env, (reg_index * sizeof(target_ulong)) + i);
>          cfg_val |= (val << (i * 8));
>      }
> -    trace_pmpcfg_csr_read(env->mhartid, reg_index, cfg_val);
>
>      return cfg_val;
>  }
> --
> 2.27.0
>
>


^ permalink raw reply	[flat|nested] 14+ messages in thread


* Re: [PATCH v5 3/4] target/riscv: Fix the translation of physical address
  2020-07-25 15:03 ` [PATCH v5 3/4] target/riscv: Fix the translation of physical address Zong Li
@ 2020-07-27 22:39     ` Alistair Francis
  0 siblings, 0 replies; 14+ messages in thread
From: Alistair Francis @ 2020-07-27 22:39 UTC (permalink / raw)
  To: Zong Li
  Cc: open list:RISC-V, Sagar Karandikar, Bastian Koppelmann,
	qemu-devel@nongnu.org Developers, Alistair Francis,
	Palmer Dabbelt, Bin Meng

On Sat, Jul 25, 2020 at 8:05 AM Zong Li <zong.li@sifive.com> wrote:
>
> The real physical address should include the 12-bit page offset. The
> missing offset also causes wrong PMP checking: the minimum granularity
> of PMP is 4 bytes, but we always get a physical address that is 4 KiB
> aligned, which means we always use the start address of the page to
> check PMP for all addresses within the same page.

So riscv_cpu_tlb_fill() will clear these bits when calling
tlb_set_page(), so this won't have an impact on actual translation
(although it will change the input address for 2-stage translation,
but that seems fine).

Your point about PMP seems correct: since we allow smaller-than-page
granularity, this seems like the right approach.

Can you edit riscv_cpu_get_phys_page_debug() to mask these bits out at
the end? Otherwise we will break what callers to
cpu_get_phys_page_attrs_debug() expect.

Alistair

>
> Signed-off-by: Zong Li <zong.li@sifive.com>
> ---
>  target/riscv/cpu_helper.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
> index 75d2ae3434..08b069f0c9 100644
> --- a/target/riscv/cpu_helper.c
> +++ b/target/riscv/cpu_helper.c
> @@ -543,7 +543,8 @@ restart:
>              /* for superpage mappings, make a fake leaf PTE for the TLB's
>                 benefit. */
>              target_ulong vpn = addr >> PGSHIFT;
> -            *physical = (ppn | (vpn & ((1L << ptshift) - 1))) << PGSHIFT;
> +            *physical = ((ppn | (vpn & ((1L << ptshift) - 1))) << PGSHIFT) |
> +                        (addr & ~TARGET_PAGE_MASK);
>
>              /* set permissions on the TLB entry */
>              if ((pte & PTE_R) || ((pte & PTE_X) && mxr)) {
> --
> 2.27.0
>
>


^ permalink raw reply	[flat|nested] 14+ messages in thread


* Re: [PATCH v5 3/4] target/riscv: Fix the translation of physical address
  2020-07-27 22:39     ` Alistair Francis
@ 2020-07-28  2:32       ` Zong Li
  -1 siblings, 0 replies; 14+ messages in thread
From: Zong Li @ 2020-07-28  2:32 UTC (permalink / raw)
  To: Alistair Francis
  Cc: open list:RISC-V, Sagar Karandikar, Bastian Koppelmann,
	qemu-devel@nongnu.org Developers, Alistair Francis,
	Palmer Dabbelt, Bin Meng

On Tue, Jul 28, 2020 at 6:49 AM Alistair Francis <alistair23@gmail.com> wrote:
>
> On Sat, Jul 25, 2020 at 8:05 AM Zong Li <zong.li@sifive.com> wrote:
> >
> > The real physical address should include the 12-bit page offset. The
> > missing offset also causes wrong PMP checking: the minimum granularity
> > of PMP is 4 bytes, but we always get a physical address that is 4 KiB
> > aligned, which means we always use the start address of the page to
> > check PMP for all addresses within the same page.
>
> So riscv_cpu_tlb_fill() will clear these bits when calling
> tlb_set_page(), so this won't have an impact on actual translation
> (although it will change the input address for 2-stage translation,
> but that seems fine).
>
> Your point about PMP seems correct: since we allow smaller-than-page
> granularity, this seems like the right approach.
>
> Can you edit riscv_cpu_get_phys_page_debug() to mask these bits out at
> the end? Otherwise we will break what callers to
> cpu_get_phys_page_attrs_debug() expect.
>
>

OK, I already checked that: the callers add these bits back, because
they expect to get the address of the page. Thanks for your review; I
will modify it in the next version.

> Alistair
>
> >
> > Signed-off-by: Zong Li <zong.li@sifive.com>
> > ---
> >  target/riscv/cpu_helper.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
> > index 75d2ae3434..08b069f0c9 100644
> > --- a/target/riscv/cpu_helper.c
> > +++ b/target/riscv/cpu_helper.c
> > @@ -543,7 +543,8 @@ restart:
> >              /* for superpage mappings, make a fake leaf PTE for the TLB's
> >                 benefit. */
> >              target_ulong vpn = addr >> PGSHIFT;
> > -            *physical = (ppn | (vpn & ((1L << ptshift) - 1))) << PGSHIFT;
> > +            *physical = ((ppn | (vpn & ((1L << ptshift) - 1))) << PGSHIFT) |
> > +                        (addr & ~TARGET_PAGE_MASK);
> >
> >              /* set permissions on the TLB entry */
> >              if ((pte & PTE_R) || ((pte & PTE_X) && mxr)) {
> > --
> > 2.27.0
> >
> >


^ permalink raw reply	[flat|nested] 14+ messages in thread


end of thread, other threads:[~2020-07-28  2:33 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-07-25 15:03 [PATCH v5 0/4] Fix some PMP implementations Zong Li
2020-07-25 15:03 ` [PATCH v5 1/4] target/riscv: Fix the range of pmpcfg of CSR function table Zong Li
2020-07-25 15:03   ` Zong Li
2020-07-25 15:03 ` [PATCH v5 2/4] target/riscv/pmp.c: Fix the index offset on RV64 Zong Li
2020-07-27  7:06   ` Bin Meng
2020-07-27  7:06     ` Bin Meng
2020-07-27 22:26   ` Alistair Francis
2020-07-27 22:26     ` Alistair Francis
2020-07-25 15:03 ` [PATCH v5 3/4] target/riscv: Fix the translation of physical address Zong Li
2020-07-27 22:39   ` Alistair Francis
2020-07-27 22:39     ` Alistair Francis
2020-07-28  2:32     ` Zong Li
2020-07-28  2:32       ` Zong Li
2020-07-25 15:03 ` [PATCH v5 4/4] target/riscv: Change the TLB page size depends on PMP entries Zong Li
