* [PATCH v4 0/4] Fix some PMP implementations
@ 2020-07-24  9:08 Zong Li
  2020-07-24  9:08   ` Zong Li
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Zong Li @ 2020-07-24  9:08 UTC (permalink / raw)
  To: palmer, Alistair.Francis, bmeng.cn, sagark, kbastian, qemu-riscv,
	qemu-devel
  Cc: Zong Li

This patch set fixes the wrong index into the pmpcfg CSRs on RV64 and
the PMP range in the CSR function table. Since the 3rd version of this
series, it also fixes two further PMP issues: wrong physical address
translation and ignored PMP checking.

Changed in v4:
 - Refine the implementation. Suggested by Bin Meng.
 - Add a fix for ignored PMP checking.

Changed in v3:
 - Refine the implementation. Suggested by Bin Meng.
 - Add a fix for wrong physical address translation.

Changed in v2:
 - Move the shifting operation out of the loop. Suggested by Bin Meng.

Zong Li (4):
  target/riscv: Fix the range of pmpcfg of CSR function table
  target/riscv/pmp.c: Fix the index offset on RV64
  target/riscv: Fix the translation of physical address
  target/riscv: Change the TLB page size depending on PMP entries.

 target/riscv/cpu_helper.c | 13 +++++++--
 target/riscv/csr.c        |  2 +-
 target/riscv/pmp.c        | 60 +++++++++++++++++++++++++++++++++++++++
 target/riscv/pmp.h        |  2 ++
 4 files changed, 73 insertions(+), 4 deletions(-)

-- 
2.27.0



^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH v4 1/4] target/riscv: Fix the range of pmpcfg of CSR function table
  2020-07-24  9:08 [PATCH v4 0/4] Fix some PMP implementations Zong Li
@ 2020-07-24  9:08   ` Zong Li
  2020-07-24  9:08 ` [PATCH v4 2/4] target/riscv/pmp.c: Fix the index offset on RV64 Zong Li
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 10+ messages in thread
From: Zong Li @ 2020-07-24  9:08 UTC (permalink / raw)
  To: palmer, Alistair.Francis, bmeng.cn, sagark, kbastian, qemu-riscv,
	qemu-devel
  Cc: Bin Meng, Alistair Francis, Zong Li

The range of the Physical Memory Protection configuration CSRs should
be CSR_PMPCFG0 to CSR_PMPCFG3, not CSR_PMPCFG0 to CSR_PMPADDR9.

Signed-off-by: Zong Li <zong.li@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bin.meng@windriver.com>
---
 target/riscv/csr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index ac01c835e1..6a96a01b1c 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -1353,7 +1353,7 @@ static riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
     [CSR_MTINST] =              { hmode,   read_mtinst,      write_mtinst     },
 
     /* Physical Memory Protection */
-    [CSR_PMPCFG0  ... CSR_PMPADDR9] =  { pmp,   read_pmpcfg,  write_pmpcfg   },
+    [CSR_PMPCFG0  ... CSR_PMPCFG3]   = { pmp,   read_pmpcfg,  write_pmpcfg   },
     [CSR_PMPADDR0 ... CSR_PMPADDR15] = { pmp,   read_pmpaddr, write_pmpaddr  },
 
     /* Performance Counters */
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH v4 2/4] target/riscv/pmp.c: Fix the index offset on RV64
  2020-07-24  9:08 [PATCH v4 0/4] Fix some PMP implementations Zong Li
  2020-07-24  9:08   ` Zong Li
@ 2020-07-24  9:08 ` Zong Li
  2020-07-24  9:22     ` Bin Meng
  2020-07-24  9:08 ` [PATCH v4 3/4] target/riscv: Fix the translation of physical address Zong Li
  2020-07-24  9:08 ` [PATCH v4 4/4] target/riscv: Change the TLB page size depending on PMP entries Zong Li
  3 siblings, 1 reply; 10+ messages in thread
From: Zong Li @ 2020-07-24  9:08 UTC (permalink / raw)
  To: palmer, Alistair.Francis, bmeng.cn, sagark, kbastian, qemu-riscv,
	qemu-devel
  Cc: Zong Li

On RV64, reg_index is 2 (the pmpcfg2 CSR) after the first eight PMP
entries, not 1 (the pmpcfg1 CSR) as it is on RV32. In the original
implementation, the configuration index passed to pmp_write_cfg is
based on "reg_index * sizeof(target_ulong)", which starts from 16 when
reg_index is 2, but it should start from 8. Handle the RV32 and RV64
cases separately.
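
For illustration, this is the intended mapping from a pmpcfg CSR index
to the first entry of the configuration array (a sketch only, assuming
QEMU's TARGET_RISCV64 build define; the helper name is made up and is
not part of this patch):

static inline int pmpcfg_base_index(uint32_t reg_index)
{
#if defined(TARGET_RISCV64)
    /* RV64: only even pmpcfg CSRs exist (pmpcfg0, pmpcfg2), each
     * covering 8 entries, so pmpcfg2 maps to entry 8, not 16. */
    return (reg_index >> 1) * 8;
#else
    /* RV32: each pmpcfg CSR covers 4 entries (e.g. pmpcfg1 -> entry 4). */
    return reg_index * 4;
#endif
}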

Signed-off-by: Zong Li <zong.li@sifive.com>
---
 target/riscv/pmp.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index 2a2b9f5363..e0161d6aab 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -310,6 +310,10 @@ void pmpcfg_csr_write(CPURISCVState *env, uint32_t reg_index,
     int i;
     uint8_t cfg_val;
 
+#if defined(TARGET_RISCV64)
+    reg_index >>= 1;
+#endif
+
     trace_pmpcfg_csr_write(env->mhartid, reg_index, val);
 
     if ((reg_index & 1) && (sizeof(target_ulong) == 8)) {
@@ -335,6 +339,10 @@ target_ulong pmpcfg_csr_read(CPURISCVState *env, uint32_t reg_index)
     target_ulong cfg_val = 0;
     target_ulong val = 0;
 
+#if defined(TARGET_RISCV64)
+    reg_index >>= 1;
+#endif
+
     for (i = 0; i < sizeof(target_ulong); i++) {
         val = pmp_read_cfg(env, (reg_index * sizeof(target_ulong)) + i);
         cfg_val |= (val << (i * 8));
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH v4 3/4] target/riscv: Fix the translation of physical address
  2020-07-24  9:08 [PATCH v4 0/4] Fix some PMP implementations Zong Li
  2020-07-24  9:08   ` Zong Li
  2020-07-24  9:08 ` [PATCH v4 2/4] target/riscv/pmp.c: Fix the index offset on RV64 Zong Li
@ 2020-07-24  9:08 ` Zong Li
  2020-07-24  9:08 ` [PATCH v4 4/4] target/riscv: Change the TLB page size depending on PMP entries Zong Li
  3 siblings, 0 replies; 10+ messages in thread
From: Zong Li @ 2020-07-24  9:08 UTC (permalink / raw)
  To: palmer, Alistair.Francis, bmeng.cn, sagark, kbastian, qemu-riscv,
	qemu-devel
  Cc: Zong Li

The real physical address should include the 12-bit page offset. The
missing offset also breaks PMP checking: the minimum PMP granularity
is 4 bytes, but the physical address we check is always 4 KiB aligned,
so PMP is checked against the start address of the page for every
address within that page.
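
For example (addresses chosen only for illustration): a virtual address
0x80201234 translated through a 4 KiB leaf PTE with ppn 0x80400 used to
yield the physical address 0x80400000, so PMP was checked against the
page start for every access in that page; with the page offset added
back, the checked address becomes the expected 0x80400234.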

Signed-off-by: Zong Li <zong.li@sifive.com>
---
 target/riscv/cpu_helper.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index 75d2ae3434..08b069f0c9 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -543,7 +543,8 @@ restart:
             /* for superpage mappings, make a fake leaf PTE for the TLB's
                benefit. */
             target_ulong vpn = addr >> PGSHIFT;
-            *physical = (ppn | (vpn & ((1L << ptshift) - 1))) << PGSHIFT;
+            *physical = ((ppn | (vpn & ((1L << ptshift) - 1))) << PGSHIFT) |
+                        (addr & ~TARGET_PAGE_MASK);
 
             /* set permissions on the TLB entry */
             if ((pte & PTE_R) || ((pte & PTE_X) && mxr)) {
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH v4 4/4] target/riscv: Change the TLB page size depending on PMP entries.
  2020-07-24  9:08 [PATCH v4 0/4] Fix some PMP implementations Zong Li
                   ` (2 preceding siblings ...)
  2020-07-24  9:08 ` [PATCH v4 3/4] target/riscv: Fix the translation of physical address Zong Li
@ 2020-07-24  9:08 ` Zong Li
  3 siblings, 0 replies; 10+ messages in thread
From: Zong Li @ 2020-07-24  9:08 UTC (permalink / raw)
  To: palmer, Alistair.Francis, bmeng.cn, sagark, kbastian, qemu-riscv,
	qemu-devel
  Cc: Zong Li

The minimum granularity of PMP is 4 bytes, which is smaller than the
4 KiB page size, so PMP checking is effectively ignored when a PMP
region does not start at a page boundary. This patch scans the PMP
entries and installs a smaller TLB page size when a PMP entry overlaps
the page.
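
As an example (numbers chosen only for illustration): if a PMP entry
covers 0x80001000 to 0x800010ff and the page being filled spans
0x80001000 to 0x80001fff, pmp_is_range_in_tlb() reports a TLB size of
0x100, so tlb_set_page() installs a 256-byte entry instead of a full
4 KiB one; the rest of the page is then not served from a cached
full-page TLB entry and PMP gets checked again on later translations.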

Signed-off-by: Zong Li <zong.li@sifive.com>
---
 target/riscv/cpu_helper.c | 10 ++++++--
 target/riscv/pmp.c        | 52 +++++++++++++++++++++++++++++++++++++++
 target/riscv/pmp.h        |  2 ++
 3 files changed, 62 insertions(+), 2 deletions(-)

diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index 08b069f0c9..b3013bc91e 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -693,6 +693,7 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     bool first_stage_error = true;
     int ret = TRANSLATE_FAIL;
     int mode = mmu_idx;
+    target_ulong tlb_size = 0;
 
     env->guest_phys_fault_addr = 0;
 
@@ -784,8 +785,13 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     }
 
     if (ret == TRANSLATE_SUCCESS) {
-        tlb_set_page(cs, address & TARGET_PAGE_MASK, pa & TARGET_PAGE_MASK,
-                     prot, mmu_idx, TARGET_PAGE_SIZE);
+        if (pmp_is_range_in_tlb(env, pa & TARGET_PAGE_MASK, &tlb_size)) {
+            tlb_set_page(cs, address & ~(tlb_size - 1), pa & ~(tlb_size - 1),
+                         prot, mmu_idx, tlb_size);
+        } else {
+            tlb_set_page(cs, address & TARGET_PAGE_MASK, pa & TARGET_PAGE_MASK,
+                         prot, mmu_idx, TARGET_PAGE_SIZE);
+        }
         return true;
     } else if (probe) {
         return false;
diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index e0161d6aab..a040cdd285 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -392,3 +392,55 @@ target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index)
 
     return val;
 }
+
+/*
+ * Calculate the TLB size if the start address or the end address of
+ * a PMP entry is present in this TLB page.
+ */
+static target_ulong pmp_get_tlb_size(CPURISCVState *env, int pmp_index,
+    target_ulong tlb_sa, target_ulong tlb_ea)
+{
+    target_ulong pmp_sa = env->pmp_state.addr[pmp_index].sa;
+    target_ulong pmp_ea = env->pmp_state.addr[pmp_index].ea;
+
+    if (pmp_sa >= tlb_sa && pmp_ea <= tlb_ea) {
+        return pmp_ea - pmp_sa + 1;
+    }
+
+    if (pmp_sa >= tlb_sa && pmp_sa <= tlb_ea && pmp_ea >= tlb_ea) {
+        return tlb_ea - pmp_sa + 1;
+    }
+
+    if (pmp_ea <= tlb_ea && pmp_ea >= tlb_sa && pmp_sa <= tlb_sa) {
+        return pmp_ea - tlb_sa + 1;
+    }
+
+    return 0;
+}
+
+/*
+ * Check whether there is a PMP entry whose range covers this page. If so,
+ * try to find the minimum granularity for the TLB size.
+ */
+bool pmp_is_range_in_tlb(CPURISCVState *env, hwaddr tlb_sa,
+    target_ulong *tlb_size)
+{
+    int i;
+    target_ulong val;
+    target_ulong tlb_ea = (tlb_sa + TARGET_PAGE_SIZE - 1);
+
+    for (i = 0; i < MAX_RISCV_PMPS; i++) {
+        val = pmp_get_tlb_size(env, i, tlb_sa, tlb_ea);
+        if (val) {
+            if (*tlb_size == 0 || *tlb_size > val) {
+                *tlb_size = val;
+            }
+        }
+    }
+
+    if (*tlb_size != 0) {
+        return true;
+    }
+
+    return false;
+}
diff --git a/target/riscv/pmp.h b/target/riscv/pmp.h
index 8e19793132..c70f2ea4c4 100644
--- a/target/riscv/pmp.h
+++ b/target/riscv/pmp.h
@@ -60,5 +60,7 @@ void pmpaddr_csr_write(CPURISCVState *env, uint32_t addr_index,
 target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index);
 bool pmp_hart_has_privs(CPURISCVState *env, target_ulong addr,
     target_ulong size, pmp_priv_t priv, target_ulong mode);
+bool pmp_is_range_in_tlb(CPURISCVState *env, hwaddr tlb_sa,
+    target_ulong *tlb_size);
 
 #endif
-- 
2.27.0



^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH v4 2/4] target/riscv/pmp.c: Fix the index offset on RV64
  2020-07-24  9:08 ` [PATCH v4 2/4] target/riscv/pmp.c: Fix the index offset on RV64 Zong Li
@ 2020-07-24  9:22     ` Bin Meng
  0 siblings, 0 replies; 10+ messages in thread
From: Bin Meng @ 2020-07-24  9:22 UTC (permalink / raw)
  To: Zong Li
  Cc: open list:RISC-V, Sagar Karandikar, Bastian Koppelmann,
	qemu-devel@nongnu.org Developers, Alistair Francis,
	Palmer Dabbelt

Hi Zong,

On Fri, Jul 24, 2020 at 5:08 PM Zong Li <zong.li@sifive.com> wrote:
>
> On RV64, reg_index is 2 (the pmpcfg2 CSR) after the first eight PMP
> entries, not 1 (the pmpcfg1 CSR) as it is on RV32. In the original
> implementation, the configuration index passed to pmp_write_cfg is
> based on "reg_index * sizeof(target_ulong)", which starts from 16 when
> reg_index is 2, but it should start from 8. Handle the RV32 and RV64
> cases separately.
>
> Signed-off-by: Zong Li <zong.li@sifive.com>
> ---
>  target/riscv/pmp.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
> index 2a2b9f5363..e0161d6aab 100644
> --- a/target/riscv/pmp.c
> +++ b/target/riscv/pmp.c
> @@ -310,6 +310,10 @@ void pmpcfg_csr_write(CPURISCVState *env, uint32_t reg_index,
>      int i;
>      uint8_t cfg_val;
>
> +#if defined(TARGET_RISCV64)
> +    reg_index >>= 1;
> +#endif
> +
>      trace_pmpcfg_csr_write(env->mhartid, reg_index, val);
>
>      if ((reg_index & 1) && (sizeof(target_ulong) == 8)) {
> @@ -335,6 +339,10 @@ target_ulong pmpcfg_csr_read(CPURISCVState *env, uint32_t reg_index)
>      target_ulong cfg_val = 0;
>      target_ulong val = 0;
>
> +#if defined(TARGET_RISCV64)
> +    reg_index >>= 1;
> +#endif
> +
>      for (i = 0; i < sizeof(target_ulong); i++) {
>          val = pmp_read_cfg(env, (reg_index * sizeof(target_ulong)) + i);
>          cfg_val |= (val << (i * 8));
> --

It seems you missed addressing my review comments on v3? reg_index
should be shifted after we call the trace function.
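
i.e. something like this (just a sketch of the ordering):

    trace_pmpcfg_csr_write(env->mhartid, reg_index, val);

#if defined(TARGET_RISCV64)
    reg_index >>= 1;
#endif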

Regards,
Bin


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v4 2/4] target/riscv/pmp.c: Fix the index offset on RV64
  2020-07-24  9:22     ` Bin Meng
@ 2020-07-25 15:06       ` Zong Li
  0 siblings, 0 replies; 10+ messages in thread
From: Zong Li @ 2020-07-25 15:06 UTC (permalink / raw)
  To: Bin Meng
  Cc: open list:RISC-V, Sagar Karandikar, Bastian Koppelmann,
	qemu-devel@nongnu.org Developers, Alistair Francis,
	Palmer Dabbelt

On Fri, Jul 24, 2020 at 5:22 PM Bin Meng <bmeng.cn@gmail.com> wrote:
>
> Hi Zong,
>
> On Fri, Jul 24, 2020 at 5:08 PM Zong Li <zong.li@sifive.com> wrote:
> >
> > On RV64, reg_index is 2 (the pmpcfg2 CSR) after the first eight PMP
> > entries, not 1 (the pmpcfg1 CSR) as it is on RV32. In the original
> > implementation, the configuration index passed to pmp_write_cfg is
> > based on "reg_index * sizeof(target_ulong)", which starts from 16 when
> > reg_index is 2, but it should start from 8. Handle the RV32 and RV64
> > cases separately.
> >
> > Signed-off-by: Zong Li <zong.li@sifive.com>
> > ---
> >  target/riscv/pmp.c | 8 ++++++++
> >  1 file changed, 8 insertions(+)
> >
> > diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
> > index 2a2b9f5363..e0161d6aab 100644
> > --- a/target/riscv/pmp.c
> > +++ b/target/riscv/pmp.c
> > @@ -310,6 +310,10 @@ void pmpcfg_csr_write(CPURISCVState *env, uint32_t reg_index,
> >      int i;
> >      uint8_t cfg_val;
> >
> > +#if defined(TARGET_RISCV64)
> > +    reg_index >>= 1;
> > +#endif
> > +
> >      trace_pmpcfg_csr_write(env->mhartid, reg_index, val);
> >
> >      if ((reg_index & 1) && (sizeof(target_ulong) == 8)) {
> > @@ -335,6 +339,10 @@ target_ulong pmpcfg_csr_read(CPURISCVState *env, uint32_t reg_index)
> >      target_ulong cfg_val = 0;
> >      target_ulong val = 0;
> >
> > +#if defined(TARGET_RISCV64)
> > +    reg_index >>= 1;
> > +#endif
> > +
> >      for (i = 0; i < sizeof(target_ulong); i++) {
> >          val = pmp_read_cfg(env, (reg_index * sizeof(target_ulong)) + i);
> >          cfg_val |= (val << (i * 8));
> > --
>
> It seems you missed addressing my review comments on v3? reg_index
> should be shifted after we call the trace function.
>

Sorry about that, something went wrong in my local tree. I am posting
the 5th version of the patches and hope it picks up this suggestion.
Thanks.

> Regards,
> Bin


^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2020-07-25 15:07 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-07-24  9:08 [PATCH v4 0/4] Fix some PMP implementations Zong Li
2020-07-24  9:08 ` [PATCH v4 1/4] target/riscv: Fix the range of pmpcfg of CSR function table Zong Li
2020-07-24  9:08 ` [PATCH v4 2/4] target/riscv/pmp.c: Fix the index offset on RV64 Zong Li
2020-07-24  9:22   ` Bin Meng
2020-07-25 15:06     ` Zong Li
2020-07-24  9:08 ` [PATCH v4 3/4] target/riscv: Fix the translation of physical address Zong Li
2020-07-24  9:08 ` [PATCH v4 4/4] target/riscv: Change the TLB page size depending on PMP entries Zong Li
