qemu-riscv.nongnu.org archive mirror
* [PATCH v6 00/12] target/riscv: Fix PMP related problem
From: Weiwei Li @ 2023-05-17  9:15 UTC (permalink / raw)
  To: qemu-riscv, qemu-devel
  Cc: palmer, alistair.francis, bin.meng, dbarboza, zhiwei_liu,
	richard.henderson, wangjunqiang, lazyparser, Weiwei Li

This patchset originally set out to fix the PMP bypass problem reported in issue https://gitlab.com/qemu-project/qemu/-/issues/1542:

* A TLB entry is cached if the matched PMP entry covers the whole page. However, PMP entries with higher priority may cover only part of the page (without matching the access address), which means different regions of the page may have different permissions, so the translation cannot be cached in that case either (patch 1; see the sketch after this list).
* Writing to pmpaddr and to the MML/MMWP bits did not trigger a TLB flush (patches 7 and 9).
* We set tlb_size to 1 so that TLB_INVALID_MASK is set and the next access goes through tlb_fill again. However, this does not work in tb_gen_code() => get_page_addr_code_hostp(): the TLB host address is cached, and the following instructions can use this host address directly, which may bypass the PMP-related checks (original patch 11).
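
To make the first point concrete, here is a minimal standalone sketch in plain C (not QEMU code) of the coverage rule that patch 1 implements in pmp_get_tlb_size(); the names PAGE_SIZE_EX, pmp_range and page_tlb_size are invented for this illustration:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE_EX 0x1000ULL          /* stand-in for TARGET_PAGE_SIZE */

typedef struct {
    uint64_t sa;                        /* start address (inclusive) */
    uint64_t ea;                        /* end address (inclusive) */
} pmp_range;

/*
 * Return PAGE_SIZE_EX if the page containing addr may be cached as a whole,
 * or 1 if an entry covers only part of the page and thus splits it into
 * regions with potentially different permissions.
 */
static uint64_t page_tlb_size(const pmp_range *pmp, int n, uint64_t addr)
{
    uint64_t tlb_sa = addr & ~(PAGE_SIZE_EX - 1);
    uint64_t tlb_ea = tlb_sa + PAGE_SIZE_EX - 1;

    for (int i = 0; i < n; i++) {       /* entries in priority order */
        if (pmp[i].sa <= tlb_sa && pmp[i].ea >= tlb_ea) {
            return PAGE_SIZE_EX;        /* whole page, single permission */
        }
        if ((pmp[i].sa >= tlb_sa && pmp[i].sa <= tlb_ea) ||
            (pmp[i].ea >= tlb_sa && pmp[i].ea <= tlb_ea)) {
            return 1;                   /* page is split, do not cache */
        }
    }
    return PAGE_SIZE_EX;                /* no rule touches the page */
}

int main(void)
{
    /* Example from patch 1: PMP0 covers only 0x80000008..0x8000000F (R),
     * PMP1 covers the whole page (RWX). */
    pmp_range pmp[] = {
        { 0x80000008, 0x8000000F },
        { 0x80000000, 0x80000FFF },
    };

    /* A write to 0x80000000 matches PMP1, but PMP0 splits the page, so the
     * translation must not be cached: this prints "tlb_size = 1". */
    printf("tlb_size = %llu\n",
           (unsigned long long)page_tlb_size(pmp, 2, 0x80000000));
    return 0;
}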

The port is available here:
https://github.com/plctlab/plct-qemu/tree/plct-pmp-fix-v6

v6:

* Update comments in Patch 1
* Remove the merged Patch 11

v5:

* Move the original Patch 6 to Patch 3
* Add Patch 4 to change the return type of pmp_hart_has_privs() to bool
* Add Patch 5 to make RLB/MML/MMWP bits writable only when Smepmp is enabled
* Add Patch 6 to remove unused parameters in pmp_hart_has_privs_default()
* Add Patch 7 to flush the TLB when MMWP or MML bits are changed
* Add Patch 8 to update the next rule addr in pmpaddr_csr_write()
* Add Patch 13 to deny access if the access is partially inside the PMP entry

v4:

* Update comments for Patch 1, and move partial check code from Patch 2 to Patch 1

* Restore log message change in Patch 2

* Update commit message and the way to improve the problem in Patch 6

v3:

* Ignore disabled PMP entry in pmp_get_tlb_size() in Patch 1

* Drop Patch 5, since the TB jump cache is already flushed in tlb_flush, so flushing the TBs seems unnecessary.

* Fix commit message problems in Patch 8 (Patch 7 in new patchset)

v2:

* Update commit message for patch 1
* Add a default tlb_size when PMP is disabled or there are no rules, and only get the TLB size when translation succeeds, in patch 2
* Update get_page_addr_code_hostp instead of probe_access_internal to fix the cached host address for instruction fetches in patch 6
* Add patch 7 to make the short cut really work in pmp_hart_has_privs
* Add patch 8 to use pmp_update_rule_addr() and pmp_update_rule_nums() separately

Weiwei Li (12):
  target/riscv: Update pmp_get_tlb_size()
  target/riscv: Move pmp_get_tlb_size apart from
    get_physical_address_pmp
  target/riscv: Make the short cut really work in pmp_hart_has_privs
  target/riscv: Change the return type of pmp_hart_has_privs() to bool
  target/riscv: Make RLB/MML/MMWP bits writable only when Smepmp is
    enabled
  target/riscv: Remove unused parameters in pmp_hart_has_privs_default()
  target/riscv: Flush TLB when MMWP or MML bits are changed
  target/riscv: Update the next rule addr in pmpaddr_csr_write()
  target/riscv: Flush TLB when pmpaddr is updated
  target/riscv: Flush TLB only when pmpcfg/pmpaddr really changes
  target/riscv: Separate pmp_update_rule() in pmpcfg_csr_write
  target/riscv: Deny access if access is partially inside the PMP entry

 target/riscv/cpu_helper.c |  27 ++---
 target/riscv/pmp.c        | 203 ++++++++++++++++++++++----------------
 target/riscv/pmp.h        |  11 +--
 3 files changed, 135 insertions(+), 106 deletions(-)

-- 
2.25.1




* [PATCH v6 01/12] target/riscv: Update pmp_get_tlb_size()
From: Weiwei Li @ 2023-05-17  9:15 UTC (permalink / raw)
  To: qemu-riscv, qemu-devel
  Cc: palmer, alistair.francis, bin.meng, dbarboza, zhiwei_liu,
	richard.henderson, wangjunqiang, lazyparser, Weiwei Li

PMP entries up to and including the matched one may cover only part of the
TLB page, and this may split the page into regions with different
permissions. For example, with PMP0 (0x80000008~0x8000000F, R) and PMP1
(0x80000000~0x80000FFF, RWX), a write access to 0x80000000 matches PMP1.
However, we cannot cache the translation result in the TLB, since that would
let a write access to 0x80000008 bypass the check of PMP0. So we should check
all PMP entries instead of only the matched one in pmp_get_tlb_size() and set
tlb_size to 1 in this case.
Set tlb_size to TARGET_PAGE_SIZE if PMP is not supported or there are no PMP
rules.

Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu_helper.c |  7 ++--
 target/riscv/pmp.c        | 69 ++++++++++++++++++++++++++++++---------
 target/riscv/pmp.h        |  3 +-
 3 files changed, 57 insertions(+), 22 deletions(-)

diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index 57d04385f1..de585b14cc 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -714,11 +714,8 @@ static int get_physical_address_pmp(CPURISCVState *env, int *prot,
     }
 
     *prot = pmp_priv_to_page_prot(pmp_priv);
-    if ((tlb_size != NULL) && pmp_index != MAX_RISCV_PMPS) {
-        target_ulong tlb_sa = addr & ~(TARGET_PAGE_SIZE - 1);
-        target_ulong tlb_ea = tlb_sa + TARGET_PAGE_SIZE - 1;
-
-        *tlb_size = pmp_get_tlb_size(env, pmp_index, tlb_sa, tlb_ea);
+    if (tlb_size != NULL) {
+        *tlb_size = pmp_get_tlb_size(env, addr);
     }
 
     return TRANSLATE_SUCCESS;
diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index 1f5aca42e8..406cff74f2 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -601,28 +601,67 @@ target_ulong mseccfg_csr_read(CPURISCVState *env)
 }
 
 /*
- * Calculate the TLB size if the start address or the end address of
- * PMP entry is presented in the TLB page.
+ * Calculate the TLB size.
+ * It's possible that PMP regions only cover partial of the TLB page, and
+ * this may split the page into regions with different permissions.
+ * For example if PMP0 is (0x80000008~0x8000000F, R) and PMP1 is (0x80000000
+ * ~0x80000FFF, RWX), then region 0x80000008~0x8000000F has R permission, and
+ * the other regions in this page have RWX permissions.
+ * A write access to 0x80000000 will match PMP1. However we cannot cache the
+ * translation result in the TLB since this will make the write access to
+ * 0x80000008 bypass the check of PMP0.
+ * To avoid this we return a size of 1 (which means no caching) if the PMP
+ * region only covers partial of the TLB page.
  */
-target_ulong pmp_get_tlb_size(CPURISCVState *env, int pmp_index,
-                              target_ulong tlb_sa, target_ulong tlb_ea)
+target_ulong pmp_get_tlb_size(CPURISCVState *env, target_ulong addr)
 {
-    target_ulong pmp_sa = env->pmp_state.addr[pmp_index].sa;
-    target_ulong pmp_ea = env->pmp_state.addr[pmp_index].ea;
+    target_ulong pmp_sa;
+    target_ulong pmp_ea;
+    target_ulong tlb_sa = addr & ~(TARGET_PAGE_SIZE - 1);
+    target_ulong tlb_ea = tlb_sa + TARGET_PAGE_SIZE - 1;
+    int i;
 
-    if (pmp_sa <= tlb_sa && pmp_ea >= tlb_ea) {
+    /*
+     * If PMP is not supported or there are no PMP rules, the TLB page will not
+     * be split into regions with different permissions by PMP so we set the
+     * size to TARGET_PAGE_SIZE.
+     */
+    if (!riscv_cpu_cfg(env)->pmp || !pmp_get_num_rules(env)) {
         return TARGET_PAGE_SIZE;
-    } else {
+    }
+
+    for (i = 0; i < MAX_RISCV_PMPS; i++) {
+        if (pmp_get_a_field(env->pmp_state.pmp[i].cfg_reg) == PMP_AMATCH_OFF) {
+            continue;
+        }
+
+        pmp_sa = env->pmp_state.addr[i].sa;
+        pmp_ea = env->pmp_state.addr[i].ea;
+
         /*
-         * At this point we have a tlb_size that is the smallest possible size
-         * That fits within a TARGET_PAGE_SIZE and the PMP region.
-         *
-         * If the size is less then TARGET_PAGE_SIZE we drop the size to 1.
-         * This means the result isn't cached in the TLB and is only used for
-         * a single translation.
+         * Only the first PMP entry that covers (whole or partial of) the TLB
+         * page really matters:
+         * If it covers the whole TLB page, set the size to TARGET_PAGE_SIZE,
+         * since the following PMP entries have lower priority and will not
+         * affect the permissions of the page.
+         * If it only covers partial of the TLB page, set the size to 1 since
+         * the allowed permissions of the region may be different from other
+         * region of the page.
          */
-        return 1;
+        if (pmp_sa <= tlb_sa && pmp_ea >= tlb_ea) {
+            return TARGET_PAGE_SIZE;
+        } else if ((pmp_sa >= tlb_sa && pmp_sa <= tlb_ea) ||
+                   (pmp_ea >= tlb_sa && pmp_ea <= tlb_ea)) {
+            return 1;
+        }
     }
+
+    /*
+     * If no PMP entry matches the TLB page, the TLB page will also not be
+     * split into regions with different permissions by PMP so we set the size
+     * to TARGET_PAGE_SIZE.
+     */
+    return TARGET_PAGE_SIZE;
 }
 
 /*
diff --git a/target/riscv/pmp.h b/target/riscv/pmp.h
index b296ea1fc6..0a7e24750b 100644
--- a/target/riscv/pmp.h
+++ b/target/riscv/pmp.h
@@ -76,8 +76,7 @@ int pmp_hart_has_privs(CPURISCVState *env, target_ulong addr,
                        target_ulong size, pmp_priv_t privs,
                        pmp_priv_t *allowed_privs,
                        target_ulong mode);
-target_ulong pmp_get_tlb_size(CPURISCVState *env, int pmp_index,
-                              target_ulong tlb_sa, target_ulong tlb_ea);
+target_ulong pmp_get_tlb_size(CPURISCVState *env, target_ulong addr);
 void pmp_update_rule_addr(CPURISCVState *env, uint32_t pmp_index);
 void pmp_update_rule_nums(CPURISCVState *env);
 uint32_t pmp_get_num_rules(CPURISCVState *env);
-- 
2.25.1




* [PATCH v6 02/12] target/riscv: Move pmp_get_tlb_size apart from get_physical_address_pmp
From: Weiwei Li @ 2023-05-17  9:15 UTC (permalink / raw)
  To: qemu-riscv, qemu-devel
  Cc: palmer, alistair.francis, bin.meng, dbarboza, zhiwei_liu,
	richard.henderson, wangjunqiang, lazyparser, Weiwei Li

pmp_get_tlb_size can be separated from get_physical_address_pmp and is only
needed when ret == TRANSLATE_SUCCESS.

Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu_helper.c | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index de585b14cc..15eafb342d 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -687,14 +687,11 @@ void riscv_cpu_set_mode(CPURISCVState *env, target_ulong newpriv)
  *
  * @env: CPURISCVState
  * @prot: The returned protection attributes
- * @tlb_size: TLB page size containing addr. It could be modified after PMP
- *            permission checking. NULL if not set TLB page for addr.
  * @addr: The physical address to be checked permission
  * @access_type: The type of MMU access
  * @mode: Indicates current privilege level.
  */
-static int get_physical_address_pmp(CPURISCVState *env, int *prot,
-                                    target_ulong *tlb_size, hwaddr addr,
+static int get_physical_address_pmp(CPURISCVState *env, int *prot, hwaddr addr,
                                     int size, MMUAccessType access_type,
                                     int mode)
 {
@@ -714,9 +711,6 @@ static int get_physical_address_pmp(CPURISCVState *env, int *prot,
     }
 
     *prot = pmp_priv_to_page_prot(pmp_priv);
-    if (tlb_size != NULL) {
-        *tlb_size = pmp_get_tlb_size(env, addr);
-    }
 
     return TRANSLATE_SUCCESS;
 }
@@ -905,7 +899,7 @@ restart:
         }
 
         int pmp_prot;
-        int pmp_ret = get_physical_address_pmp(env, &pmp_prot, NULL, pte_addr,
+        int pmp_ret = get_physical_address_pmp(env, &pmp_prot, pte_addr,
                                                sizeof(target_ulong),
                                                MMU_DATA_LOAD, PRV_S);
         if (pmp_ret != TRANSLATE_SUCCESS) {
@@ -1301,8 +1295,9 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
             prot &= prot2;
 
             if (ret == TRANSLATE_SUCCESS) {
-                ret = get_physical_address_pmp(env, &prot_pmp, &tlb_size, pa,
+                ret = get_physical_address_pmp(env, &prot_pmp, pa,
                                                size, access_type, mode);
+                tlb_size = pmp_get_tlb_size(env, pa);
 
                 qemu_log_mask(CPU_LOG_MMU,
                               "%s PMP address=" HWADDR_FMT_plx " ret %d prot"
@@ -1334,8 +1329,9 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
                       __func__, address, ret, pa, prot);
 
         if (ret == TRANSLATE_SUCCESS) {
-            ret = get_physical_address_pmp(env, &prot_pmp, &tlb_size, pa,
+            ret = get_physical_address_pmp(env, &prot_pmp, pa,
                                            size, access_type, mode);
+            tlb_size = pmp_get_tlb_size(env, pa);
 
             qemu_log_mask(CPU_LOG_MMU,
                           "%s PMP address=" HWADDR_FMT_plx " ret %d prot"
-- 
2.25.1




* [PATCH v6 03/12] target/riscv: Make the short cut really work in pmp_hart_has_privs
From: Weiwei Li @ 2023-05-17  9:15 UTC (permalink / raw)
  To: qemu-riscv, qemu-devel
  Cc: palmer, alistair.francis, bin.meng, dbarboza, zhiwei_liu,
	richard.henderson, wangjunqiang, lazyparser, Weiwei Li

Return the result directly in the short cut, since we needn't do the
following checks on the PMP entries if there are no PMP rules.

Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/pmp.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index 406cff74f2..9cc5c0e9e8 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -316,6 +316,7 @@ int pmp_hart_has_privs(CPURISCVState *env, target_ulong addr,
                                        allowed_privs, mode)) {
             ret = MAX_RISCV_PMPS;
         }
+        return ret;
     }
 
     if (size == 0) {
-- 
2.25.1




* [PATCH v6 04/12] target/riscv: Change the return type of pmp_hart_has_privs() to bool
From: Weiwei Li @ 2023-05-17  9:15 UTC (permalink / raw)
  To: qemu-riscv, qemu-devel
  Cc: palmer, alistair.francis, bin.meng, dbarboza, zhiwei_liu,
	richard.henderson, wangjunqiang, lazyparser, Weiwei Li

We no longer need the pmp_index of the matched PMP entry.

Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/cpu_helper.c |  8 ++++----
 target/riscv/pmp.c        | 32 +++++++++++++-------------------
 target/riscv/pmp.h        |  8 ++++----
 3 files changed, 21 insertions(+), 27 deletions(-)

diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index 15eafb342d..7c736c256e 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -696,16 +696,16 @@ static int get_physical_address_pmp(CPURISCVState *env, int *prot, hwaddr addr,
                                     int mode)
 {
     pmp_priv_t pmp_priv;
-    int pmp_index = -1;
+    bool pmp_has_privs;
 
     if (!riscv_cpu_cfg(env)->pmp) {
         *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
         return TRANSLATE_SUCCESS;
     }
 
-    pmp_index = pmp_hart_has_privs(env, addr, size, 1 << access_type,
-                                   &pmp_priv, mode);
-    if (pmp_index < 0) {
+    pmp_has_privs = pmp_hart_has_privs(env, addr, size, 1 << access_type,
+                                       &pmp_priv, mode);
+    if (!pmp_has_privs) {
         *prot = 0;
         return TRANSLATE_PMP_FAIL;
     }
diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index 9cc5c0e9e8..1d42cfb5f8 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -296,27 +296,23 @@ static bool pmp_hart_has_privs_default(CPURISCVState *env, target_ulong addr,
 
 /*
  * Check if the address has required RWX privs to complete desired operation
- * Return PMP rule index if a pmp rule match
- * Return MAX_RISCV_PMPS if default match
- * Return negtive value if no match
+ * Return true if a pmp rule match or default match
+ * Return false if no match
  */
-int pmp_hart_has_privs(CPURISCVState *env, target_ulong addr,
-                       target_ulong size, pmp_priv_t privs,
-                       pmp_priv_t *allowed_privs, target_ulong mode)
+bool pmp_hart_has_privs(CPURISCVState *env, target_ulong addr,
+                        target_ulong size, pmp_priv_t privs,
+                        pmp_priv_t *allowed_privs, target_ulong mode)
 {
     int i = 0;
-    int ret = -1;
+    bool ret = false;
     int pmp_size = 0;
     target_ulong s = 0;
     target_ulong e = 0;
 
     /* Short cut if no rules */
     if (0 == pmp_get_num_rules(env)) {
-        if (pmp_hart_has_privs_default(env, addr, size, privs,
-                                       allowed_privs, mode)) {
-            ret = MAX_RISCV_PMPS;
-        }
-        return ret;
+        return pmp_hart_has_privs_default(env, addr, size, privs,
+                                          allowed_privs, mode);
     }
 
     if (size == 0) {
@@ -345,7 +341,7 @@ int pmp_hart_has_privs(CPURISCVState *env, target_ulong addr,
         if ((s + e) == 1) {
             qemu_log_mask(LOG_GUEST_ERROR,
                           "pmp violation - access is partially inside\n");
-            ret = -1;
+            ret = false;
             break;
         }
 
@@ -453,17 +449,15 @@ int pmp_hart_has_privs(CPURISCVState *env, target_ulong addr,
              * defined with PMP must be used. We shouldn't fallback on
              * finding default privileges.
              */
-            ret = i;
+            ret = true;
             break;
         }
     }
 
     /* No rule matched */
-    if (ret == -1) {
-        if (pmp_hart_has_privs_default(env, addr, size, privs,
-                                       allowed_privs, mode)) {
-            ret = MAX_RISCV_PMPS;
-        }
+    if (!ret) {
+        ret = pmp_hart_has_privs_default(env, addr, size, privs,
+                                         allowed_privs, mode);
     }
 
     return ret;
diff --git a/target/riscv/pmp.h b/target/riscv/pmp.h
index 0a7e24750b..cf5c99f8e6 100644
--- a/target/riscv/pmp.h
+++ b/target/riscv/pmp.h
@@ -72,10 +72,10 @@ target_ulong mseccfg_csr_read(CPURISCVState *env);
 void pmpaddr_csr_write(CPURISCVState *env, uint32_t addr_index,
                        target_ulong val);
 target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index);
-int pmp_hart_has_privs(CPURISCVState *env, target_ulong addr,
-                       target_ulong size, pmp_priv_t privs,
-                       pmp_priv_t *allowed_privs,
-                       target_ulong mode);
+bool pmp_hart_has_privs(CPURISCVState *env, target_ulong addr,
+                        target_ulong size, pmp_priv_t privs,
+                        pmp_priv_t *allowed_privs,
+                        target_ulong mode);
 target_ulong pmp_get_tlb_size(CPURISCVState *env, target_ulong addr);
 void pmp_update_rule_addr(CPURISCVState *env, uint32_t pmp_index);
 void pmp_update_rule_nums(CPURISCVState *env);
-- 
2.25.1




* [PATCH v6 05/12] target/riscv: Make RLB/MML/MMWP bits writable only when Smepmp is enabled
From: Weiwei Li @ 2023-05-17  9:15 UTC (permalink / raw)
  To: qemu-riscv, qemu-devel
  Cc: palmer, alistair.francis, bin.meng, dbarboza, zhiwei_liu,
	richard.henderson, wangjunqiang, lazyparser, Weiwei Li

The RLB/MML/MMWP bits in the mseccfg CSR are introduced by the Smepmp
extension, so they should be writable and allowed to be set to 1 only when
cfg.epmp is true. Then we also needn't check epmp in
pmp_hart_has_privs_default().

Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/pmp.c | 50 ++++++++++++++++++++++++----------------------
 1 file changed, 26 insertions(+), 24 deletions(-)

diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index 1d42cfb5f8..10a4c7c8f4 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -243,30 +243,28 @@ static bool pmp_hart_has_privs_default(CPURISCVState *env, target_ulong addr,
 {
     bool ret;
 
-    if (riscv_cpu_cfg(env)->epmp) {
-        if (MSECCFG_MMWP_ISSET(env)) {
-            /*
-             * The Machine Mode Whitelist Policy (mseccfg.MMWP) is set
-             * so we default to deny all, even for M-mode.
-             */
+    if (MSECCFG_MMWP_ISSET(env)) {
+        /*
+         * The Machine Mode Whitelist Policy (mseccfg.MMWP) is set
+         * so we default to deny all, even for M-mode.
+         */
+        *allowed_privs = 0;
+        return false;
+    } else if (MSECCFG_MML_ISSET(env)) {
+        /*
+         * The Machine Mode Lockdown (mseccfg.MML) bit is set
+         * so we can only execute code in M-mode with an applicable
+         * rule. Other modes are disabled.
+         */
+        if (mode == PRV_M && !(privs & PMP_EXEC)) {
+            ret = true;
+            *allowed_privs = PMP_READ | PMP_WRITE;
+        } else {
+            ret = false;
             *allowed_privs = 0;
-            return false;
-        } else if (MSECCFG_MML_ISSET(env)) {
-            /*
-             * The Machine Mode Lockdown (mseccfg.MML) bit is set
-             * so we can only execute code in M-mode with an applicable
-             * rule. Other modes are disabled.
-             */
-            if (mode == PRV_M && !(privs & PMP_EXEC)) {
-                ret = true;
-                *allowed_privs = PMP_READ | PMP_WRITE;
-            } else {
-                ret = false;
-                *allowed_privs = 0;
-            }
-
-            return ret;
         }
+
+        return ret;
     }
 
     if (!riscv_cpu_cfg(env)->pmp || (mode == PRV_M)) {
@@ -580,8 +578,12 @@ void mseccfg_csr_write(CPURISCVState *env, target_ulong val)
         }
     }
 
-    /* Sticky bits */
-    val |= (env->mseccfg & (MSECCFG_MMWP | MSECCFG_MML));
+    if (riscv_cpu_cfg(env)->epmp) {
+        /* Sticky bits */
+        val |= (env->mseccfg & (MSECCFG_MMWP | MSECCFG_MML));
+    } else {
+        val &= ~(MSECCFG_MMWP | MSECCFG_MML | MSECCFG_RLB);
+    }
 
     env->mseccfg = val;
 }
-- 
2.25.1




* [PATCH v6 06/12] target/riscv: Remove unused parameters in pmp_hart_has_privs_default()
From: Weiwei Li @ 2023-05-17  9:15 UTC (permalink / raw)
  To: qemu-riscv, qemu-devel
  Cc: palmer, alistair.francis, bin.meng, dbarboza, zhiwei_liu,
	richard.henderson, wangjunqiang, lazyparser, Weiwei Li

The addr and size parameters in pmp_hart_has_privs_default() are unused.

Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/pmp.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index 10a4c7c8f4..666fbb74b9 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -236,8 +236,7 @@ static int pmp_is_in_range(CPURISCVState *env, int pmp_index,
 /*
  * Check if the address has required RWX privs when no PMP entry is matched.
  */
-static bool pmp_hart_has_privs_default(CPURISCVState *env, target_ulong addr,
-                                       target_ulong size, pmp_priv_t privs,
+static bool pmp_hart_has_privs_default(CPURISCVState *env, pmp_priv_t privs,
                                        pmp_priv_t *allowed_privs,
                                        target_ulong mode)
 {
@@ -309,8 +308,7 @@ bool pmp_hart_has_privs(CPURISCVState *env, target_ulong addr,
 
     /* Short cut if no rules */
     if (0 == pmp_get_num_rules(env)) {
-        return pmp_hart_has_privs_default(env, addr, size, privs,
-                                          allowed_privs, mode);
+        return pmp_hart_has_privs_default(env, privs, allowed_privs, mode);
     }
 
     if (size == 0) {
@@ -454,8 +452,7 @@ bool pmp_hart_has_privs(CPURISCVState *env, target_ulong addr,
 
     /* No rule matched */
     if (!ret) {
-        ret = pmp_hart_has_privs_default(env, addr, size, privs,
-                                         allowed_privs, mode);
+        ret = pmp_hart_has_privs_default(env, privs, allowed_privs, mode);
     }
 
     return ret;
-- 
2.25.1




* [PATCH v6 07/12] target/riscv: Flush TLB when MMWP or MML bits are changed
From: Weiwei Li @ 2023-05-17  9:15 UTC (permalink / raw)
  To: qemu-riscv, qemu-devel
  Cc: palmer, alistair.francis, bin.meng, dbarboza, zhiwei_liu,
	richard.henderson, wangjunqiang, lazyparser, Weiwei Li

The MMWP and MML bits may affect the allowed privs of PMP entries and the
default privs, both of which may change the allowed privs of existing TLB
entries. So we need to flush the TLB when they are changed.

Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/pmp.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index 666fbb74b9..8cb8519b0c 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -578,6 +578,9 @@ void mseccfg_csr_write(CPURISCVState *env, target_ulong val)
     if (riscv_cpu_cfg(env)->epmp) {
         /* Sticky bits */
         val |= (env->mseccfg & (MSECCFG_MMWP | MSECCFG_MML));
+        if ((val ^ env->mseccfg) & (MSECCFG_MMWP | MSECCFG_MML)) {
+            tlb_flush(env_cpu(env));
+        }
     } else {
         val &= ~(MSECCFG_MMWP | MSECCFG_MML | MSECCFG_RLB);
     }
-- 
2.25.1




* [PATCH v6 08/12] target/riscv: Update the next rule addr in pmpaddr_csr_write()
From: Weiwei Li @ 2023-05-17  9:15 UTC (permalink / raw)
  To: qemu-riscv, qemu-devel
  Cc: palmer, alistair.francis, bin.meng, dbarboza, zhiwei_liu,
	richard.henderson, wangjunqiang, lazyparser, Weiwei Li

Currently only the rule addr of the same index is updated when a pmpaddr
CSR is modified. However, the rule addr of the next PMP entry may also be
affected if its A field is PMP_AMATCH_TOR, so we should update it as well
in this case.

A write to a pmpaddr CSR does not affect the rule count, so we needn't call
pmp_update_rule_nums() in pmpaddr_csr_write().
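
For illustration, a minimal sketch of TOR address decoding (following the
RISC-V privileged spec rather than the exact QEMU code; tor_range and the
pmpaddr array are names invented here). A TOR entry takes its start address
from the previous pmpaddr register and its end address from its own, so
writing pmpaddr[i] also moves the start address of a TOR entry i + 1:

#include <stdint.h>

/* Decode the address range of TOR entry i (pmpaddr holds address >> 2). */
static void tor_range(const uint64_t *pmpaddr, int i,
                      uint64_t *sa, uint64_t *ea)
{
    *sa = (i == 0) ? 0 : pmpaddr[i - 1] << 2;  /* from the previous entry */
    *ea = (pmpaddr[i] << 2) - 1;               /* from this entry */
}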

Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/pmp.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index 8cb8519b0c..efe72afe29 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -507,6 +507,7 @@ void pmpaddr_csr_write(CPURISCVState *env, uint32_t addr_index,
                        target_ulong val)
 {
     trace_pmpaddr_csr_write(env->mhartid, addr_index, val);
+    bool is_next_cfg_tor = false;
 
     if (addr_index < MAX_RISCV_PMPS) {
         /*
@@ -515,9 +516,9 @@ void pmpaddr_csr_write(CPURISCVState *env, uint32_t addr_index,
          */
         if (addr_index + 1 < MAX_RISCV_PMPS) {
             uint8_t pmp_cfg = env->pmp_state.pmp[addr_index + 1].cfg_reg;
+            is_next_cfg_tor = PMP_AMATCH_TOR == pmp_get_a_field(pmp_cfg);
 
-            if (pmp_cfg & PMP_LOCK &&
-                PMP_AMATCH_TOR == pmp_get_a_field(pmp_cfg)) {
+            if (pmp_cfg & PMP_LOCK && is_next_cfg_tor) {
                 qemu_log_mask(LOG_GUEST_ERROR,
                               "ignoring pmpaddr write - pmpcfg + 1 locked\n");
                 return;
@@ -526,7 +527,10 @@ void pmpaddr_csr_write(CPURISCVState *env, uint32_t addr_index,
 
         if (!pmp_is_locked(env, addr_index)) {
             env->pmp_state.pmp[addr_index].addr_reg = val;
-            pmp_update_rule(env, addr_index);
+            pmp_update_rule_addr(env, addr_index);
+            if (is_next_cfg_tor) {
+                pmp_update_rule_addr(env, addr_index + 1);
+            }
         } else {
             qemu_log_mask(LOG_GUEST_ERROR,
                           "ignoring pmpaddr write - locked\n");
-- 
2.25.1




* [PATCH v6 09/12] target/riscv: Flush TLB when pmpaddr is updated
From: Weiwei Li @ 2023-05-17  9:15 UTC (permalink / raw)
  To: qemu-riscv, qemu-devel
  Cc: palmer, alistair.francis, bin.meng, dbarboza, zhiwei_liu,
	richard.henderson, wangjunqiang, lazyparser, Weiwei Li

The TLB should be flushed not only for pmpcfg CSR changes, but also for
pmpaddr CSR changes.

Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
---
 target/riscv/pmp.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index efe72afe29..70fb2327f7 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -531,6 +531,7 @@ void pmpaddr_csr_write(CPURISCVState *env, uint32_t addr_index,
             if (is_next_cfg_tor) {
                 pmp_update_rule_addr(env, addr_index + 1);
             }
+            tlb_flush(env_cpu(env));
         } else {
             qemu_log_mask(LOG_GUEST_ERROR,
                           "ignoring pmpaddr write - locked\n");
-- 
2.25.1




* [PATCH v6 10/12] target/riscv: Flush TLB only when pmpcfg/pmpaddr really changes
From: Weiwei Li @ 2023-05-17  9:15 UTC (permalink / raw)
  To: qemu-riscv, qemu-devel
  Cc: palmer, alistair.francis, bin.meng, dbarboza, zhiwei_liu,
	richard.henderson, wangjunqiang, lazyparser, Weiwei Li

The TLB needn't be flushed when pmpcfg/pmpaddr don't actually change.

Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
---
 target/riscv/pmp.c | 28 ++++++++++++++++++----------
 1 file changed, 18 insertions(+), 10 deletions(-)

diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index 70fb2327f7..37b25e9b34 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -26,7 +26,7 @@
 #include "trace.h"
 #include "exec/exec-all.h"
 
-static void pmp_write_cfg(CPURISCVState *env, uint32_t addr_index,
+static bool pmp_write_cfg(CPURISCVState *env, uint32_t addr_index,
                           uint8_t val);
 static uint8_t pmp_read_cfg(CPURISCVState *env, uint32_t addr_index);
 static void pmp_update_rule(CPURISCVState *env, uint32_t pmp_index);
@@ -83,7 +83,7 @@ static inline uint8_t pmp_read_cfg(CPURISCVState *env, uint32_t pmp_index)
  * Accessor to set the cfg reg for a specific PMP/HART
  * Bounds checks and relevant lock bit.
  */
-static void pmp_write_cfg(CPURISCVState *env, uint32_t pmp_index, uint8_t val)
+static bool pmp_write_cfg(CPURISCVState *env, uint32_t pmp_index, uint8_t val)
 {
     if (pmp_index < MAX_RISCV_PMPS) {
         bool locked = true;
@@ -119,14 +119,17 @@ static void pmp_write_cfg(CPURISCVState *env, uint32_t pmp_index, uint8_t val)
 
         if (locked) {
             qemu_log_mask(LOG_GUEST_ERROR, "ignoring pmpcfg write - locked\n");
-        } else {
+        } else if (env->pmp_state.pmp[pmp_index].cfg_reg != val) {
             env->pmp_state.pmp[pmp_index].cfg_reg = val;
             pmp_update_rule(env, pmp_index);
+            return true;
         }
     } else {
         qemu_log_mask(LOG_GUEST_ERROR,
                       "ignoring pmpcfg write - out of bounds\n");
     }
+
+    return false;
 }
 
 static void pmp_decode_napot(target_ulong a, target_ulong *sa,
@@ -467,16 +470,19 @@ void pmpcfg_csr_write(CPURISCVState *env, uint32_t reg_index,
     int i;
     uint8_t cfg_val;
     int pmpcfg_nums = 2 << riscv_cpu_mxl(env);
+    bool modified = false;
 
     trace_pmpcfg_csr_write(env->mhartid, reg_index, val);
 
     for (i = 0; i < pmpcfg_nums; i++) {
         cfg_val = (val >> 8 * i)  & 0xff;
-        pmp_write_cfg(env, (reg_index * 4) + i, cfg_val);
+        modified |= pmp_write_cfg(env, (reg_index * 4) + i, cfg_val);
     }
 
     /* If PMP permission of any addr has been changed, flush TLB pages. */
-    tlb_flush(env_cpu(env));
+    if (modified) {
+        tlb_flush(env_cpu(env));
+    }
 }
 
 
@@ -526,12 +532,14 @@ void pmpaddr_csr_write(CPURISCVState *env, uint32_t addr_index,
         }
 
         if (!pmp_is_locked(env, addr_index)) {
-            env->pmp_state.pmp[addr_index].addr_reg = val;
-            pmp_update_rule_addr(env, addr_index);
-            if (is_next_cfg_tor) {
-                pmp_update_rule_addr(env, addr_index + 1);
+            if (env->pmp_state.pmp[addr_index].addr_reg != val) {
+                env->pmp_state.pmp[addr_index].addr_reg = val;
+                pmp_update_rule_addr(env, addr_index);
+                if (is_next_cfg_tor) {
+                    pmp_update_rule_addr(env, addr_index + 1);
+                }
+                tlb_flush(env_cpu(env));
             }
-            tlb_flush(env_cpu(env));
         } else {
             qemu_log_mask(LOG_GUEST_ERROR,
                           "ignoring pmpaddr write - locked\n");
-- 
2.25.1




* [PATCH v6 11/12] target/riscv: Separate pmp_update_rule() in pmpcfg_csr_write
From: Weiwei Li @ 2023-05-17  9:15 UTC (permalink / raw)
  To: qemu-riscv, qemu-devel
  Cc: palmer, alistair.francis, bin.meng, dbarboza, zhiwei_liu,
	richard.henderson, wangjunqiang, lazyparser, Weiwei Li

Use pmp_update_rule_addr() and pmp_update_rule_nums() separately so that
the rule count is updated only once per pmpcfg_csr_write. Then remove
pmp_update_rule() since it becomes unused.

Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/pmp.c | 16 ++--------------
 1 file changed, 2 insertions(+), 14 deletions(-)

diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index 37b25e9b34..90bde7609c 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -29,7 +29,6 @@
 static bool pmp_write_cfg(CPURISCVState *env, uint32_t addr_index,
                           uint8_t val);
 static uint8_t pmp_read_cfg(CPURISCVState *env, uint32_t addr_index);
-static void pmp_update_rule(CPURISCVState *env, uint32_t pmp_index);
 
 /*
  * Accessor method to extract address matching type 'a field' from cfg reg
@@ -121,7 +120,7 @@ static bool pmp_write_cfg(CPURISCVState *env, uint32_t pmp_index, uint8_t val)
             qemu_log_mask(LOG_GUEST_ERROR, "ignoring pmpcfg write - locked\n");
         } else if (env->pmp_state.pmp[pmp_index].cfg_reg != val) {
             env->pmp_state.pmp[pmp_index].cfg_reg = val;
-            pmp_update_rule(env, pmp_index);
+            pmp_update_rule_addr(env, pmp_index);
             return true;
         }
     } else {
@@ -209,18 +208,6 @@ void pmp_update_rule_nums(CPURISCVState *env)
     }
 }
 
-/*
- * Convert cfg/addr reg values here into simple 'sa' --> start address and 'ea'
- *   end address values.
- *   This function is called relatively infrequently whereas the check that
- *   an address is within a pmp rule is called often, so optimise that one
- */
-static void pmp_update_rule(CPURISCVState *env, uint32_t pmp_index)
-{
-    pmp_update_rule_addr(env, pmp_index);
-    pmp_update_rule_nums(env);
-}
-
 static int pmp_is_in_range(CPURISCVState *env, int pmp_index,
                            target_ulong addr)
 {
@@ -481,6 +468,7 @@ void pmpcfg_csr_write(CPURISCVState *env, uint32_t reg_index,
 
     /* If PMP permission of any addr has been changed, flush TLB pages. */
     if (modified) {
+        pmp_update_rule_nums(env);
         tlb_flush(env_cpu(env));
     }
 }
-- 
2.25.1




* [PATCH v6 12/12] target/riscv: Deny access if access is partially inside the PMP entry
From: Weiwei Li @ 2023-05-17  9:15 UTC (permalink / raw)
  To: qemu-riscv, qemu-devel
  Cc: palmer, alistair.francis, bin.meng, dbarboza, zhiwei_liu,
	richard.henderson, wangjunqiang, lazyparser, Weiwei Li

An access should fail if it is only partially inside the PMP entry.
However, just setting ret = false doesn't really deny the access, since
pmp_hart_has_privs_default() may return true at the end of
pmp_hart_has_privs().

Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
 target/riscv/pmp.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index 90bde7609c..8b66b1bb7a 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -327,8 +327,8 @@ bool pmp_hart_has_privs(CPURISCVState *env, target_ulong addr,
         if ((s + e) == 1) {
             qemu_log_mask(LOG_GUEST_ERROR,
                           "pmp violation - access is partially inside\n");
-            ret = false;
-            break;
+            *allowed_privs = 0;
+            return false;
         }
 
         /* fully inside */
-- 
2.25.1




* Re: [PATCH v6 00/12] target/riscv: Fix PMP related problem
From: Alistair Francis @ 2023-05-18  9:46 UTC (permalink / raw)
  To: Weiwei Li
  Cc: qemu-riscv, qemu-devel, palmer, alistair.francis, bin.meng,
	dbarboza, zhiwei_liu, richard.henderson, wangjunqiang,
	lazyparser

Thanks!

Applied to riscv-to-apply.next

Alistair



