* [PATCH 0/6] plugins/cache: multicore cache emulation and minor
From: Mahmoud Mandour @ 2021-07-14 17:21 UTC
  To: qemu-devel; +Cc: Mahmoud Mandour, cota

Hello,

This series introduces some minor improvements and bug fixes to the cache
plugin, plus multicore cache modelling.

One prominent bug is a use-after-free triggered by multithreaded
linux-user programs. Since plugin_exit is not guaranteed to be called
after all callbacks have finished, it may free data that subsequent
callbacks then try to use.

Trying to uninstall the plugin on exit does not solve the problem: it
unregisters the callbacks, but it won't stop already-fired callback
instances from continuing.

To mitigate this issue, the data is NULL-ified on exit so that callbacks
can check whether plugin_exit has already been called.
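
For example, each callback now begins with a check of this shape (a sketch
of what patch 3 does; mtx and dcache are the plugin's existing globals):

    g_mutex_lock(&mtx);
    if (dcache == NULL) {        /* plugin_exit has already freed it */
        g_mutex_unlock(&mtx);
        return;
    }
    /* ... account the access as usual ... */
    g_mutex_unlock(&mtx);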

Also, an in-flight series raising the compiler warning levels (not yet
upstream by the time this is sent, but in the process of getting merged)
exposed some warnings; these are fixed in their own patch.
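
For reference, the warning in question concerns old-style declarations
(typically enabled by -Wstrict-prototypes); the distinction patch 6
applies is:

    static void log_stats();      /* not a prototype: arguments unspecified */
    static void log_stats(void);  /* a prototype: takes no arguments */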

Mahmoud Mandour (6):
  plugins/cache: Fixed a bug with destroying FIFO metadata
  plugins/cache: limited the scope of a mutex lock
  plugins/cache: Fixed a use-after-free bug with multithreaded usermode
  plugins/cache: Supported multicore cache modelling
  docs/devel/tcg-plugins: added cores arg to cache plugin
  plugins/cache: Fixed "function decl. is not a prototype" warnings

 contrib/plugins/cache.c    | 188 +++++++++++++++++++++++++++++--------
 docs/devel/tcg-plugins.rst |  13 ++-
 2 files changed, 155 insertions(+), 46 deletions(-)

-- 
2.25.1




* [PATCH 1/6] plugins/cache: Fixed a bug with destroying FIFO metadata
From: Mahmoud Mandour @ 2021-07-14 17:21 UTC
  To: qemu-devel; +Cc: Mahmoud Mandour, cota, Alex Bennée

This manifests itself as an out-of-bounds access when the associativity
degree is greater than the number of sets and FIFO is used, since
fifo_destroy() iterates over the associativity degree rather than the
number of sets; when the associativity degree is smaller, it is instead
a memory leak whenever FIFO is used.

Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
---
 contrib/plugins/cache.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/contrib/plugins/cache.c b/contrib/plugins/cache.c
index bf0d2f6097..4a71602639 100644
--- a/contrib/plugins/cache.c
+++ b/contrib/plugins/cache.c
@@ -200,7 +200,7 @@ static void fifo_destroy(Cache *cache)
 {
     int i;
 
-    for (i = 0; i < cache->assoc; i++) {
+    for (i = 0; i < cache->num_sets; i++) {
         g_queue_free(cache->sets[i].fifo_queue);
     }
 }
-- 
2.25.1




* [PATCH 2/6] plugins/cache: limited the scope of a mutex lock
From: Mahmoud Mandour @ 2021-07-14 17:21 UTC
  To: qemu-devel; +Cc: Mahmoud Mandour, cota, Alex Bennée

It's not necessary to hold the mutex across the address-translation
portion of the vcpu_mem_access callback; only the cache access and
statistics updates that follow touch shared state.

Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
---
 contrib/plugins/cache.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/contrib/plugins/cache.c b/contrib/plugins/cache.c
index 4a71602639..695fb969dc 100644
--- a/contrib/plugins/cache.c
+++ b/contrib/plugins/cache.c
@@ -355,15 +355,14 @@ static void vcpu_mem_access(unsigned int vcpu_index, qemu_plugin_meminfo_t info,
     struct qemu_plugin_hwaddr *hwaddr;
     InsnData *insn;
 
-    g_mutex_lock(&mtx);
     hwaddr = qemu_plugin_get_hwaddr(info, vaddr);
     if (hwaddr && qemu_plugin_hwaddr_is_io(hwaddr)) {
-        g_mutex_unlock(&mtx);
         return;
     }
 
     effective_addr = hwaddr ? qemu_plugin_hwaddr_phys_addr(hwaddr) : vaddr;
 
+    g_mutex_lock(&mtx);
     if (!access_cache(dcache, effective_addr)) {
         insn = (InsnData *) userdata;
         insn->dmisses++;
-- 
2.25.1




* [PATCH 3/6] plugins/cache: Fixed a use-after-free bug with multithreaded usermode
From: Mahmoud Mandour @ 2021-07-14 17:21 UTC
  To: qemu-devel; +Cc: Mahmoud Mandour, cota, Alex Bennée

Since callbacks may be interleaved because of multithreaded execution,
we should not make assumptions about `plugin_exit` either. The problem
with `plugin_exit` is that it frees shared data structures (the caches
and the `miss_ht` hash table). It must not be assumed that no callback
runs after it; any that do would use already-freed data structures.

This is mitigated in this commit by synchronizing the call to
`plugin_exit` through locking to ensure exclusive access to data
structures, and by NULL-ifying those data structures so that subsequent
callbacks can check whether they have been freed and, if so, exit
immediately.

It's okay to exit immediately and not account for those callbacks:
they wouldn't be accounted for anyway, since `plugin_exit` has already
been called once and has reported the statistics.

Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
---
 contrib/plugins/cache.c | 31 ++++++++++++++++++++++++++++++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/contrib/plugins/cache.c b/contrib/plugins/cache.c
index 695fb969dc..a452aba01c 100644
--- a/contrib/plugins/cache.c
+++ b/contrib/plugins/cache.c
@@ -363,6 +363,11 @@ static void vcpu_mem_access(unsigned int vcpu_index, qemu_plugin_meminfo_t info,
     effective_addr = hwaddr ? qemu_plugin_hwaddr_phys_addr(hwaddr) : vaddr;
 
     g_mutex_lock(&mtx);
+    if (dcache == NULL) {
+        g_mutex_unlock(&mtx);
+        return;
+    }
+
     if (!access_cache(dcache, effective_addr)) {
         insn = (InsnData *) userdata;
         insn->dmisses++;
@@ -380,6 +385,11 @@ static void vcpu_insn_exec(unsigned int vcpu_index, void *userdata)
     g_mutex_lock(&mtx);
     insn_addr = ((InsnData *) userdata)->addr;
 
+    if (icache == NULL) {
+        g_mutex_unlock(&mtx);
+        return;
+    }
+
     if (!access_cache(icache, insn_addr)) {
         insn = (InsnData *) userdata;
         insn->imisses++;
@@ -406,12 +416,24 @@ static void vcpu_tb_trans(qemu_plugin_id_t id, struct qemu_plugin_tb *tb)
             effective_addr = (uint64_t) qemu_plugin_insn_vaddr(insn);
         }
 
+        g_mutex_lock(&mtx);
+
+        /*
+         * is the plugin_exit callback called? If so, any further callback
+         * registration is useless as it won't get accounted for after calling
+         * plugin_exit once already, and also will use miss_ht after it's freed
+         */
+        if (miss_ht == NULL) {
+            g_mutex_unlock(&mtx);
+            return;
+        }
+
         /*
          * Instructions might get translated multiple times, we do not create
          * new entries for those instructions. Instead, we fetch the same
          * entry from the hash table and register it for the callback again.
          */
-        g_mutex_lock(&mtx);
+
         data = g_hash_table_lookup(miss_ht, GUINT_TO_POINTER(effective_addr));
         if (data == NULL) {
             data = g_new0(InsnData, 1);
@@ -527,13 +549,20 @@ static void log_top_insns()
 
 static void plugin_exit(qemu_plugin_id_t id, void *p)
 {
+    g_mutex_lock(&mtx);
     log_stats();
     log_top_insns();
 
     cache_free(dcache);
+    dcache = NULL;
+
     cache_free(icache);
+    icache = NULL;
 
     g_hash_table_destroy(miss_ht);
+    miss_ht = NULL;
+
+    g_mutex_unlock(&mtx);
 }
 
 static void policy_init()
-- 
2.25.1




* [PATCH 4/6] plugins/cache: Supported multicore cache modelling
From: Mahmoud Mandour @ 2021-07-14 17:21 UTC
  To: qemu-devel; +Cc: Mahmoud Mandour, cota, Alex Bennée

Multicore L1 cache modelling is introduced and is supported for both
full system emulation and linux-user.

For full-system emulation, an L1 icache and dcache are maintained for each
available core, since the number of cores is exposed to the plugin through
`qemu_plugin_n_vcpus()`.

For linux-user, a static number of cores is assumed (default 1 core, which
can be overridden with the plugin argument `cores=N`). Every memory access
goes through one of these caches; this approach is taken since it's
somewhat akin to what happens on a real setup, where a program dispatching
more threads than there are available cores will have its threads thrash
each other's caches.
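
Concretely, the mapping is a simple modulo (a one-line sketch of the
patch's approach):

    int cache_idx = vcpu_index % cores;
    /* e.g. with cores=2, threads 0,2,4,... share one modelled L1 pair
     * and threads 1,3,5,... share the other */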

Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
---
 contrib/plugins/cache.c | 156 ++++++++++++++++++++++++++++++----------
 1 file changed, 117 insertions(+), 39 deletions(-)

diff --git a/contrib/plugins/cache.c b/contrib/plugins/cache.c
index a452aba01c..60f7be208b 100644
--- a/contrib/plugins/cache.c
+++ b/contrib/plugins/cache.c
@@ -23,11 +23,6 @@ static GRand *rng;
 static int limit;
 static bool sys;
 
-static uint64_t dmem_accesses;
-static uint64_t dmisses;
-
-static uint64_t imem_accesses;
-static uint64_t imisses;
 
 enum EvictionPolicy {
     LRU,
@@ -90,13 +85,22 @@ typedef struct {
     uint64_t imisses;
 } InsnData;
 
+typedef struct {
+    uint64_t dmem_accesses;
+    uint64_t dmisses;
+    uint64_t imem_accesses;
+    uint64_t imisses;
+} CoreStats;
+
 void (*update_hit)(Cache *cache, int set, int blk);
 void (*update_miss)(Cache *cache, int set, int blk);
 
 void (*metadata_init)(Cache *cache);
 void (*metadata_destroy)(Cache *cache);
 
-Cache *dcache, *icache;
+static int cores;
+CoreStats *stats;
+Cache **dcaches, **icaches;
 
 static int pow_of_two(int num)
 {
@@ -233,10 +237,6 @@ static bool bad_cache_params(int blksize, int assoc, int cachesize)
 
 static Cache *cache_init(int blksize, int assoc, int cachesize)
 {
-    if (bad_cache_params(blksize, assoc, cachesize)) {
-        return NULL;
-    }
-
     Cache *cache;
     int i;
     uint64_t blk_mask;
@@ -263,6 +263,24 @@ static Cache *cache_init(int blksize, int assoc, int cachesize)
     return cache;
 }
 
+static Cache **caches_init(int blksize, int assoc, int cachesize)
+{
+    Cache **caches;
+    int i;
+
+    if (bad_cache_params(blksize, assoc, cachesize)) {
+        return NULL;
+    }
+
+    caches = g_new(Cache *, cores);
+
+    for (i = 0; i < cores; i++) {
+        caches[i] = cache_init(blksize, assoc, cachesize);
+    }
+
+    return caches;
+}
+
 static int get_invalid_block(Cache *cache, uint64_t set)
 {
     int i;
@@ -353,6 +371,7 @@ static void vcpu_mem_access(unsigned int vcpu_index, qemu_plugin_meminfo_t info,
 {
     uint64_t effective_addr;
     struct qemu_plugin_hwaddr *hwaddr;
+    int cache_idx;
     InsnData *insn;
 
     hwaddr = qemu_plugin_get_hwaddr(info, vaddr);
@@ -363,17 +382,24 @@ static void vcpu_mem_access(unsigned int vcpu_index, qemu_plugin_meminfo_t info,
     effective_addr = hwaddr ? qemu_plugin_hwaddr_phys_addr(hwaddr) : vaddr;
 
     g_mutex_lock(&mtx);
-    if (dcache == NULL) {
+    cache_idx = vcpu_index % cores;
+    if (dcaches[cache_idx] == NULL) {
         g_mutex_unlock(&mtx);
         return;
     }
 
-    if (!access_cache(dcache, effective_addr)) {
+    if (!access_cache(dcaches[cache_idx], effective_addr)) {
         insn = (InsnData *) userdata;
         insn->dmisses++;
-        dmisses++;
+        if (cores > 1) {
+            stats[cores].dmisses++;
+        }
+        stats[cache_idx].dmisses++;
+    }
+    if (cores > 1) {
+        stats[cores].dmem_accesses++;
     }
-    dmem_accesses++;
+    stats[cache_idx].dmem_accesses++;
     g_mutex_unlock(&mtx);
 }
 
@@ -381,21 +407,29 @@ static void vcpu_insn_exec(unsigned int vcpu_index, void *userdata)
 {
     uint64_t insn_addr;
     InsnData *insn;
+    uint64_t cache_idx;
 
     g_mutex_lock(&mtx);
     insn_addr = ((InsnData *) userdata)->addr;
 
-    if (icache == NULL) {
+    cache_idx = vcpu_index % cores;
+    if (icaches[cache_idx] == NULL) {
         g_mutex_unlock(&mtx);
         return;
     }
 
-    if (!access_cache(icache, insn_addr)) {
+    if (!access_cache(icaches[cache_idx], insn_addr)) {
         insn = (InsnData *) userdata;
         insn->imisses++;
-        imisses++;
+        if (cores > 1) {
+            stats[cores].imisses++;
+        }
+        stats[cache_idx].imisses++;
     }
-    imem_accesses++;
+    if (cores > 1) {
+        stats[cores].imem_accesses++;
+    }
+    stats[cache_idx].imem_accesses++;
     g_mutex_unlock(&mtx);
 }
 
@@ -475,6 +509,22 @@ static void cache_free(Cache *cache)
     g_free(cache);
 }
 
+/*
+ * caches_free(): free an array of Cache structs.
+ * @caches: caches to free
+ *
+ * Calling cache_free for each cache in @caches, and then NULL-ify them so that
+ * we mark them as freed, other async callbacks can check to see whether a cache
+ * is freed already or not by checking against NULL.
+ */
+static void caches_free(Cache **caches)
+{
+    for (int i = 0; i < cores; i++) {
+        cache_free(caches[i]);
+        caches[i] = NULL;
+    }
+}
+
 static int dcmp(gconstpointer a, gconstpointer b)
 {
     InsnData *insn_a = (InsnData *) a;
@@ -493,19 +543,38 @@ static int icmp(gconstpointer a, gconstpointer b)
 
 static void log_stats()
 {
-    g_autoptr(GString) rep = g_string_new("");
-    g_string_append_printf(rep,
-        "Data accesses: %lu, Misses: %lu\nMiss rate: %lf%%\n\n",
-        dmem_accesses,
-        dmisses,
-        ((double) dmisses / (double) dmem_accesses) * 100.0);
-
-    g_string_append_printf(rep,
-        "Instruction accesses: %lu, Misses: %lu\nMiss rate: %lf%%\n\n",
-        imem_accesses,
-        imisses,
-        ((double) imisses / (double) imem_accesses) * 100.0);
+    int i, iters;
+    CoreStats cs;
+    double dmiss_rate, imiss_rate;
+
+    g_autoptr(GString) rep = g_string_new("core #, data accesses, data misses,"
+                                          " dmiss rate, insn accesses,"
+                                          " insn misses, imiss rate\n");
+
+    /* Only iterate and print a sum row if cores > 1 */
+    iters = cores == 1 ? 1 : cores + 1;
+    for (i = 0; i < iters; i++) {
+        cs = stats[i];
+        dmiss_rate = ((double) cs.dmisses) / (cs.dmem_accesses) * 100.0;
+        imiss_rate = ((double) cs.imisses) / (cs.imem_accesses) * 100.0;
+
+        if (i == cores) {
+            g_string_append_printf(rep, "%-8s", "sum");
+        } else {
+            g_string_append_printf(rep, "%-8d", i);
+        }
+
+        g_string_append_printf(rep, "%-14lu %-12lu %9.4lf%%  %-14lu %-12lu"
+                               " %9.4lf%%\n",
+                               cs.dmem_accesses,
+                               cs.dmisses,
+                               cs.dmem_accesses ? dmiss_rate : 0.0,
+                               cs.imem_accesses,
+                               cs.imisses,
+                               cs.imem_accesses ? imiss_rate : 0.0);
+    }
 
+    g_string_append(rep, "\n");
     qemu_plugin_outs(rep->str);
 }
 
@@ -553,15 +622,14 @@ static void plugin_exit(qemu_plugin_id_t id, void *p)
     log_stats();
     log_top_insns();
 
-    cache_free(dcache);
-    dcache = NULL;
-
-    cache_free(icache);
-    icache = NULL;
+    caches_free(dcaches);
+    caches_free(icaches);
 
     g_hash_table_destroy(miss_ht);
     miss_ht = NULL;
 
+    g_free(stats);
+
     g_mutex_unlock(&mtx);
 }
 
@@ -608,6 +676,8 @@ int qemu_plugin_install(qemu_plugin_id_t id, const qemu_info_t *info,
 
     policy = LRU;
 
+    cores = sys ? qemu_plugin_n_vcpus() : 1;
+
     for (i = 0; i < argc; i++) {
         char *opt = argv[i];
         if (g_str_has_prefix(opt, "iblksize=")) {
@@ -624,6 +694,8 @@ int qemu_plugin_install(qemu_plugin_id_t id, const qemu_info_t *info,
             dcachesize = g_ascii_strtoll(opt + 11, NULL, 10);
         } else if (g_str_has_prefix(opt, "limit=")) {
             limit = g_ascii_strtoll(opt + 6, NULL, 10);
+        } else if (g_str_has_prefix(opt, "cores=")) {
+            cores = g_ascii_strtoll(opt + 6, NULL, 10);
         } else if (g_str_has_prefix(opt, "evict=")) {
             gchar *p = opt + 6;
             if (g_strcmp0(p, "rand") == 0) {
@@ -644,22 +716,28 @@ int qemu_plugin_install(qemu_plugin_id_t id, const qemu_info_t *info,
 
     policy_init();
 
-    dcache = cache_init(dblksize, dassoc, dcachesize);
-    if (!dcache) {
+    dcaches = caches_init(dblksize, dassoc, dcachesize);
+    if (!dcaches) {
         const char *err = cache_config_error(dblksize, dassoc, dcachesize);
         fprintf(stderr, "dcache cannot be constructed from given parameters\n");
         fprintf(stderr, "%s\n", err);
         return -1;
     }
 
-    icache = cache_init(iblksize, iassoc, icachesize);
-    if (!icache) {
+    icaches = caches_init(iblksize, iassoc, icachesize);
+    if (!icaches) {
         const char *err = cache_config_error(iblksize, iassoc, icachesize);
         fprintf(stderr, "icache cannot be constructed from given parameters\n");
         fprintf(stderr, "%s\n", err);
         return -1;
     }
 
+    /*
+     * plus one to save the sum in. If only one core is used then no need to
+     * get an auxiliary struct.
+     */
+    stats = g_new0(CoreStats, cores == 1 ? 1 : cores + 1);
+
     qemu_plugin_register_vcpu_tb_trans_cb(id, vcpu_tb_trans);
     qemu_plugin_register_atexit_cb(id, plugin_exit, NULL);
 
-- 
2.25.1




* [PATCH 5/6] docs/devel/tcg-plugins: added cores arg to cache plugin
From: Mahmoud Mandour @ 2021-07-14 17:21 UTC
  To: qemu-devel; +Cc: Mahmoud Mandour, cota, Alex Bennée
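
Document the new cores=N argument. As a hypothetical example of passing it
(the binary and paths here are illustrative, not taken from the patch):

    qemu-x86_64 -plugin ./contrib/plugins/libcache.so,arg="cores=4" \
        -d plugin ./a.out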

Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
---
 docs/devel/tcg-plugins.rst | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/docs/devel/tcg-plugins.rst b/docs/devel/tcg-plugins.rst
index 595b8e0ea4..370c11373f 100644
--- a/docs/devel/tcg-plugins.rst
+++ b/docs/devel/tcg-plugins.rst
@@ -330,11 +330,8 @@ configuration when a given working set is run::
 
 will report the following::
 
-    Data accesses: 996479, Misses: 507
-    Miss rate: 0.050879%
-
-    Instruction accesses: 2641737, Misses: 18617
-    Miss rate: 0.704726%
+    core #, data accesses, data misses, dmiss rate, insn accesses, insn misses, imiss rate
+    0       996695         508             0.0510%  2642799        18617           0.7044%
 
     address, data misses, instruction
     0x424f1e (_int_malloc), 109, movq %rax, 8(%rcx)
@@ -378,3 +375,9 @@ The plugin has a number of arguments, all of them are optional:
   Sets the eviction policy to POLICY. Available policies are: :code:`lru`,
   :code:`fifo`, and :code:`rand`. The plugin will use the specified policy for
   both instruction and data caches. (default: POLICY = :code:`lru`)
+
+  * arg="cores=N"
+
+  Sets the number of cores for which we maintain separate icache and dcache.
+  (default: for linux-user, N = 1, for full system emulation: N = cores
+  available to guest)
-- 
2.25.1




* [PATCH 6/6] plugins/cache: Fixed "function decl. is not a prototype" warnings
From: Mahmoud Mandour @ 2021-07-14 17:21 UTC
  To: qemu-devel; +Cc: Mahmoud Mandour, cota, Alex Bennée

Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
---
 contrib/plugins/cache.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/contrib/plugins/cache.c b/contrib/plugins/cache.c
index 60f7be208b..f82a8310dc 100644
--- a/contrib/plugins/cache.c
+++ b/contrib/plugins/cache.c
@@ -541,7 +541,7 @@ static int icmp(gconstpointer a, gconstpointer b)
     return insn_a->imisses < insn_b->imisses ? 1 : -1;
 }
 
-static void log_stats()
+static void log_stats(void)
 {
     int i, iters;
     CoreStats cs;
@@ -578,7 +578,7 @@ static void log_stats()
     qemu_plugin_outs(rep->str);
 }
 
-static void log_top_insns()
+static void log_top_insns(void)
 {
     int i;
     GList *curr, *miss_insns;
@@ -633,7 +633,7 @@ static void plugin_exit(qemu_plugin_id_t id, void *p)
     g_mutex_unlock(&mtx);
 }
 
-static void policy_init()
+static void policy_init(void)
 {
     switch (policy) {
     case LRU:
-- 
2.25.1




* Re: [PATCH 1/6] plugins/cache: Fixed a bug with destroying FIFO metadata
From: Alex Bennée @ 2021-07-19  9:21 UTC
  To: Mahmoud Mandour; +Cc: cota, qemu-devel


Mahmoud Mandour <ma.mandourr@gmail.com> writes:

> This manifests itself as an out-of-bounds access when the associativity
> degree is greater than the number of sets and FIFO is used, since
> fifo_destroy() iterates over the associativity degree rather than the
> number of sets; when the associativity degree is smaller, it is instead
> a memory leak whenever FIFO is used.
>
> Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>



-- 
Alex Bennée



* Re: [PATCH 2/6] plugins/cache: limited the scope of a mutex lock
From: Alex Bennée @ 2021-07-19  9:34 UTC
  To: Mahmoud Mandour; +Cc: cota, qemu-devel


Mahmoud Mandour <ma.mandourr@gmail.com> writes:

> It's not necessary to hold the mutex across the address-translation
> portion of the vcpu_mem_access callback; only the cache access and
> statistics updates that follow touch shared state.
>
> Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
> ---
>  contrib/plugins/cache.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/contrib/plugins/cache.c b/contrib/plugins/cache.c
> index 4a71602639..695fb969dc 100644
> --- a/contrib/plugins/cache.c
> +++ b/contrib/plugins/cache.c
> @@ -355,15 +355,14 @@ static void vcpu_mem_access(unsigned int vcpu_index, qemu_plugin_meminfo_t info,
>      struct qemu_plugin_hwaddr *hwaddr;
>      InsnData *insn;
>  
> -    g_mutex_lock(&mtx);
>      hwaddr = qemu_plugin_get_hwaddr(info, vaddr);
>      if (hwaddr && qemu_plugin_hwaddr_is_io(hwaddr)) {
> -        g_mutex_unlock(&mtx);
>          return;
>      }
>  
>      effective_addr = hwaddr ? qemu_plugin_hwaddr_phys_addr(hwaddr) : vaddr;
>  
> +    g_mutex_lock(&mtx);
>      if (!access_cache(dcache, effective_addr)) {
>          insn = (InsnData *) userdata;
>          insn->dmisses++;

This is fine, but I see an exit leg creeps in later which I think we can
eliminate. I'll comment there:

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

-- 
Alex Bennée



* Re: [PATCH 3/6] plugins/cache: Fixed a use-after-free bug with multithreaded usermode
From: Alex Bennée @ 2021-07-19  9:45 UTC
  To: Mahmoud Mandour; +Cc: cota, qemu-devel


Mahmoud Mandour <ma.mandourr@gmail.com> writes:

> Since callbacks may be interleaved because of multithreaded execution,
> we should not make assumptions about `plugin_exit` either. The problem
> with `plugin_exit` is that it frees shared data structures (the caches
> and the `miss_ht` hash table). It must not be assumed that no callback
> runs after it; any that do would use already-freed data structures.

What was your test case for this? I wonder if it would be worth coming up
with one for check-tcg. From what I remember, the race is between
preexit_cleanup and the eventual _exit/exit_group, which nixes all other
child threads. Maybe we should be triggering a more graceful unload here?


-- 
Alex Bennée



* Re: [PATCH 3/6] plugins/cache: Fixed a use-after-free bug with multithreaded usermode
From: Mahmoud Mandour @ 2021-07-19 10:46 UTC
  To: Alex Bennée; +Cc: Emilio G. Cota, open list:All patches CC here


On Mon, Jul 19, 2021 at 11:48 AM Alex Bennée <alex.bennee@linaro.org> wrote:

>
> Mahmoud Mandour <ma.mandourr@gmail.com> writes:
>
> > Since callbacks may be interleaved because of multithreaded execution,
> > we should not make assumptions about `plugin_exit` either. The problem
> > with `plugin_exit` is that it frees shared data structures (the caches
> > and the `miss_ht` hash table). It must not be assumed that no callback
> > runs after it; any that do would use already-freed data structures.
>
> What was your test case for this because I wonder if it would be worth
> coming up with one for check-tcg?


I think just about any ad-hoc multithreaded execution will evoke the race
pretty much consistently.


> From what I remember the race is
> in between preexit_cleanup and the eventual _exit/exit_group which nixes
> all other child threads. Maybe we should be triggering a more graceful
> unload here?
>

I think so. This remedies the bug for this particular plugin, though there
would be a better solution of course. However, I just can't ever get the
plugin_exit callback to be called more than once, so I think that's
probably not the problem?

The problem is that we *use* the data in the translation/mem_access/exec
callbacks after plugin_exit has already been called (this can easily be
verified by setting a boolean to true once plugin_exit is called and then
asserting with g_assert that the boolean is false in the callbacks).
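
For illustration, that check is just (a sketch, not part of the series):

    static bool exited;   /* set once plugin_exit has run */

    static void plugin_exit(qemu_plugin_id_t id, void *p)
    {
        /* ... report stats, free the caches and miss_ht ... */
        exited = true;
    }

    static void vcpu_insn_exec(unsigned int vcpu_index, void *userdata)
    {
        g_assert(!exited);   /* fires when a callback runs after exit */
        /* ... */
    }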




* Re: [PATCH 4/6] plugins/cache: Supported multicore cache modelling
From: Alex Bennée @ 2021-07-19 10:52 UTC
  To: Mahmoud Mandour; +Cc: cota, qemu-devel


Mahmoud Mandour <ma.mandourr@gmail.com> writes:

> Multicore L1 cache modelling is introduced and is supported for both
> full system emulation and linux-user.
>
> For full-system emulation, an L1 icache and dcache are maintained for each
> available core, since the number of cores is exposed to the plugin through
> `qemu_plugin_n_vcpus()`.
>
> For linux-user, a static number of cores is assumed (default 1 core, which
> can be overridden with the plugin argument `cores=N`). Every memory access
> goes through one of these caches; this approach is taken since it's
> somewhat akin to what happens on a real setup, where a program dispatching
> more threads than there are available cores will have its threads thrash
> each other's caches.
>
> Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
> ---
>  contrib/plugins/cache.c | 156 ++++++++++++++++++++++++++++++----------
>  1 file changed, 117 insertions(+), 39 deletions(-)
>
> diff --git a/contrib/plugins/cache.c b/contrib/plugins/cache.c
> index a452aba01c..60f7be208b 100644
> --- a/contrib/plugins/cache.c
> +++ b/contrib/plugins/cache.c
> @@ -23,11 +23,6 @@ static GRand *rng;
>  static int limit;
>  static bool sys;
>  
> -static uint64_t dmem_accesses;
> -static uint64_t dmisses;
> -
> -static uint64_t imem_accesses;
> -static uint64_t imisses;
>  
>  enum EvictionPolicy {
>      LRU,
> @@ -90,13 +85,22 @@ typedef struct {
>      uint64_t imisses;
>  } InsnData;
>  
> +typedef struct {
> +    uint64_t dmem_accesses;
> +    uint64_t dmisses;
> +    uint64_t imem_accesses;
> +    uint64_t imisses;
> +} CoreStats;
> +
>  void (*update_hit)(Cache *cache, int set, int blk);
>  void (*update_miss)(Cache *cache, int set, int blk);
>  
>  void (*metadata_init)(Cache *cache);
>  void (*metadata_destroy)(Cache *cache);
>  
> -Cache *dcache, *icache;
> +static int cores;
> +CoreStats *stats;
> +Cache **dcaches, **icaches;
>  
>  static int pow_of_two(int num)
>  {
> @@ -233,10 +237,6 @@ static bool bad_cache_params(int blksize, int assoc, int cachesize)
>  
>  static Cache *cache_init(int blksize, int assoc, int cachesize)
>  {
> -    if (bad_cache_params(blksize, assoc, cachesize)) {
> -        return NULL;
> -    }
> -
>      Cache *cache;
>      int i;
>      uint64_t blk_mask;
> @@ -263,6 +263,24 @@ static Cache *cache_init(int blksize, int assoc, int cachesize)
>      return cache;
>  }
>  
> +static Cache **caches_init(int blksize, int assoc, int cachesize)
> +{
> +    Cache **caches;
> +    int i;
> +
> +    if (bad_cache_params(blksize, assoc, cachesize)) {
> +        return NULL;
> +    }
> +
> +    caches = g_new(Cache *, cores);
> +
> +    for (i = 0; i < cores; i++) {
> +        caches[i] = cache_init(blksize, assoc, cachesize);
> +    }
> +
> +    return caches;
> +}
> +
>  static int get_invalid_block(Cache *cache, uint64_t set)
>  {
>      int i;
> @@ -353,6 +371,7 @@ static void vcpu_mem_access(unsigned int vcpu_index, qemu_plugin_meminfo_t info,
>  {
>      uint64_t effective_addr;
>      struct qemu_plugin_hwaddr *hwaddr;
> +    int cache_idx;
>      InsnData *insn;
>  
>      hwaddr = qemu_plugin_get_hwaddr(info, vaddr);
> @@ -363,17 +382,24 @@ static void vcpu_mem_access(unsigned int vcpu_index, qemu_plugin_meminfo_t info,
>      effective_addr = hwaddr ? qemu_plugin_hwaddr_phys_addr(hwaddr) : vaddr;
>  
>      g_mutex_lock(&mtx);
> -    if (dcache == NULL) {
> +    cache_idx = vcpu_index % cores;
> +    if (dcaches[cache_idx] == NULL) {
>          g_mutex_unlock(&mtx);
>          return;
>      }

I was hoping we could avoid this with some careful atomic/memory-barrier
use, given that dcaches is set up on init. However, it's the exit case that
gets us, which further increases my desire to fix the exit race in
linux-user and make sure callbacks don't occur in this case.
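
For illustration, the lock-free shape alluded to might look like this (a
sketch using GLib atomics; as noted, it still wouldn't close the exit race
on its own):

    /* read the per-core cache pointer without taking the mutex;
     * plugin_exit would publish NULL via g_atomic_pointer_set() */
    Cache *dc = g_atomic_pointer_get(&dcaches[vcpu_index % cores]);
    if (dc == NULL) {
        return;   /* plugin_exit already ran */
    }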

>  
> -    if (!access_cache(dcache, effective_addr)) {
> +    if (!access_cache(dcaches[cache_idx], effective_addr)) {
>          insn = (InsnData *) userdata;
>          insn->dmisses++;
> -        dmisses++;
> +        if (cores > 1) {
> +            stats[cores].dmisses++;
> +        }
> +        stats[cache_idx].dmisses++;
> +    }
> +    if (cores > 1) {
> +        stats[cores].dmem_accesses++;
>      }
> -    dmem_accesses++;
> +    stats[cache_idx].dmem_accesses++;
>      g_mutex_unlock(&mtx);
>  }
>  
> @@ -381,21 +407,29 @@ static void vcpu_insn_exec(unsigned int vcpu_index, void *userdata)
>  {
>      uint64_t insn_addr;
>      InsnData *insn;
> +    uint64_t cache_idx;
>  
>      g_mutex_lock(&mtx);
>      insn_addr = ((InsnData *) userdata)->addr;
>  
> -    if (icache == NULL) {
> +    cache_idx = vcpu_index % cores;
> +    if (icaches[cache_idx] == NULL) {
>          g_mutex_unlock(&mtx);
>          return;
>      }
>  
> -    if (!access_cache(icache, insn_addr)) {
> +    if (!access_cache(icaches[cache_idx], insn_addr)) {
>          insn = (InsnData *) userdata;
>          insn->imisses++;
> -        imisses++;
> +        if (cores > 1) {
> +            stats[cores].imisses++;
> +        }
> +        stats[cache_idx].imisses++;
>      }
> -    imem_accesses++;
> +    if (cores > 1) {
> +        stats[cores].imem_accesses++;
> +    }

I'd much rather we avoid summing the stats during execution and instead
add things up simply at the end.
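
For illustration, summing at report time is just (a sketch, not the
committed code):

    CoreStats sum = { 0 };
    for (int i = 0; i < cores; i++) {
        sum.dmem_accesses += stats[i].dmem_accesses;
        sum.dmisses       += stats[i].dmisses;
        sum.imem_accesses += stats[i].imem_accesses;
        sum.imisses       += stats[i].imisses;
    }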

> +    stats[cache_idx].imem_accesses++;
>      g_mutex_unlock(&mtx);
>  }
>  
> @@ -475,6 +509,22 @@ static void cache_free(Cache *cache)
>      g_free(cache);
>  }
>  
> +/*
> + * caches_free(): free an array of Cache structs.
> + * @caches: caches to free
> + *
> + * Calling cache_free for each cache in @caches, and then NULL-ify them so that
> + * we mark them as freed, other async callbacks can check to see whether a cache
> + * is freed already or not by checking against NULL.
> + */
> +static void caches_free(Cache **caches)
> +{
> +    for (int i = 0; i < cores; i++) {
> +        cache_free(caches[i]);
> +        caches[i] = NULL;
> +    }
> +}
> +
>  static int dcmp(gconstpointer a, gconstpointer b)
>  {
>      InsnData *insn_a = (InsnData *) a;
> @@ -493,19 +543,38 @@ static int icmp(gconstpointer a, gconstpointer b)
>  
>  static void log_stats()
>  {
> -    g_autoptr(GString) rep = g_string_new("");
> -    g_string_append_printf(rep,
> -        "Data accesses: %lu, Misses: %lu\nMiss rate: %lf%%\n\n",
> -        dmem_accesses,
> -        dmisses,
> -        ((double) dmisses / (double) dmem_accesses) * 100.0);
> -
> -    g_string_append_printf(rep,
> -        "Instruction accesses: %lu, Misses: %lu\nMiss rate: %lf%%\n\n",
> -        imem_accesses,
> -        imisses,
> -        ((double) imisses / (double) imem_accesses) * 100.0);
> +    int i, iters;
> +    CoreStats cs;
> +    double dmiss_rate, imiss_rate;
> +
> +    g_autoptr(GString) rep = g_string_new("core #, data accesses, data misses,"
> +                                          " dmiss rate, insn accesses,"
> +                                          " insn misses, imiss rate\n");
> +
> +    /* Only iterate and print a sum row if cores > 1 */
> +    iters = cores == 1 ? 1 : cores + 1;
> +    for (i = 0; i < iters; i++) {
> +        cs = stats[i];
> +        dmiss_rate = ((double) cs.dmisses) / (cs.dmem_accesses) * 100.0;
> +        imiss_rate = ((double) cs.imisses) / (cs.imem_accesses) * 100.0;
> +
> +        if (i == cores) {
> +            g_string_append_printf(rep, "%-8s", "sum");
> +        } else {
> +            g_string_append_printf(rep, "%-8d", i);
> +        }
> +
> +        g_string_append_printf(rep, "%-14lu %-12lu %9.4lf%%  %-14lu %-12lu"
> +                               " %9.4lf%%\n",
> +                               cs.dmem_accesses,
> +                               cs.dmisses,
> +                               cs.dmem_accesses ? dmiss_rate : 0.0,
> +                               cs.imem_accesses,
> +                               cs.imisses,
> +                               cs.imem_accesses ? imiss_rate : 0.0);
> +    }
>  
> +    g_string_append(rep, "\n");
>      qemu_plugin_outs(rep->str);
>  }
>  
> @@ -553,15 +622,14 @@ static void plugin_exit(qemu_plugin_id_t id, void *p)
>      log_stats();
>      log_top_insns();
>  
> -    cache_free(dcache);
> -    dcache = NULL;
> -
> -    cache_free(icache);
> -    icache = NULL;
> +    caches_free(dcaches);
> +    caches_free(icaches);
>  
>      g_hash_table_destroy(miss_ht);
>      miss_ht = NULL;
>  
> +    g_free(stats);
> +
>      g_mutex_unlock(&mtx);
>  }
>  
> @@ -608,6 +676,8 @@ int qemu_plugin_install(qemu_plugin_id_t id, const qemu_info_t *info,
>  
>      policy = LRU;
>  
> +    cores = sys ? qemu_plugin_n_vcpus() : 1;
> +
>      for (i = 0; i < argc; i++) {
>          char *opt = argv[i];
>          if (g_str_has_prefix(opt, "iblksize=")) {
> @@ -624,6 +694,8 @@ int qemu_plugin_install(qemu_plugin_id_t id, const qemu_info_t *info,
>              dcachesize = g_ascii_strtoll(opt + 11, NULL, 10);
>          } else if (g_str_has_prefix(opt, "limit=")) {
>              limit = g_ascii_strtoll(opt + 6, NULL, 10);
> +        } else if (g_str_has_prefix(opt, "cores=")) {
> +            cores = g_ascii_strtoll(opt + 6, NULL, 10);
>          } else if (g_str_has_prefix(opt, "evict=")) {
>              gchar *p = opt + 6;
>              if (g_strcmp0(p, "rand") == 0) {
> @@ -644,22 +716,28 @@ int qemu_plugin_install(qemu_plugin_id_t id, const qemu_info_t *info,
>  
>      policy_init();
>  
> -    dcache = cache_init(dblksize, dassoc, dcachesize);
> -    if (!dcache) {
> +    dcaches = caches_init(dblksize, dassoc, dcachesize);
> +    if (!dcaches) {
>          const char *err = cache_config_error(dblksize, dassoc, dcachesize);
>          fprintf(stderr, "dcache cannot be constructed from given parameters\n");
>          fprintf(stderr, "%s\n", err);
>          return -1;
>      }
>  
> -    icache = cache_init(iblksize, iassoc, icachesize);
> -    if (!icache) {
> +    icaches = caches_init(iblksize, iassoc, icachesize);
> +    if (!icaches) {
>          const char *err = cache_config_error(iblksize, iassoc, icachesize);
>          fprintf(stderr, "icache cannot be constructed from given parameters\n");
>          fprintf(stderr, "%s\n", err);
>          return -1;
>      }
>  
> +    /*
> +     * plus one to save the sum in. If only one core is used then no need to
> +     * get an auxiliary struct.
> +     */
> +    stats = g_new0(CoreStats, cores == 1 ? 1 : cores + 1);
> +

See above, lets keep the sum as a post processing step.

>      qemu_plugin_register_vcpu_tb_trans_cb(id, vcpu_tb_trans);
>      qemu_plugin_register_atexit_cb(id, plugin_exit, NULL);


-- 
Alex Bennée



* Re: [PATCH 3/6] plugins/cache: Fixed a use-after-free bug with multithreaded usermode
From: Alex Bennée @ 2021-07-19 11:06 UTC
  To: Mahmoud Mandour; +Cc: Emilio G. Cota, open list:All patches CC here


Mahmoud Mandour <ma.mandourr@gmail.com> writes:

> On Mon, Jul 19, 2021 at 11:48 AM Alex Bennée <alex.bennee@linaro.org> wrote:
>
>  Mahmoud Mandour <ma.mandourr@gmail.com> writes:
>
>  > Since callbacks may be interleaved because of multithreaded execution,
>  > we should not make assumptions about `plugin_exit` either. The problem
>  > with `plugin_exit` is that it frees shared data structures (the caches
>  > and the `miss_ht` hash table). It must not be assumed that no callback
>  > runs after it; any that do would use already-freed data structures.
>
>  What was your test case for this because I wonder if it would be worth
>  coming up with one for check-tcg? 
>
> I think just any ad-hoc multithreaded execution will evoke the race pretty much 
> consistently.

I haven't managed to trigger it with testthread but of course my
libcache is trying to defend against it.

>  
>  From what I remember the race is
>  in between preexit_cleanup and the eventual _exit/exit_group which nixes
>  all other child threads. Maybe we should be triggering a more graceful
>  unload here?
>
> I think so. This remedies the bug for this particular plugin and I think there
> would be a better solution of course. However, I just can't ever get plugin_exit
> callback to be called more than once so I think that's probably not the problem?
>
> The problem is that we *use* the data in translation/mem_access/exec callbacks
> after a plugin_exit call is already called (this can be easily verified by having a 
> boolean set to true once plugin_exit is called and then g_assert this boolean is 
> false in the callbacks)

We have mechanisms for safely unloading plugins while they are running,
so I think we should be able to do something clean here. I'll cook up an
RFC.



-- 
Alex Bennée



* Re: [PATCH 3/6] plugins/cache: Fixed a use-after-free bug with multithreaded usermode
From: Mahmoud Mandour @ 2021-07-19 11:28 UTC
  To: Alex Bennée; +Cc: Emilio G. Cota, open list:All patches CC here


On Mon, Jul 19, 2021 at 1:08 PM Alex Bennée <alex.bennee@linaro.org> wrote:

>
> Mahmoud Mandour <ma.mandourr@gmail.com> writes:
>
> > On Mon, Jul 19, 2021 at 11:48 AM Alex Bennée <alex.bennee@linaro.org>
> wrote:
> >
> >  Mahmoud Mandour <ma.mandourr@gmail.com> writes:
> >
> >  > Since callbacks may be interleaved because of multithreaded execution,
> >  > we should not make assumptions about `plugin_exit` either. The problem
> >  > with `plugin_exit` is that it frees shared data structures (the caches
> >  > and the `miss_ht` hash table). It must not be assumed that no callback
> >  > runs after it; any that do would use already-freed data structures.
> >
> >  What was your test case for this because I wonder if it would be worth
> >  coming up with one for check-tcg?
> >
> > I think just any ad-hoc multithreaded execution will evoke the race
> > pretty much consistently.
>
> I haven't managed to trigger it with testthread but of course my
> libcache is trying to defend against it.
>

https://pastebin.com/X4Xazemh
That's a minimal program that reproduces the bug consistently for me (from
my observation, a simple program that creates a bunch of threads that
immediately exit does not produce the bug as frequently).

You can also make hotblocks produce a similar crash (but make sure to add
a g_hash_table_destroy(hotblocks) at the end of its plugin_exit).
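
The pastebin contents aren't reproduced here, but going by the description
a reproducer of the same general shape (threads still doing work while the
process exits) would be along these lines:

    #include <pthread.h>

    static void *worker(void *arg)
    {
        volatile unsigned long x = 0;
        for (unsigned long i = 0; i < 1000000; i++) {
            x += i;   /* keep generating memory accesses */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t th[8];
        for (int i = 0; i < 8; i++) {
            pthread_create(&th[i], NULL, worker, NULL);
        }
        return 0;     /* exit while the workers are still running */
    }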


>
> >
> >  From what I remember the race is
> >  in between preexit_cleanup and the eventual _exit/exit_group which nixes
> >  all other child threads. Maybe we should be triggering a more graceful
> >  unload here?
> >
> > I think so. This remedies the bug for this particular plugin and I think
> > there would be a better solution of course. However, I just can't ever
> > get plugin_exit callback to be called more than once so I think that's
> > probably not the problem?
> >
> > The problem is that we *use* the data in translation/mem_access/exec
> > callbacks after a plugin_exit call is already called (this can be easily
> > verified by having a boolean set to true once plugin_exit is called and
> > then g_assert this boolean is false in the callbacks)
>
> We have mechanisms for safely unloading plugins during running so I
> think we should be able to do something cleanly here. I'll cook up an
> RFC.
>

I'll be waiting for it. Note that, as I mentioned in the cover letter,
plugin_uninstall is probably problematic since it unregisters callbacks
but already-fired callbacks will continue executing. So calling
plugin_uninstall at the end of plugin_exit does not relieve the bug...


>
> >
> >  > This is mitigated in this commit by synchronizing the call to
> >  > `plugin_exit` through locking to ensure execlusive access to data
> >  > structures, and NULL-ifying those data structures so that subsequent
> >  > callbacks can check whether the data strucutres are freed, and if so,
> >  > immediately exit.
> >  >
> >  > It's okay to immediately exit and don't account for those callbacks
> >  > since they won't be accounted for anyway since `plugin_exit` is
> already
> >  > called once and reported the statistics.
> >  >
> >  > Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
> >  > ---
> >  >  contrib/plugins/cache.c | 31 ++++++++++++++++++++++++++++++-
> >  >  1 file changed, 30 insertions(+), 1 deletion(-)
> >  >
> >  > diff --git a/contrib/plugins/cache.c b/contrib/plugins/cache.c
> >  > index 695fb969dc..a452aba01c 100644
> >  > --- a/contrib/plugins/cache.c
> >  > +++ b/contrib/plugins/cache.c
> >  > @@ -363,6 +363,11 @@ static void vcpu_mem_access(unsigned int
> vcpu_index, qemu_plugin_meminfo_t info,
> >  >      effective_addr = hwaddr ? qemu_plugin_hwaddr_phys_addr(hwaddr) :
> vaddr;
> >  >
> >  >      g_mutex_lock(&mtx);
> >  > +    if (dcache == NULL) {
> >  > +        g_mutex_unlock(&mtx);
> >  > +        return;
> >  > +    }
> >  > +
> >  >      if (!access_cache(dcache, effective_addr)) {
> >  >          insn = (InsnData *) userdata;
> >  >          insn->dmisses++;
> >  > @@ -380,6 +385,11 @@ static void vcpu_insn_exec(unsigned int vcpu_index, void *userdata)
> >  >      g_mutex_lock(&mtx);
> >  >      insn_addr = ((InsnData *) userdata)->addr;
> >  >
> >  > +    if (icache == NULL) {
> >  > +        g_mutex_unlock(&mtx);
> >  > +        return;
> >  > +    }
> >  > +
> >  >      if (!access_cache(icache, insn_addr)) {
> >  >          insn = (InsnData *) userdata;
> >  >          insn->imisses++;
> >  > @@ -406,12 +416,24 @@ static void vcpu_tb_trans(qemu_plugin_id_t id, struct qemu_plugin_tb *tb)
> >  >              effective_addr = (uint64_t) qemu_plugin_insn_vaddr(insn);
> >  >          }
> >  >
> >  > +        g_mutex_lock(&mtx);
> >  > +
> >  > +        /*
> >  > +         * is the plugin_exit callback called? If so, any further callback
> >  > +         * registration is useless as it won't get accounted for after calling
> >  > +         * plugin_exit once already, and also will use miss_ht after it's freed
> >  > +         */
> >  > +        if (miss_ht == NULL) {
> >  > +            g_mutex_unlock(&mtx);
> >  > +            return;
> >  > +        }
> >  > +
> >  >          /*
> >  >           * Instructions might get translated multiple times, we do not create
> >  >           * new entries for those instructions. Instead, we fetch the same
> >  >           * entry from the hash table and register it for the callback again.
> >  >           */
> >  > -        g_mutex_lock(&mtx);
> >  > +
> >  >          data = g_hash_table_lookup(miss_ht, GUINT_TO_POINTER(effective_addr));
> >  >          if (data == NULL) {
> >  >              data = g_new0(InsnData, 1);
> >  > @@ -527,13 +549,20 @@ static void log_top_insns()
> >  >
> >  >  static void plugin_exit(qemu_plugin_id_t id, void *p)
> >  >  {
> >  > +    g_mutex_lock(&mtx);
> >  >      log_stats();
> >  >      log_top_insns();
> >  >
> >  >      cache_free(dcache);
> >  > +    dcache = NULL;
> >  > +
> >  >      cache_free(icache);
> >  > +    icache = NULL;
> >  >
> >  >      g_hash_table_destroy(miss_ht);
> >  > +    miss_ht = NULL;
> >  > +
> >  > +    g_mutex_unlock(&mtx);
> >  >  }
> >  >
> >  >  static void policy_init()
> >
> >  --
> >  Alex Bennée
>
>
> --
> Alex Bennée
>


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 6/6] plugins/cache: Fixed "function decl. is not a prototype" warnings
  2021-07-14 17:21 ` [PATCH 6/6] plugins/cache: Fixed "function decl. is not a prototype" warnings Mahmoud Mandour
@ 2021-07-19 12:38   ` Alex Bennée
  0 siblings, 0 replies; 17+ messages in thread
From: Alex Bennée @ 2021-07-19 12:38 UTC (permalink / raw)
  To: Mahmoud Mandour; +Cc: cota, qemu-devel


Mahmoud Mandour <ma.mandourr@gmail.com> writes:

> Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
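
(For the archive: these are -Wstrict-prototypes warnings - in C, an empty
parameter list is not a prototype. A sketch of the class of fix, since the
hunks are not quoted here:)

  static void policy_init();      /* before: warns, not a prototype */
  static void policy_init(void);  /* after: a proper prototype */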

-- 
Alex Bennée


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 3/6] plugins/cache: Fixed a use-after-free bug with multithreaded usermode
  2021-07-19 11:28         ` Mahmoud Mandour
@ 2021-07-19 12:48           ` Alex Bennée
  0 siblings, 0 replies; 17+ messages in thread
From: Alex Bennée @ 2021-07-19 12:48 UTC (permalink / raw)
  To: Mahmoud Mandour; +Cc: Emilio G. Cota, open list:All patches CC here


Mahmoud Mandour <ma.mandourr@gmail.com> writes:

> On Mon, Jul 19, 2021 at 1:08 PM Alex Bennée <alex.bennee@linaro.org> wrote:
>
>  Mahmoud Mandour <ma.mandourr@gmail.com> writes:
>
>  > On Mon, Jul 19, 2021 at 11:48 AM Alex Bennée <alex.bennee@linaro.org> wrote:
>  >
>  >  Mahmoud Mandour <ma.mandourr@gmail.com> writes:
>  >
>  >  > Since callbacks may be interleaved because of multithreaded execution,
>  >  > we should not make assumptions about `plugin_exit` either. The problem
>  >  > with `plugin_exit` is that it frees shared data structures (caches and
>  >  > `miss_ht` hash table). It should not be assumed that callbacks will not
>  >  > be called after it and hence use already-freed data structures.
>  >
>  >  What was your test case for this because I wonder if it would be worth
>  >  coming up with one for check-tcg? 
>  >
>  > I think just any ad-hoc multithreaded execution will evoke the race pretty much 
>  > consistently.
>
>  I haven't managed to trigger it with testthread but of course my
>  libcache is trying to defend against it.
>
> https://pastebin.com/X4Xazemh
> That's a minimal program that reproduces the bug consistently for me (in my observation, a simple
> program that creates a bunch of threads that immediately exit does not produce the bug as frequently).
>
> You can also make hotblocks produce a similar crash (but make sure to add a g_hash_table_destroy(hotblocks)
> at the end of plugin_exit).
>

Thanks, I'll give that a try.

>  
>  >  
>  >  From what I remember the race is
>  >  in between preexit_cleanup and the eventual _exit/exit_group which nixes
>  >  all other child threads. Maybe we should be triggering a more graceful
>  >  unload here?
>  >
>  > I think so. This remedies the bug for this particular plugin and I think there
>  > would be a better solution of course. However, I just can't ever get the plugin_exit
>  > callback to be called more than once, so I think that's probably not the problem?
>  >
>  > The problem is that we *use* the data in translation/mem_access/exec callbacks
>  > after plugin_exit has already been called (this can be easily verified by having a
>  > boolean set to true once plugin_exit is called and then g_assert-ing that this
>  > boolean is false in the callbacks).
>
>  We have mechanisms for safely unloading plugins during running so I
>  think we should be able to do something cleanly here. I'll cook up an
>  RFC.
>
> I'll be waiting for it. Note that, as I mentioned in the cover letter, plugin_uninstall
> is probably problematic since it unregisters callbacks but already-fired callbacks will continue executing.
> So calling plugin_uninstall at the end of plugin_exit does not fix
> the bug...

That's OK - the plugin_uninstall contract explicitly says that plugin
callbacks may still occur until the uninstall-complete callback is called.
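
(For reference, the asynchronous uninstall interface in question - a usage
sketch; the "uninstall_done" name is made up:)

  /* qemu_plugin_uninstall() returns immediately; plugin callbacks can
   * keep firing until the completion callback below is invoked.
   */
  static void uninstall_done(qemu_plugin_id_t id)
  {
      /* only from here on are no further plugin callbacks guaranteed */
  }

  /* e.g. at the end of plugin_exit: */
  qemu_plugin_uninstall(id, uninstall_done);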

>  
>  
>  >
>  >  > This is mitigated in this commit by synchronizing the call to
>  >  > `plugin_exit` through locking to ensure exclusive access to data
>  >  > structures, and NULL-ifying those data structures so that subsequent
>  >  > callbacks can check whether the data structures are freed, and if so,
>  >  > immediately exit.
>  >  >
>  >  > It's okay to immediately exit and not account for those callbacks,
>  >  > since they won't be accounted for anyway: `plugin_exit` has already
>  >  > been called once and reported the statistics.
>  >  >
>  >  > Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
>  >  > ---
>  >  >  contrib/plugins/cache.c | 31 ++++++++++++++++++++++++++++++-
>  >  >  1 file changed, 30 insertions(+), 1 deletion(-)
>  >  >
>  >  > diff --git a/contrib/plugins/cache.c b/contrib/plugins/cache.c
>  >  > index 695fb969dc..a452aba01c 100644
>  >  > --- a/contrib/plugins/cache.c
>  >  > +++ b/contrib/plugins/cache.c
>  >  > @@ -363,6 +363,11 @@ static void vcpu_mem_access(unsigned int vcpu_index, qemu_plugin_meminfo_t info,
>  >  >      effective_addr = hwaddr ? qemu_plugin_hwaddr_phys_addr(hwaddr) : vaddr;
>  >  >  
>  >  >      g_mutex_lock(&mtx);
>  >  > +    if (dcache == NULL) {
>  >  > +        g_mutex_unlock(&mtx);
>  >  > +        return;
>  >  > +    }
>  >  > +
>  >  >      if (!access_cache(dcache, effective_addr)) {
>  >  >          insn = (InsnData *) userdata;
>  >  >          insn->dmisses++;
>  >  > @@ -380,6 +385,11 @@ static void vcpu_insn_exec(unsigned int vcpu_index, void *userdata)
>  >  >      g_mutex_lock(&mtx);
>  >  >      insn_addr = ((InsnData *) userdata)->addr;
>  >  >  
>  >  > +    if (icache == NULL) {
>  >  > +        g_mutex_unlock(&mtx);
>  >  > +        return;
>  >  > +    }
>  >  > +
>  >  >      if (!access_cache(icache, insn_addr)) {
>  >  >          insn = (InsnData *) userdata;
>  >  >          insn->imisses++;
>  >  > @@ -406,12 +416,24 @@ static void vcpu_tb_trans(qemu_plugin_id_t id, struct qemu_plugin_tb *tb)
>  >  >              effective_addr = (uint64_t) qemu_plugin_insn_vaddr(insn);
>  >  >          }
>  >  >  
>  >  > +        g_mutex_lock(&mtx);
>  >  > +
>  >  > +        /*
>  >  > +         * is the plugin_exit callback called? If so, any further callback
>  >  > +         * registration is useless as it won't get accounted for after calling
>  >  > +         * plugin_exit once already, and also will use miss_ht after it's freed
>  >  > +         */
>  >  > +        if (miss_ht == NULL) {
>  >  > +            g_mutex_unlock(&mtx);
>  >  > +            return;
>  >  > +        }
>  >  > +
>  >  >          /*
>  >  >           * Instructions might get translated multiple times, we do not create
>  >  >           * new entries for those instructions. Instead, we fetch the same
>  >  >           * entry from the hash table and register it for the callback again.
>  >  >           */
>  >  > -        g_mutex_lock(&mtx);
>  >  > +
>  >  >          data = g_hash_table_lookup(miss_ht, GUINT_TO_POINTER(effective_addr));
>  >  >          if (data == NULL) {
>  >  >              data = g_new0(InsnData, 1);
>  >  > @@ -527,13 +549,20 @@ static void log_top_insns()
>  >  >  
>  >  >  static void plugin_exit(qemu_plugin_id_t id, void *p)
>  >  >  {
>  >  > +    g_mutex_lock(&mtx);
>  >  >      log_stats();
>  >  >      log_top_insns();
>  >  >  
>  >  >      cache_free(dcache);
>  >  > +    dcache = NULL;
>  >  > +
>  >  >      cache_free(icache);
>  >  > +    icache = NULL;
>  >  >  
>  >  >      g_hash_table_destroy(miss_ht);
>  >  > +    miss_ht = NULL;
>  >  > +
>  >  > +    g_mutex_unlock(&mtx);
>  >  >  }
>  >  >  
>  >  >  static void policy_init()
>  >
>  >  -- 
>  >  Alex Bennée
>
>  -- 
>  Alex Bennée


-- 
Alex Bennée


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 0/6] plugins/cache: multicore cache emulation and minor
  2021-07-14 17:21 [PATCH 0/6] plugins/cache: multicore cache emulation and minor Mahmoud Mandour
                   ` (5 preceding siblings ...)
  2021-07-14 17:21 ` [PATCH 6/6] plugins/cache: Fixed "function decl. is not a prototype" warnings Mahmoud Mandour
@ 2021-07-20 12:46 ` Alex Bennée
  6 siblings, 0 replies; 17+ messages in thread
From: Alex Bennée @ 2021-07-20 12:46 UTC (permalink / raw)
  To: Mahmoud Mandour; +Cc: cota, qemu-devel


Mahmoud Mandour <ma.mandourr@gmail.com> writes:

> Hello,
>
> This series introduces some minor improvements/bug fixes in the cache
> plugin and multicore cache modelling.

Queued patches 1, 2 and 6 to for-6.1/fixes-for-rc1 as they are bug fixes, thanks.

-- 
Alex Bennée


^ permalink raw reply	[flat|nested] 17+ messages in thread
