From: Jarmo Tiitto <jarmo.tiitto@gmail.com>
To: samitolvanen@google.com
Cc: wcw@google.com, nathan@kernel.org, ndesaulniers@google.com,
	linux-kernel@vger.kernel.org, clang-built-linux@googlegroups.com,
	Jarmo Tiitto <jarmo.tiitto@gmail.com>
Subject: [PATCH 4/6] pgo: modules: Enable __llvm_profile_instrument_target() for modules
Date: Fri, 28 May 2021 23:10:06 +0300	[thread overview]
Message-ID: <20210528201006.459292-1-jarmo.tiitto@gmail.com> (raw)

Enable allocate_node() for modules.

Before this patch __llvm_profile_instrument_target() was called for all
instrumented code, including modules, but module profiling was
effectively disabled: allocate_node() returned NULL whenever the
llvm_prf_data instance did not point into the vmlinux core section.

Handle profiling data that originates from modules by iterating
prf_mod_list and checking which module's prf_data section the
llvm_prf_data instance points into. If a matching module is found, the
node is allocated from that module's prf_vnds section, and each module
keeps its own current_node index. The list iteration is protected by
RCU to avoid taking an extra mutex.

Signed-off-by: Jarmo Tiitto <jarmo.tiitto@gmail.com>
---
 kernel/pgo/instrument.c | 61 +++++++++++++++++++++++++++++++++--------
 1 file changed, 49 insertions(+), 12 deletions(-)
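
Note for reviewers: below is a minimal userspace sketch of the
allocation policy implemented by this patch, a simplified
allocate_node(). It is not kernel code; prf_mod_list, RCU and the
llvm_prf_* types are stood in for by plain arrays and a hypothetical
"struct region", so only the pointer-bounds lookup and the per-module
current_node counter are modelled.

#include <stddef.h>
#include <stdio.h>

/* Stand-in for llvm_prf_value_node. */
struct node { unsigned long value; };

/* Stand-in for one prf_data/prf_vnds pair (vmlinux core or one module). */
struct region {
	const char *name;
	const int *data_start, *data_end;	/* prf_data section bounds */
	struct node *vnds;			/* prf_vnds node array */
	size_t vnds_size;			/* number of nodes in vnds */
	size_t current_node;			/* per-region allocation index */
};

/* Find the region whose data section contains p, then hand out its next node. */
static struct node *allocate_node(struct region *regions, size_t nregions,
				  const int *p)
{
	size_t i;

	for (i = 0; i < nregions; i++) {
		struct region *r = &regions[i];

		if (!(r->data_start <= p && p < r->data_end))
			continue;	/* p is not in this region's data section */

		if (r->current_node + 1 >= r->vnds_size)
			return NULL;	/* out of nodes in this region */

		r->current_node++;
		return &r->vnds[r->current_node];
	}
	return NULL;	/* p does not belong to any known region */
}

int main(void)
{
	static int core_data[4], mod_data[2];
	static struct node core_vnds[8], mod_vnds[4];
	struct region regions[] = {
		{ "vmlinux", core_data, core_data + 4, core_vnds, 8, 0 },
		{ "module",  mod_data,  mod_data + 2,  mod_vnds,  4, 0 },
	};

	/* A counter that lives in the module's data section allocates from mod_vnds. */
	struct node *n = allocate_node(regions,
				       sizeof(regions) / sizeof(regions[0]),
				       &mod_data[1]);

	printf("allocated %s node\n", n ? "a module" : "no");
	return 0;
}

In the patch itself the same lookup runs between rcu_read_lock() and
rcu_read_unlock() over prf_mod_list, so the instrumentation path needs
no extra mutex, as noted in the commit message.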

diff --git a/kernel/pgo/instrument.c b/kernel/pgo/instrument.c
index 98cfa11a7b76..a95c86d668b5 100644
--- a/kernel/pgo/instrument.c
+++ b/kernel/pgo/instrument.c
@@ -31,7 +31,7 @@
  * ensures that we don't try to serialize data that's only partially updated.
  */
 static DEFINE_SPINLOCK(pgo_lock);
-static int current_node;
+static int current_node = 0;
 
 unsigned long prf_lock(void)
 {
@@ -55,17 +55,54 @@ void prf_unlock(unsigned long flags)
 static struct llvm_prf_value_node *allocate_node(struct llvm_prf_data *p,
 						 u32 index, u64 value)
 {
-	if (&__llvm_prf_vnds_start[current_node + 1] >= __llvm_prf_vnds_end)
-		return NULL; /* Out of nodes */
-
-	current_node++;
-
-	/* Make sure the node is entirely within the section */
-	if (&__llvm_prf_vnds_start[current_node] >= __llvm_prf_vnds_end ||
-	    &__llvm_prf_vnds_start[current_node + 1] > __llvm_prf_vnds_end)
-		return NULL;
-
-	return &__llvm_prf_vnds_start[current_node];
+	struct prf_mod_private_data *pmod;
+	struct llvm_prf_data *start = __llvm_prf_data_start;
+	struct llvm_prf_data *end = __llvm_prf_data_end;
+	struct module *mod;
+	struct llvm_prf_value_node *vnds = __llvm_prf_vnds_start;
+	struct llvm_prf_value_node *vnds_end = __llvm_prf_vnds_end;
+
+	if (start <= p && p < end) {
+		/* vmlinux core node */
+		if (&vnds[current_node + 1] >= vnds_end)
+			return NULL; /* Out of nodes */
+
+		current_node++;
+
+		/* Make sure the node is entirely within the section */
+		if (&vnds[current_node] >= vnds_end ||
+		    &vnds[current_node + 1] > vnds_end)
+			return NULL;
+
+		return &vnds[current_node];
+	} else {
+		/*
+		 * Possibly a module node: find which module's prf_data
+		 * section p points into and allocate from that module.
+		 */
+		rcu_read_lock();
+		list_for_each_entry_rcu(pmod, &prf_mod_list, link) {
+			mod = READ_ONCE(pmod->mod);
+			if (mod) {
+				/* Get the module's section bounds. */
+				start = mod->prf_data;
+				end = mod->prf_data + mod->prf_data_size;
+				if (start <= p && p < end) {
+					vnds = mod->prf_vnds;
+					vnds_end = mod->prf_vnds + mod->prf_vnds_size;
+					if (&vnds[pmod->current_node + 1] < vnds_end) {
+						pmod->current_node++;
+
+						vnds = &vnds[pmod->current_node];
+						rcu_read_unlock();
+						return vnds;
+					}
+				}
+			}
+		}
+		rcu_read_unlock();
+		return NULL; /* Out of nodes */
+	}
 }
 
 /*
-- 
2.31.1

