From: Bill Wendling <morbo@google.com>
To: Yonghong Song <yhs@fb.com>
Cc: dwarves@vger.kernel.org, bpf <bpf@vger.kernel.org>,
	Arnaldo Carvalho de Melo <arnaldo.melo@gmail.com>
Subject: Re: [RFC 0/1] Combining CUs into a single hash table
Date: Sun, 14 Mar 2021 00:28:44 -0800
Message-ID: <CAGG=3QUYzMNBwoOY9q739wKDVzuevZSjC=KPBdrQW9fXRCnvjQ@mail.gmail.com>
In-Reply-To: <86bcb5c4-b3c8-e41f-96ec-800caf57f585@fb.com>


On Sat, Mar 13, 2021 at 11:05 PM Yonghong Song <yhs@fb.com> wrote:
> On 2/23/21 12:44 PM, Bill Wendling wrote:
> > Bump for exposure.
> >
> > On Fri, Feb 12, 2021 at 1:16 PM Bill Wendling <morbo@google.com> wrote:
> >>
> >> Hey gang,
> >>
> >> I would like your feedback on this patch.
> >>
> >> This patch creates one hash table that all CUs share. The impetus for this
> > patch is to support clang's LTO (Link-Time Optimization). Currently, pahole
> >> can't handle the DWARF data that clang produces, because the CUs may refer to
> >> tags in other CUs (all of the code having been squozen together).
>
> Hi, Bill,
>
> LTO build support is now in Linus' tree (5.12-rc2) and also merged into
> the latest bpf-next. I tried a thin-LTO build, and it works fine with the
> latest trunk llvm (llvm13) until it hits pahole, where it gets stuck
> (pahole 1.20), probably in some kind of infinite loop, since pahole is
> not yet ready to handle LTO DWARF.
>
> I then applied this patch on top of pahole master (1.20) and pahole
> segfaulted. I have not debugged it yet. Have you hit the same issue?
> How did you get pahole to work with an LTO-built kernel?
>
Hi Yonghong,

I haven't tried this much with top-of-tree Linux, so it's quite
possible that there's a segfault I haven't come across yet. Make sure
that you're using pahole v1.20, because it handles clang's penchant
for assigning some objects "null" names.
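
As a rough illustration only (not the actual pahole code), the kind of
defensive check involved looks something like this, using libdw's
dwarf_diename():

#include <elfutils/libdw.h>

/* Sketch: with clang LTO some DIEs carry no DW_AT_name, so
 * dwarf_diename() returns NULL; callers must not assume a valid
 * string.  The fallback name here is purely hypothetical. */
static const char *die_name_or_anon(Dwarf_Die *die)
{
	const char *name = dwarf_diename(die);

	return name ? name : "(anonymous)";
}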

This patch is the first step in my attempt to get pahole working with
LTO. There's a follow-up patch, attached to this email, that gets me
through the compilation. It hasn't been heavily tested or reviewed
(it's in my local tree), so caveat emptor. I would love to have people
test it to see whether it helps or just makes things worse.

Cheers!
-bw

> Thanks!
>
> Yonghong
>
> >>
> >> One solution I found is to process the CUs in two steps:
> >>
> >>    1. add the CUs into a single hash table, and
> >>    2. perform the recoding and finalization in a separate step.
> >>
> >> The issue I'm facing with this patch is that it balloons the runtime from
> >> ~11.11s to ~14.27s. It looks like the underlying cause is that some (but not
> >> all) hash buckets have thousands of entries each. I've bumped up the
> >> HASHTAGS__BITS from 15 to 16, which helped a little. Bumping it up to 17 or
> >> above causes a failure.
> >>
> >> I can think of a couple of things that may help. We could increase the number
> >> of buckets, which would improve the distribution; as I mentioned, though, that
> >> seemed to cause a failure. Another option is to store the bucket entries in
> >> something other than a list, e.g. a binary search tree.
> >>
> >> I wanted to get your opinions before I trod down one of these roads.
> >>
> >> Share and enjoy!
> >> -bw
> >>
> >> Bill Wendling (1):
> >>    dwarf_loader: have all CUs use a single hash table
> >>
> >>   dwarf_loader.c | 45 +++++++++++++++++++++++++++++++++------------
> >>   1 file changed, 33 insertions(+), 12 deletions(-)
> >>
> >> --
> >> 2.30.0.478.g8a0d178c01-goog
> >>
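
To illustrate the bucket-distribution problem quoted above, here is a
rough sketch (not pahole's actual hash code) of an open hash table with
1 << bits buckets. With every CU feeding the same table, a busy bucket
turns each lookup into a long linear walk; bumping the bit count
(e.g. 15 to 16) doubles the bucket count and roughly halves the average
chain length.

#include <stdint.h>
#include <stdlib.h>

struct entry {
	uint64_t	 key;		/* e.g. a DWARF DIE offset */
	void		*tag;
	struct entry	*next;		/* chained entries in one bucket */
};

struct table {
	unsigned int	  bits;		/* table has 1 << bits buckets */
	struct entry	**buckets;
};

static inline uint64_t hash_key(uint64_t key, unsigned int bits)
{
	/* Fibonacci hashing: multiply by a 64-bit constant and keep
	 * the top `bits` bits as the bucket index. */
	return (key * 0x9E3779B97F4A7C15ULL) >> (64 - bits);
}

static struct entry *table__find(struct table *t, uint64_t key)
{
	struct entry *e = t->buckets[hash_key(key, t->bits)];

	for (; e != NULL; e = e->next)	/* cost grows with chain length */
		if (e->key == key)
			return e;
	return NULL;
}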

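And a minimal sketch of the other idea quoted above: keeping each bucket
as a binary search tree keyed on the DIE offset instead of a linked
list. This is illustrative only and not code from pahole; a real version
would probably want a balanced tree.

#include <stddef.h>
#include <stdint.h>

struct node {
	uint64_t	 key;		/* e.g. a DWARF DIE offset */
	void		*tag;
	struct node	*left, *right;
};

/* Roughly O(log n) per bucket instead of the O(n) list walk,
 * assuming the keys don't arrive in sorted order. */
static struct node *bucket__find(struct node *root, uint64_t key)
{
	while (root != NULL) {
		if (key == root->key)
			return root;
		root = key < root->key ? root->left : root->right;
	}
	return NULL;
}

/* Returns the slot where `key` lives or should be inserted. */
static struct node **bucket__slot(struct node **rootp, uint64_t key)
{
	while (*rootp != NULL && (*rootp)->key != key)
		rootp = key < (*rootp)->key ? &(*rootp)->left
					    : &(*rootp)->right;
	return rootp;
}
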
[-- Attachment #2: pahole.patch --]
[-- Type: application/octet-stream, Size: 2966 bytes --]

commit 866fac58f88d501ca23131830679d1f96625dda8
Author: Bill Wendling <morbo@google.com>
Date:   Fri Feb 12 14:05:19 2021 -0800

    dwarf_loader: perform the recoding and finalization separately
    
    Clang's LTO produces DWARF data where a CU may refer to tags in other
    CUs. This means that we need all tags from every CU available during
    recoding and finalization. So we gather the tag data in one phase and
    use it in the following phase.
    
    Signed-off-by: Bill Wendling <morbo@google.com>

diff --git a/dwarf_loader.c b/dwarf_loader.c
index 2b0d619..e83b247 100644
--- a/dwarf_loader.c
+++ b/dwarf_loader.c
@@ -2261,14 +2261,6 @@ static int die__process(Dwarf_Die *die, struct cu *cu)
 	return 0;
 }
 
-static int die__process_and_recode(Dwarf_Die *die, struct cu *cu)
-{
-	int ret = die__process(die, cu);
-	if (ret != 0)
-		return ret;
-	return cu__recode_dwarf_types(cu);
-}
-
 static int class_member__cache_byte_size(struct tag *tag, struct cu *cu,
 					 void *cookie)
 {
@@ -2498,6 +2490,20 @@ static int cus__load_module(struct cus *cus, struct conf_load *conf,
 		}
 	}
 
+	/*
+	 * CUs may refer to tags and types located in other CUs. To support
+	 * this, we process the CUs in two steps.
+	 *
+	 *   - Collect the CUs and add their type and tag entries into
+	 *     hashes shared between all CUs.
+	 *   - Then recode and finalize the CUs.
+	 */
+
+	/* A temporary list of all CU objects. */
+	struct cus *dcus = cus__new();
+	if (dcus == NULL)
+		return DWARF_CB_ABORT;
+
 	while (dwarf_nextcu(dw, off, &noff, &cuhl, NULL, &pointer_size,
 			    &offset_size) == 0) {
 		Dwarf_Die die_mem;
@@ -2528,24 +2534,41 @@ static int cus__load_module(struct cus *cus, struct conf_load *conf,
 		}
 		cu->little_endian = ehdr.e_ident[EI_DATA] == ELFDATA2LSB;
 
-		struct dwarf_cu dcu;
-
-		dwarf_cu__init(&dcu);
-		dcu.cu = cu;
-		dcu.type_unit = type_cu ? &type_dcu : NULL;
-		cu->priv = &dcu;
-		cu->dfops = &dwarf__ops;
-
-		if (die__process_and_recode(cu_die, cu) != 0)
+		struct dwarf_cu *dcu = malloc(sizeof(struct dwarf_cu));
+		if (dcu == NULL)
 			return DWARF_CB_ABORT;
 
-		if (finalize_cu_immediately(cus, cu, &dcu, conf)
-		    == LSK__STOP_LOADING)
+		dwarf_cu__init(dcu);
+		dcu->cu = cu;
+		dcu->type_unit = type_cu ? &type_dcu : NULL;
+		cu->priv = dcu;
+		cu->dfops = &dwarf__ops;
+
+		cus__add(dcus, cu);
+
+		if (die__process(cu_die, cu) != LSK__KEEPIT)
 			return DWARF_CB_ABORT;
 
 		off = noff;
 	}
 
+	/* Recode and finalize the CUs. */
+	struct cu *pos, *n;
+	list_for_each_entry_safe(pos, n, &dcus->cus, node) {
+		struct cu *cu = pos;
+		struct dwarf_cu *dcu = (struct dwarf_cu *)cu->priv;
+
+		if (cu__recode_dwarf_types(cu) != LSK__KEEPIT)
+			return DWARF_CB_ABORT;
+
+		if (finalize_cu_immediately(cus, cu, dcu, conf)
+		    == LSK__STOP_LOADING)
+			return DWARF_CB_ABORT;
+	}
+
+	/* We no longer need this list of CU objects. */
+	free(dcus);
+
 	if (type_lsk == LSK__DELETE)
 		cu__delete(type_cu);
 
