From: Sohaib Mohamed <sohaib.amhmd@gmail.com>
To: Ian Rogers <irogers@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Jiri Olsa <jolsa@redhat.com>, Namhyung Kim <namhyung@kernel.org>,
	Pierre Gondois <Pierre.Gondois@arm.com>,
	linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] perf bench: Fix memory leaks.
Date: Wed, 10 Nov 2021 04:24:56 +0200
Message-ID: <20211110022456.yd5m5on7v2jbqyzg@pc>
In-Reply-To: <CAP-5=fUc1uPti1MN3BZJ1JFus=ZYU66+gjh3bW30pUkE2sUxBQ@mail.gmail.com>

On Mon, Nov 08, 2021 at 10:57:25AM -0800, Ian Rogers wrote:
> On Sun, Nov 7, 2021 at 8:49 PM Sohaib Mohamed <sohaib.amhmd@gmail.com> wrote:
> >
> > ASan reports memory leaks while running:
> >
> > $ perf bench sched all
> >
> > Signed-off-by: Sohaib Mohamed <sohaib.amhmd@gmail.com>
>
> Acked-by: Ian Rogers <irogers@google.com>
>
> I think you can add:
> Fixes: e27454cc6352c ("perf bench: Add sched-messaging.c: Benchmark
> for scheduler and IPC mechanisms based on hackbench")
>
> This will then get the fix backported to older stable perf commands.

I just added these two lines to version 2:
https://lore.kernel.org/linux-perf-users/20211110022012.16620-1-sohaib.amhmd@gmail.com/
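
For reference, the trailers in v2 (presumably the Fixes: tag, kept on a
single line since tags are exempt from the 75-column wrap limit per
Documentation/process/submitting-patches.rst, plus Ian's ack) read:

  Fixes: e27454cc6352c ("perf bench: Add sched-messaging.c: Benchmark for scheduler and IPC mechanisms based on hackbench")
  Acked-by: Ian Rogers <irogers@google.com>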

Thanks,
Sohaib

>
> Thanks,
> Ian
>
> > ---
> >  tools/perf/bench/sched-messaging.c | 4 ++++
> >  1 file changed, 4 insertions(+)
> >
> > diff --git a/tools/perf/bench/sched-messaging.c b/tools/perf/bench/sched-messaging.c
> > index 488f6e6ba1a5..fa0ff4ce2b74 100644
> > --- a/tools/perf/bench/sched-messaging.c
> > +++ b/tools/perf/bench/sched-messaging.c
> > @@ -223,6 +223,8 @@ static unsigned int group(pthread_t *pth,
> >                 snd_ctx->out_fds[i] = fds[1];
> >                 if (!thread_mode)
> >                         close(fds[0]);
> > +
> > +               free(ctx);
> >         }
> >
> >         /* Now we have all the fds, fork the senders */
> > @@ -239,6 +241,8 @@ static unsigned int group(pthread_t *pth,
> >                 for (i = 0; i < num_fds; i++)
> >                         close(snd_ctx->out_fds[i]);
> >
> > +       free(snd_ctx);
> > +
> >         /* Return number of children to reap */
> >         return num_fds * 2;
> >  }
> > --
> > 2.25.1
> >
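
For anyone wanting to reproduce the ASan report, one recipe (an
assumption on my part; the exact invocation can vary by tree and
toolchain) is to rebuild perf with the sanitizer enabled and rerun
the benchmark:

  $ cd tools/perf
  $ make DEBUG=1 EXTRA_CFLAGS='-fsanitize=address -fno-omit-frame-pointer'
  $ ./perf bench sched all

LeakSanitizer should then list the unfreed allocations from group()
in its report when the benchmark exits.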
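
The underlying pattern (in process mode) is a context malloc()ed per
worker and handed to a forked child; the parent's copy is never freed.
Below is a minimal standalone sketch of that leak and its fix, using
hypothetical names rather than the actual sched-messaging.c code:

  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/wait.h>
  #include <unistd.h>

  struct worker_ctx {
  	int id;	/* stand-in for the real per-worker state */
  };

  int main(void)
  {
  	for (int i = 0; i < 4; i++) {
  		struct worker_ctx *ctx = malloc(sizeof(*ctx));

  		if (!ctx)
  			return 1;
  		ctx->id = i;

  		if (fork() == 0) {
  			/* Child works on its own COW copy, then exits. */
  			fprintf(stderr, "worker %d\n", ctx->id);
  			free(ctx);
  			_exit(0);
  		}

  		/*
  		 * Parent: without this free(), LeakSanitizer reports
  		 * one leaked allocation per worker at exit.
  		 */
  		free(ctx);
  	}

  	while (wait(NULL) > 0)
  		;	/* reap the children */
  	return 0;
  }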
