From: Namhyung Kim <namhyung@kernel.org>
To: Peter Zijlstra <peterz@infradead.org>
Cc: kan.liang@linux.intel.com, mingo@kernel.org,
	linux-kernel@vger.kernel.org, eranian@google.com,
	irogers@google.com, gmx@google.com, acme@kernel.org,
	jolsa@redhat.com, ak@linux.intel.com, benh@kernel.crashing.org,
	paulus@samba.org, mpe@ellerman.id.au
Subject: Re: [PATCH V2 3/3] perf: Optimize sched_task() in a context switch
Date: Wed, 2 Dec 2020 23:40:49 +0900	[thread overview]
Message-ID: <X8encVJSgbXVLGvT@google.com> (raw)
In-Reply-To: <20201201172903.GT3040@hirez.programming.kicks-ass.net>

Hi Peter and Kan,

On Tue, Dec 01, 2020 at 06:29:03PM +0100, Peter Zijlstra wrote:
> On Mon, Nov 30, 2020 at 11:38:42AM -0800, kan.liang@linux.intel.com wrote:
> > From: Kan Liang <kan.liang@linux.intel.com>
> > 
> > Some calls to sched_task() in a context switch can be avoided. For
> > example, large PEBS only requires flushing the buffer on context switch
> > out. The current code still invokes sched_task() for large PEBS on
> > context switch in.
> 
> I still hate this one, how's something like this then?
> Which I still don't really like... but at least it's simpler.
> 
> (completely untested, may contain spurious edits, might ICE the
> compiler and set your pets on fire if it doesn't)

I've tested Kan's v2 patches and they worked well.  I will test your
version (with the fix in the other email) too.
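
Just to spell out the idea being discussed, here is a rough user-space
sketch (purely illustrative -- this is not Kan's or Peter's actual code,
and every name in it is made up): a PMU advertises which context-switch
direction(s) its sched_task() callback actually needs, and the generic
code skips the indirect call for the other direction.  For large PEBS
that would be switch-out only, since all it needs there is the buffer
flush.

  /*
   * Hypothetical sketch, NOT the kernel patch: each PMU says which
   * context-switch direction(s) need its callback, and the caller
   * skips the indirect call otherwise.
   */
  #include <stdbool.h>
  #include <stdio.h>

  #define SCHED_CB_SW_OUT  (1u << 0)   /* callback needed on switch-out */
  #define SCHED_CB_SW_IN   (1u << 1)   /* callback needed on switch-in  */

  struct fake_pmu {
          const char *name;
          unsigned int sched_cb_mask;
          void (*sched_task)(struct fake_pmu *pmu, bool sched_in);
  };

  static void pebs_sched_task(struct fake_pmu *pmu, bool sched_in)
  {
          /* Large PEBS only has work on switch-out: drain the buffer. */
          printf("%s: %s\n", pmu->name,
                 sched_in ? "nothing to do" : "flush PEBS buffer");
  }

  static void ctx_sched_task(struct fake_pmu *pmu, bool sched_in)
  {
          unsigned int need = sched_in ? SCHED_CB_SW_IN : SCHED_CB_SW_OUT;

          /* Skip the indirect call when this direction isn't needed. */
          if (!(pmu->sched_cb_mask & need))
                  return;

          pmu->sched_task(pmu, sched_in);
  }

  int main(void)
  {
          struct fake_pmu pebs = {
                  .name          = "pebs",
                  .sched_cb_mask = SCHED_CB_SW_OUT,  /* switch-out only */
                  .sched_task    = pebs_sched_task,
          };

          ctx_sched_task(&pebs, false);  /* switch-out: callback runs    */
          ctx_sched_task(&pebs, true);   /* switch-in:  callback skipped */
          return 0;
  }

In that shape the switch-in path costs only a flag test instead of an
indirect call; whether that is measurable is what the numbers below are
about.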


> 
> And given this is an optimization, can we actually measure it to improve
> matters?

I just checked the perf bench sched pipe result.  Without perf record
running, it usually takes less than 7 seconds.  Note that this (and
each result below) is the median of 10 runs.

  # perf bench sched pipe
  # Running 'sched/pipe' benchmark:
  # Executed 1000000 pipe operations between two processes

     Total time: 6.875 [sec]

       6.875700 usecs/op
         145439 ops/sec


Then I ran it again under perf record as below.  This is the result
with only patches 1 and 2 applied.

  # perf record -aB -c 100001 -e cycles:pp perf bench sched pipe
  # Running 'sched/pipe' benchmark:
  # Executed 1000000 pipe operations between two processes

     Total time: 8.198 [sec]

       8.198952 usecs/op
         121966 ops/sec
  [ perf record: Woken up 10 times to write data ]
  [ perf record: Captured and wrote 4.972 MB perf.data ]


With patch 3 applied on top, the total time went down slightly.

  # perf record -aB -c 100001 -e cycles:pp perf bench sched pipe
  # Running 'sched/pipe' benchmark:
  # Executed 1000000 pipe operations between two processes

     Total time: 7.785 [sec]

       7.785119 usecs/op
         128450 ops/sec
  [ perf record: Woken up 12 times to write data ]
  [ perf record: Captured and wrote 4.622 MB perf.data ]
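
For reference, comparing the medians above: relative to the 6.876
usecs/op baseline, the perf record overhead drops from about 19% with
patches 1-2 (8.199 / 6.876) to about 13% with patch 3 added
(7.785 / 6.876), i.e. roughly a 5% reduction in per-op time in the
monitored case ((8.199 - 7.785) / 8.199 ~= 5%).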


Thanks,
Namhyung

Thread overview: 18+ messages
2020-11-30 19:38 [PATCH V2 1/3] perf/core: Flush PMU internal buffers for per-CPU events kan.liang
2020-11-30 19:38 ` [PATCH V2 2/3] perf/x86/intel: Set PERF_ATTACH_SCHED_CB for large PEBS and LBR kan.liang
2021-03-01 10:16   ` [tip: perf/urgent] " tip-bot2 for Kan Liang
2021-03-06 11:54   ` tip-bot2 for Kan Liang
2020-11-30 19:38 ` [PATCH V2 3/3] perf: Optimize sched_task() in a context switch kan.liang
2020-12-01 17:29   ` Peter Zijlstra
2020-12-02 10:26     ` Peter Zijlstra
2020-12-02 14:40     ` Namhyung Kim [this message]
2020-12-04  7:14     ` Namhyung Kim
2020-12-10  7:13       ` Namhyung Kim
2020-12-10 13:52         ` Liang, Kan
2020-12-10 14:25           ` Peter Zijlstra
2021-01-18  7:04             ` Namhyung Kim
2021-01-27  4:41               ` Namhyung Kim
2021-02-22  9:46                 ` Namhyung Kim
2020-12-01 17:21 ` [PATCH V2 1/3] perf/core: Flush PMU internal buffers for per-CPU events Peter Zijlstra
2021-03-01 10:16 ` [tip: perf/urgent] " tip-bot2 for Kan Liang
2021-03-06 11:54 ` tip-bot2 for Kan Liang
