From: Peter Zijlstra <peterz@infradead.org>
To: David Vernet <void@manifault.com>
Cc: linux-kernel@vger.kernel.org, mingo@redhat.com,
juri.lelli@redhat.com, vincent.guittot@linaro.org,
dietmar.eggemann@arm.com, rostedt@goodmis.org,
bsegall@google.com, mgorman@suse.de, bristot@redhat.com,
vschneid@redhat.com, gautham.shenoy@amd.com,
kprateek.nayak@amd.com, aaron.lu@intel.com, clm@meta.com,
tj@kernel.org, roman.gushchin@linux.dev, kernel-team@meta.com
Subject: Re: [PATCH v2 7/7] sched: Move shared_runq to __{enqueue,dequeue}_entity()
Date: Tue, 11 Jul 2023 12:51:36 +0200 [thread overview]
Message-ID: <20230711105136.GH3062772@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <20230710200342.358255-8-void@manifault.com>

Ufff.. so I see how you ended up with the series in this form, but I
typically prefer to have less back and forth. Perhaps fold back at least
this last patch?
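
For what it's worth, a minimal sketch of that fold with an interactive
rebase; the hashes below are placeholders and this assumes the seven
patches sit at the top of the branch:

  $ git rebase -i HEAD~7
  # In the todo list, move the 7/7 commit directly below the commit it
  # amends (placeholder hashes shown) and change "pick" to "fixup", or
  # to "squash" if you want to keep its changelog text:
  pick aaaaaaa sched: Implement shared runqueue in CFS
  fixup bbbbbbb sched: Move shared_runq to __{enqueue,dequeue}_entity()

Creating the fixup with git commit --fixup=<sha> and then running
git rebase -i --autosquash does the same reordering automatically.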
Thread overview: 33+ messages
2023-07-10 20:03 [PATCH v2 0/7] sched: Implement shared runqueue in CFS David Vernet
2023-07-10 20:03 ` [PATCH v2 1/7] sched: Expose move_queued_task() from core.c David Vernet
2023-07-10 20:03 ` [PATCH v2 2/7] sched: Move is_cpu_allowed() into sched.h David Vernet
2023-07-10 20:03 ` [PATCH v2 3/7] sched: Check cpu_active() earlier in newidle_balance() David Vernet
2023-07-10 20:03 ` [PATCH v2 4/7] sched/fair: Add SHARED_RUNQ sched feature and skeleton calls David Vernet
2023-07-11  9:45   ` Peter Zijlstra
2023-07-11 16:19     ` David Vernet
2023-07-12  8:39   ` Abel Wu
2023-07-12 21:34     ` David Vernet
2023-07-10 20:03 ` [PATCH v2 5/7] sched: Implement shared runqueue in CFS David Vernet
2023-07-11 10:18   ` Peter Zijlstra
2023-07-11 16:26     ` David Vernet
2023-07-12  6:00       ` Gautham R. Shenoy
2023-07-12 19:13         ` David Vernet
2023-07-12 10:47   ` Abel Wu
2023-07-12 22:16     ` David Vernet
2023-07-13  3:43       ` Abel Wu
2023-07-13  4:05         ` David Vernet
2023-07-13  7:58   ` Aaron Lu
2023-07-13  8:29     ` Peter Zijlstra
2023-07-10 20:03 ` [PATCH v2 6/7] sched: Shard per-LLC shared runqueues David Vernet
2023-07-11 10:49   ` Peter Zijlstra
2023-07-11 19:57     ` David Vernet
2023-07-12 10:06       ` Gautham R. Shenoy
2023-07-12 12:22         ` Peter Zijlstra
2023-07-10 20:03 ` [PATCH v2 7/7] sched: Move shared_runq to __{enqueue,dequeue}_entity() David Vernet
2023-07-11 10:51   ` Peter Zijlstra [this message]
2023-07-11 16:30     ` David Vernet
2023-07-11 11:42 ` [PATCH v2 0/7] sched: Implement shared runqueue in CFS Peter Zijlstra
2023-07-11 21:33   ` David Vernet
2023-07-21  9:12   ` Gautham R. Shenoy
2023-07-25 20:22     ` David Vernet
2023-08-02  6:32       ` Gautham R. Shenoy