From: Will Deacon <will@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org,
    Will Deacon <will@kernel.org>, Catalin Marinas <catalin.marinas@arm.com>,
    Marc Zyngier <maz@kernel.org>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
    Peter Zijlstra <peterz@infradead.org>, Morten Rasmussen <morten.rasmussen@arm.com>,
    Qais Yousef <qais.yousef@arm.com>, Suren Baghdasaryan <surenb@google.com>,
    Quentin Perret <qperret@google.com>, Tejun Heo <tj@kernel.org>,
    Johannes Weiner <hannes@cmpxchg.org>, Ingo Molnar <mingo@redhat.com>,
    Juri Lelli <juri.lelli@redhat.com>, Vincent Guittot <vincent.guittot@linaro.org>,
    "Rafael J. Wysocki" <rjw@rjwysocki.net>, Dietmar Eggemann <dietmar.eggemann@arm.com>,
    Daniel Bristot de Oliveira <bristot@redhat.com>,
    Valentin Schneider <valentin.schneider@arm.com>,
    Mark Rutland <mark.rutland@arm.com>, kernel-team@android.com
Subject: [PATCH v9 08/20] cpuset: Cleanup cpuset_cpus_allowed_fallback() use in select_fallback_rq()
Date: Tue, 8 Jun 2021 19:03:01 +0100
Message-ID: <20210608180313.11502-9-will@kernel.org>
In-Reply-To: <20210608180313.11502-1-will@kernel.org>

select_fallback_rq() only needs to recheck for an allowed CPU if the
affinity mask of the task has changed since the last check.

Return a 'bool' from cpuset_cpus_allowed_fallback() to indicate whether
the affinity mask was updated, and use this to elide the allowed check
when the mask has been left alone.

No functional change.

Suggested-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
 include/linux/cpuset.h |  5 +++--
 kernel/cgroup/cpuset.c | 10 ++++++++--
 kernel/sched/core.c    |  3 +--
 3 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 414a8e694413..d2b9c41c8edf 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -59,7 +59,7 @@ extern void cpuset_wait_for_hotplug(void);
 extern void cpuset_read_lock(void);
 extern void cpuset_read_unlock(void);
 extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask);
-extern void cpuset_cpus_allowed_fallback(struct task_struct *p);
+extern bool cpuset_cpus_allowed_fallback(struct task_struct *p);
 extern nodemask_t cpuset_mems_allowed(struct task_struct *p);
 #define cpuset_current_mems_allowed (current->mems_allowed)
 void cpuset_init_current_mems_allowed(void);
@@ -188,8 +188,9 @@ static inline void cpuset_cpus_allowed(struct task_struct *p,
 	cpumask_copy(mask, task_cpu_possible_mask(p));
 }
 
-static inline void cpuset_cpus_allowed_fallback(struct task_struct *p)
+static inline bool cpuset_cpus_allowed_fallback(struct task_struct *p)
 {
+	return false;
 }
 
 static inline nodemask_t cpuset_mems_allowed(struct task_struct *p)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 4e7c271e3800..a6bab2259f98 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -3327,17 +3327,22 @@ void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask)
  * which will not contain a sane cpumask during cases such as cpu hotplugging.
  * This is the absolute last resort for the scheduler and it is only used if
  * _every_ other avenue has been traveled.
+ *
+ * Returns true if the affinity of @tsk was changed, false otherwise.
  **/
-void cpuset_cpus_allowed_fallback(struct task_struct *tsk)
+bool cpuset_cpus_allowed_fallback(struct task_struct *tsk)
 {
 	const struct cpumask *cs_mask;
+	bool changed = false;
 	const struct cpumask *possible_mask = task_cpu_possible_mask(tsk);
 
 	rcu_read_lock();
 	cs_mask = task_cs(tsk)->cpus_allowed;
-	if (is_in_v2_mode() && cpumask_subset(cs_mask, possible_mask))
+	if (is_in_v2_mode() && cpumask_subset(cs_mask, possible_mask)) {
 		do_set_cpus_allowed(tsk, cs_mask);
+		changed = true;
+	}
 	rcu_read_unlock();
 
 	/*
@@ -3357,6 +3362,7 @@ void cpuset_cpus_allowed_fallback(struct task_struct *tsk)
 	 * select_fallback_rq() will fix things ups and set cpu_possible_mask
 	 * if required.
 	 */
+	return changed;
 }
 
 void __init cpuset_init_current_mems_allowed(void)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0c1b6f1a6c91..9e75cb3fbc9c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2779,8 +2779,7 @@ static int select_fallback_rq(int cpu, struct task_struct *p)
 		/* No more Mr. Nice Guy. */
 		switch (state) {
 		case cpuset:
-			if (IS_ENABLED(CONFIG_CPUSETS)) {
-				cpuset_cpus_allowed_fallback(p);
+			if (cpuset_cpus_allowed_fallback(p)) {
 				state = possible;
 				break;
 			}
-- 
2.32.0.rc1.229.g3e70b5a671-goog
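
For readers following along outside the kernel tree, here is a minimal standalone
C sketch of the behaviour this patch gives select_fallback_rq(). It is illustrative
only: the enum values, the next_state() helper and the boolean parameter are
assumptions for this sketch, not kernel identifiers. The point it models is that the
cpuset state now only loops back to rescan the allowed CPUs when
cpuset_cpus_allowed_fallback() actually changed the task's affinity, and otherwise
falls straight through to widening the mask.

#include <stdbool.h>
#include <stdio.h>

/*
 * Simplified model (not kernel code) of the fallback state machine in
 * select_fallback_rq() after this patch.
 */
enum fallback_state { state_cpuset, state_possible, state_fail };

static enum fallback_state next_state(enum fallback_state state,
                                      bool cpuset_changed_mask)
{
    switch (state) {
    case state_cpuset:
        if (cpuset_changed_mask)
            return state_possible;  /* mask changed: worth rescanning allowed CPUs */
        /* fall through: cpuset left the mask alone, no point rescanning */
    case state_possible:
        return state_fail;          /* widen to the possible mask as a last resort */
    default:
        return state_fail;
    }
}

int main(void)
{
    printf("mask updated:   next state = %d\n", next_state(state_cpuset, true));
    printf("mask untouched: next state = %d\n", next_state(state_cpuset, false));
    return 0;
}

Compiled with an ordinary C compiler this just prints the next state for both
outcomes; in the real scheduler the surrounding loop then rescans p->cpus_ptr
with the (possibly updated) affinity mask.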