* [PATCH v5 01/24] x86/resctrl: Track the closid with the rmid
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
@ 2023-07-28 16:42 ` James Morse
2023-08-09 22:32 ` Reinette Chatre
2023-08-15 0:09 ` Fenghua Yu
2023-07-28 16:42 ` [PATCH v5 02/24] x86/resctrl: Access per-rmid structures by index James Morse
From: James Morse @ 2023-07-28 16:42 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
x86's RMIDs are independent of the CLOSID. An RMID can be allocated,
used and freed without considering the CLOSID.

MPAM's equivalent feature is PMG, which is not an independent number;
it extends the CLOSID/PARTID space. For MPAM, only PMG-bits worth of
'RMID' can be allocated for a single CLOSID.
i.e. if there is 1 bit of PMG space, then each CLOSID can have two
monitor groups.

To allow resctrl to disambiguate RMID values for different CLOSIDs,
everything in resctrl that keeps an RMID value needs to know the CLOSID
too. The CLOSID will always be ignored on x86.
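As an illustration (a sketch with invented names, not actual MPAM code),
per-monitor-group state on such an architecture has to be indexed by both
values, because an RMID value alone is ambiguous:

#define EXAMPLE_NUM_CLOSID	4
#define EXAMPLE_NUM_PMG		2	/* 1 bit of PMG space */

static u64 example_counters[EXAMPLE_NUM_CLOSID][EXAMPLE_NUM_PMG];

static u64 example_read(u32 closid, u32 rmid)
{
	/* Every CLOSID has its own 'rmid == 1'; the pair picks exactly one */
	return example_counters[closid][rmid];
}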
Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
Reviewed-by: Xin Hao <xhao@linux.alibaba.com>
Signed-off-by: James Morse <james.morse@arm.com>
---
Is there a better term for 'the unique identifier for a monitor group'?
Using RMID for that here may be confusing...
Changes since v1:
* Added comment in struct rmid_entry
Changes since v2:
* Moved X86_RESCTRL_BAD_CLOSID from a subsequent patch
Changes since v3:
* Renamed X86_RESCTRL_BAD_CLOSID to EMPTY
* Clarified a few comments and kernel-doc
---
arch/x86/include/asm/resctrl.h | 7 +++
arch/x86/kernel/cpu/resctrl/internal.h | 2 +-
arch/x86/kernel/cpu/resctrl/monitor.c | 65 ++++++++++++++---------
arch/x86/kernel/cpu/resctrl/pseudo_lock.c | 4 +-
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 12 ++---
include/linux/resctrl.h | 12 ++++-
6 files changed, 65 insertions(+), 37 deletions(-)
diff --git a/arch/x86/include/asm/resctrl.h b/arch/x86/include/asm/resctrl.h
index 255a78d9d906..29999f52b461 100644
--- a/arch/x86/include/asm/resctrl.h
+++ b/arch/x86/include/asm/resctrl.h
@@ -7,6 +7,13 @@
#include <linux/sched.h>
#include <linux/jump_label.h>
+/*
+ * This value can never be a valid CLOSID, and is used when mapping a
+ * (closid, rmid) pair to an index and back. On x86 only the RMID is
+ * needed.
+ */
+#define X86_RESCTRL_EMPTY_CLOSID ((u32)~0)
+
/**
* struct resctrl_pqr_state - State cache for the PQR MSR
* @cur_rmid: The cached Resource Monitoring ID
diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index 85ceaf9a31ac..f2da908bb079 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -535,7 +535,7 @@ struct rdt_domain *get_domain_from_cpu(int cpu, struct rdt_resource *r);
int closids_supported(void);
void closid_free(int closid);
int alloc_rmid(void);
-void free_rmid(u32 rmid);
+void free_rmid(u32 closid, u32 rmid);
int rdt_get_mon_l3_config(struct rdt_resource *r);
bool __init rdt_cpu_has(int flag);
void mon_event_count(void *info);
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index ded1fc7cb7cb..fa66029de41c 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -25,6 +25,12 @@
#include "internal.h"
struct rmid_entry {
+ /*
+ * Some architectures' resctrl_arch_rmid_read() needs the CLOSID value
+ * in order to access the correct monitor. This field provides the
+ * value to list walkers like __check_limbo(). On x86 this is ignored.
+ */
+ u32 closid;
u32 rmid;
int busy;
struct list_head list;
@@ -136,7 +142,7 @@ static inline u64 get_corrected_mbm_count(u32 rmid, unsigned long val)
return val;
}
-static inline struct rmid_entry *__rmid_entry(u32 rmid)
+static inline struct rmid_entry *__rmid_entry(u32 closid, u32 rmid)
{
struct rmid_entry *entry;
@@ -190,7 +196,8 @@ static struct arch_mbm_state *get_arch_mbm_state(struct rdt_hw_domain *hw_dom,
}
void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_domain *d,
- u32 rmid, enum resctrl_event_id eventid)
+ u32 closid, u32 rmid,
+ enum resctrl_event_id eventid)
{
struct rdt_hw_domain *hw_dom = resctrl_to_arch_dom(d);
struct arch_mbm_state *am;
@@ -230,7 +237,8 @@ static u64 mbm_overflow_count(u64 prev_msr, u64 cur_msr, unsigned int width)
}
int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
- u32 rmid, enum resctrl_event_id eventid, u64 *val)
+ u32 closid, u32 rmid, enum resctrl_event_id eventid,
+ u64 *val)
{
struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
struct rdt_hw_domain *hw_dom = resctrl_to_arch_dom(d);
@@ -285,9 +293,9 @@ void __check_limbo(struct rdt_domain *d, bool force_free)
if (nrmid >= r->num_rmid)
break;
- entry = __rmid_entry(nrmid);
+ entry = __rmid_entry(X86_RESCTRL_EMPTY_CLOSID, nrmid);// temporary
- if (resctrl_arch_rmid_read(r, d, entry->rmid,
+ if (resctrl_arch_rmid_read(r, d, entry->closid, entry->rmid,
QOS_L3_OCCUP_EVENT_ID, &val)) {
rmid_dirty = true;
} else {
@@ -342,7 +350,8 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
cpu = get_cpu();
list_for_each_entry(d, &r->domains, list) {
if (cpumask_test_cpu(cpu, &d->cpu_mask)) {
- err = resctrl_arch_rmid_read(r, d, entry->rmid,
+ err = resctrl_arch_rmid_read(r, d, entry->closid,
+ entry->rmid,
QOS_L3_OCCUP_EVENT_ID,
&val);
if (err || val <= resctrl_rmid_realloc_threshold)
@@ -366,7 +375,7 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
list_add_tail(&entry->list, &rmid_free_lru);
}
-void free_rmid(u32 rmid)
+void free_rmid(u32 closid, u32 rmid)
{
struct rmid_entry *entry;
@@ -375,7 +384,7 @@ void free_rmid(u32 rmid)
lockdep_assert_held(&rdtgroup_mutex);
- entry = __rmid_entry(rmid);
+ entry = __rmid_entry(closid, rmid);
if (is_llc_occupancy_enabled())
add_rmid_to_limbo(entry);
@@ -383,8 +392,8 @@ void free_rmid(u32 rmid)
list_add_tail(&entry->list, &rmid_free_lru);
}
-static struct mbm_state *get_mbm_state(struct rdt_domain *d, u32 rmid,
- enum resctrl_event_id evtid)
+static struct mbm_state *get_mbm_state(struct rdt_domain *d, u32 closid,
+ u32 rmid, enum resctrl_event_id evtid)
{
switch (evtid) {
case QOS_L3_MBM_TOTAL_EVENT_ID:
@@ -396,20 +405,21 @@ static struct mbm_state *get_mbm_state(struct rdt_domain *d, u32 rmid,
}
}
-static int __mon_event_count(u32 rmid, struct rmid_read *rr)
+static int __mon_event_count(u32 closid, u32 rmid, struct rmid_read *rr)
{
struct mbm_state *m;
u64 tval = 0;
if (rr->first) {
- resctrl_arch_reset_rmid(rr->r, rr->d, rmid, rr->evtid);
- m = get_mbm_state(rr->d, rmid, rr->evtid);
+ resctrl_arch_reset_rmid(rr->r, rr->d, closid, rmid, rr->evtid);
+ m = get_mbm_state(rr->d, closid, rmid, rr->evtid);
if (m)
memset(m, 0, sizeof(struct mbm_state));
return 0;
}
- rr->err = resctrl_arch_rmid_read(rr->r, rr->d, rmid, rr->evtid, &tval);
+ rr->err = resctrl_arch_rmid_read(rr->r, rr->d, closid, rmid, rr->evtid,
+ &tval);
if (rr->err)
return rr->err;
@@ -429,7 +439,7 @@ static int __mon_event_count(u32 rmid, struct rmid_read *rr)
* __mon_event_count() is compared with the chunks value from the previous
* invocation. This must be called once per second to maintain values in MBps.
*/
-static void mbm_bw_count(u32 rmid, struct rmid_read *rr)
+static void mbm_bw_count(u32 closid, u32 rmid, struct rmid_read *rr)
{
struct mbm_state *m = &rr->d->mbm_local[rmid];
u64 cur_bw, bytes, cur_bytes;
@@ -459,7 +469,7 @@ void mon_event_count(void *info)
rdtgrp = rr->rgrp;
- ret = __mon_event_count(rdtgrp->mon.rmid, rr);
+ ret = __mon_event_count(rdtgrp->closid, rdtgrp->mon.rmid, rr);
/*
* For Ctrl groups read data from child monitor groups and
@@ -470,7 +480,8 @@ void mon_event_count(void *info)
if (rdtgrp->type == RDTCTRL_GROUP) {
list_for_each_entry(entry, head, mon.crdtgrp_list) {
- if (__mon_event_count(entry->mon.rmid, rr) == 0)
+ if (__mon_event_count(rdtgrp->closid, entry->mon.rmid,
+ rr) == 0)
ret = 0;
}
}
@@ -600,7 +611,8 @@ static void update_mba_bw(struct rdtgroup *rgrp, struct rdt_domain *dom_mbm)
}
}
-static void mbm_update(struct rdt_resource *r, struct rdt_domain *d, int rmid)
+static void mbm_update(struct rdt_resource *r, struct rdt_domain *d,
+ u32 closid, u32 rmid)
{
struct rmid_read rr;
@@ -615,12 +627,12 @@ static void mbm_update(struct rdt_resource *r, struct rdt_domain *d, int rmid)
if (is_mbm_total_enabled()) {
rr.evtid = QOS_L3_MBM_TOTAL_EVENT_ID;
rr.val = 0;
- __mon_event_count(rmid, &rr);
+ __mon_event_count(closid, rmid, &rr);
}
if (is_mbm_local_enabled()) {
rr.evtid = QOS_L3_MBM_LOCAL_EVENT_ID;
rr.val = 0;
- __mon_event_count(rmid, &rr);
+ __mon_event_count(closid, rmid, &rr);
/*
* Call the MBA software controller only for the
@@ -628,7 +640,7 @@ static void mbm_update(struct rdt_resource *r, struct rdt_domain *d, int rmid)
* the software controller explicitly.
*/
if (is_mba_sc(NULL))
- mbm_bw_count(rmid, &rr);
+ mbm_bw_count(closid, rmid, &rr);
}
}
@@ -685,11 +697,11 @@ void mbm_handle_overflow(struct work_struct *work)
d = container_of(work, struct rdt_domain, mbm_over.work);
list_for_each_entry(prgrp, &rdt_all_groups, rdtgroup_list) {
- mbm_update(r, d, prgrp->mon.rmid);
+ mbm_update(r, d, prgrp->closid, prgrp->mon.rmid);
head = &prgrp->mon.crdtgrp_list;
list_for_each_entry(crgrp, head, mon.crdtgrp_list)
- mbm_update(r, d, crgrp->mon.rmid);
+ mbm_update(r, d, crgrp->closid, crgrp->mon.rmid);
if (is_mba_sc(NULL))
update_mba_bw(prgrp, d);
@@ -732,10 +744,11 @@ static int dom_data_init(struct rdt_resource *r)
}
/*
- * RMID 0 is special and is always allocated. It's used for all
- * tasks that are not monitored.
+ * CLOSID 0 and RMID 0 are special and are always allocated. These are
+ * used for the rdtgroup_default control group, which will be set up later.
+ * See rdtgroup_setup_root().
*/
- entry = __rmid_entry(0);
+ entry = __rmid_entry(0, 0);
list_del(&entry->list);
return 0;
diff --git a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
index 458cb7419502..aeadaeb5df9a 100644
--- a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
+++ b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
@@ -738,7 +738,7 @@ int rdtgroup_locksetup_enter(struct rdtgroup *rdtgrp)
* anymore when this group would be used for pseudo-locking. This
* is safe to call on platforms not capable of monitoring.
*/
- free_rmid(rdtgrp->mon.rmid);
+ free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
ret = 0;
goto out;
@@ -773,7 +773,7 @@ int rdtgroup_locksetup_exit(struct rdtgroup *rdtgrp)
ret = rdtgroup_locksetup_user_restore(rdtgrp);
if (ret) {
- free_rmid(rdtgrp->mon.rmid);
+ free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
return ret;
}
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 725344048f85..f7fda4fc2c9e 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -2714,7 +2714,7 @@ static void free_all_child_rdtgrp(struct rdtgroup *rdtgrp)
head = &rdtgrp->mon.crdtgrp_list;
list_for_each_entry_safe(sentry, stmp, head, mon.crdtgrp_list) {
- free_rmid(sentry->mon.rmid);
+ free_rmid(sentry->closid, sentry->mon.rmid);
list_del(&sentry->mon.crdtgrp_list);
if (atomic_read(&sentry->waitcount) != 0)
@@ -2754,7 +2754,7 @@ static void rmdir_all_sub(void)
cpumask_or(&rdtgroup_default.cpu_mask,
&rdtgroup_default.cpu_mask, &rdtgrp->cpu_mask);
- free_rmid(rdtgrp->mon.rmid);
+ free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
kernfs_remove(rdtgrp->kn);
list_del(&rdtgrp->rdtgroup_list);
@@ -3252,7 +3252,7 @@ static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
return 0;
out_idfree:
- free_rmid(rdtgrp->mon.rmid);
+ free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
out_destroy:
kernfs_put(rdtgrp->kn);
kernfs_remove(rdtgrp->kn);
@@ -3266,7 +3266,7 @@ static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
static void mkdir_rdt_prepare_clean(struct rdtgroup *rgrp)
{
kernfs_remove(rgrp->kn);
- free_rmid(rgrp->mon.rmid);
+ free_rmid(rgrp->closid, rgrp->mon.rmid);
rdtgroup_remove(rgrp);
}
@@ -3415,7 +3415,7 @@ static int rdtgroup_rmdir_mon(struct rdtgroup *rdtgrp, cpumask_var_t tmpmask)
update_closid_rmid(tmpmask, NULL);
rdtgrp->flags = RDT_DELETED;
- free_rmid(rdtgrp->mon.rmid);
+ free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
/*
* Remove the rdtgrp from the parent ctrl_mon group's list
@@ -3461,8 +3461,8 @@ static int rdtgroup_rmdir_ctrl(struct rdtgroup *rdtgrp, cpumask_var_t tmpmask)
cpumask_or(tmpmask, tmpmask, &rdtgrp->cpu_mask);
update_closid_rmid(tmpmask, NULL);
+ free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
closid_free(rdtgrp->closid);
- free_rmid(rdtgrp->mon.rmid);
rdtgroup_ctrl_remove(rdtgrp);
diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
index 8334eeacfec5..c413bb11d336 100644
--- a/include/linux/resctrl.h
+++ b/include/linux/resctrl.h
@@ -225,6 +225,9 @@ void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d);
* for this resource and domain.
* @r: resource that the counter should be read from.
* @d: domain that the counter should be read from.
+ * @closid: closid that matches the rmid. Depending on the architecture, the
+ * counter may match traffic of both @closid and @rmid, or @rmid
+ * only.
* @rmid: rmid of the counter to read.
* @eventid: eventid to read, e.g. L3 occupancy.
* @val: result of the counter read in bytes.
@@ -235,20 +238,25 @@ void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d);
* 0 on success, or -EIO, -EINVAL etc on error.
*/
int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
- u32 rmid, enum resctrl_event_id eventid, u64 *val);
+ u32 closid, u32 rmid, enum resctrl_event_id eventid,
+ u64 *val);
+
/**
* resctrl_arch_reset_rmid() - Reset any private state associated with rmid
* and eventid.
* @r: The domain's resource.
* @d: The rmid's domain.
+ * @closid: closid that matches the rmid. Depending on the architecture, the
+ * counter may match traffic of both @closid and @rmid, or @rmid only.
* @rmid: The rmid whose counter values should be reset.
* @eventid: The eventid whose counter values should be reset.
*
* This can be called from any CPU.
*/
void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_domain *d,
- u32 rmid, enum resctrl_event_id eventid);
+ u32 closid, u32 rmid,
+ enum resctrl_event_id eventid);
/**
* resctrl_arch_reset_rmid_all() - Reset all private state associated with
--
2.39.2
* Re: [PATCH v5 01/24] x86/resctrl: Track the closid with the rmid
2023-07-28 16:42 ` [PATCH v5 01/24] x86/resctrl: Track the closid with the rmid James Morse
@ 2023-08-09 22:32 ` Reinette Chatre
2023-08-24 16:50 ` James Morse
2023-08-15 0:09 ` Fenghua Yu
From: Reinette Chatre @ 2023-08-09 22:32 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi James,
On 7/28/2023 9:42 AM, James Morse wrote:
> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
> index ded1fc7cb7cb..fa66029de41c 100644
> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
> @@ -25,6 +25,12 @@
> #include "internal.h"
>
> struct rmid_entry {
> + /*
> + * Some architectures' resctrl_arch_rmid_read() needs the CLOSID value
> + * in order to access the correct monitor. This field provides the
> + * value to list walkers like __check_limbo(). On x86 this is ignored.
> + */
> + u32 closid;
> u32 rmid;
> int busy;
> struct list_head list;
In Documentation/process/maintainer-tip.rst the x86 maintainers ask to avoid
documenting struct members within the declaration. Could you please use
kernel-doc format instead as is requested there?
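For example, something like this might work (the member descriptions are
just my guess from the patch):

/**
 * struct rmid_entry - a free or limbo RMID and the CLOSID it belongs to.
 * @closid:	The CLOSID for this entry. Some architectures'
 *		resctrl_arch_rmid_read() needs the CLOSID value to access
 *		the correct monitor. Ignored on x86.
 * @rmid:	The RMID for this entry.
 * @busy:	The number of domains where this RMID may still be dirty.
 * @list:	Member of the rmid_free_lru list when the RMID is unused.
 */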
...
> @@ -429,7 +439,7 @@ static int __mon_event_count(u32 rmid, struct rmid_read *rr)
> * __mon_event_count() is compared with the chunks value from the previous
> * invocation. This must be called once per second to maintain values in MBps.
> */
> -static void mbm_bw_count(u32 rmid, struct rmid_read *rr)
> +static void mbm_bw_count(u32 closid, u32 rmid, struct rmid_read *rr)
> {
> struct mbm_state *m = &rr->d->mbm_local[rmid];
> u64 cur_bw, bytes, cur_bytes;
> @@ -459,7 +469,7 @@ void mon_event_count(void *info)
>
> rdtgrp = rr->rgrp;
>
> - ret = __mon_event_count(rdtgrp->mon.rmid, rr);
> + ret = __mon_event_count(rdtgrp->closid, rdtgrp->mon.rmid, rr);
>
> /*
> * For Ctrl groups read data from child monitor groups and
> @@ -470,7 +480,8 @@ void mon_event_count(void *info)
>
> if (rdtgrp->type == RDTCTRL_GROUP) {
> list_for_each_entry(entry, head, mon.crdtgrp_list) {
> - if (__mon_event_count(entry->mon.rmid, rr) == 0)
> + if (__mon_event_count(rdtgrp->closid, entry->mon.rmid,
> + rr) == 0)
> ret = 0;
> }
> }
I understand that the parent and child resource groups should have the same
closid, but that makes me wonder: why do you use the parent's closid in this
change, while later, in the change to mbm_handle_overflow() where the monitor
groups are traversed, you use the closid from the child resource group?
> @@ -600,7 +611,8 @@ static void update_mba_bw(struct rdtgroup *rgrp, struct rdt_domain *dom_mbm)
> }
> }
>
> -static void mbm_update(struct rdt_resource *r, struct rdt_domain *d, int rmid)
> +static void mbm_update(struct rdt_resource *r, struct rdt_domain *d,
> + u32 closid, u32 rmid)
> {
> struct rmid_read rr;
>
> @@ -615,12 +627,12 @@ static void mbm_update(struct rdt_resource *r, struct rdt_domain *d, int rmid)
> if (is_mbm_total_enabled()) {
> rr.evtid = QOS_L3_MBM_TOTAL_EVENT_ID;
> rr.val = 0;
> - __mon_event_count(rmid, &rr);
> + __mon_event_count(closid, rmid, &rr);
> }
> if (is_mbm_local_enabled()) {
> rr.evtid = QOS_L3_MBM_LOCAL_EVENT_ID;
> rr.val = 0;
> - __mon_event_count(rmid, &rr);
> + __mon_event_count(closid, rmid, &rr);
>
> /*
> * Call the MBA software controller only for the
> @@ -628,7 +640,7 @@ static void mbm_update(struct rdt_resource *r, struct rdt_domain *d, int rmid)
> * the software controller explicitly.
> */
> if (is_mba_sc(NULL))
> - mbm_bw_count(rmid, &rr);
> + mbm_bw_count(closid, rmid, &rr);
> }
> }
>
> @@ -685,11 +697,11 @@ void mbm_handle_overflow(struct work_struct *work)
> d = container_of(work, struct rdt_domain, mbm_over.work);
>
> list_for_each_entry(prgrp, &rdt_all_groups, rdtgroup_list) {
> - mbm_update(r, d, prgrp->mon.rmid);
> + mbm_update(r, d, prgrp->closid, prgrp->mon.rmid);
>
> head = &prgrp->mon.crdtgrp_list;
> list_for_each_entry(crgrp, head, mon.crdtgrp_list)
> - mbm_update(r, d, crgrp->mon.rmid);
> + mbm_update(r, d, crgrp->closid, crgrp->mon.rmid);
>
> if (is_mba_sc(NULL))
> update_mba_bw(prgrp, d);
Above hunk is what I referred to above.
> @@ -732,10 +744,11 @@ static int dom_data_init(struct rdt_resource *r)
> }
>
> /*
> - * RMID 0 is special and is always allocated. It's used for all
> - * tasks that are not monitored.
> + * CLOSID 0 and RMID 0 are special and are always allocated. These are
> + * used for the rdtgroup_default control group, which will be set up later.
> + * See rdtgroup_setup_root().
> */
> - entry = __rmid_entry(0);
> + entry = __rmid_entry(0, 0);
There seems to be an ordering issue here with the hardcoded values for
RESCTRL_RESERVED_CLOSID and RESCTRL_RESERVED_RMID used before those defines
are introduced in the next patch. That may be ok since this code changes in
the next patch ... but the comment is left referring to the constant. Maybe
it would just be clearer if the defines are moved to this patch?
> list_del(&entry->list);
>
> return 0;
> diff --git a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
> index 458cb7419502..aeadaeb5df9a 100644
> --- a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
> +++ b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
> @@ -738,7 +738,7 @@ int rdtgroup_locksetup_enter(struct rdtgroup *rdtgrp)
> * anymore when this group would be used for pseudo-locking. This
> * is safe to call on platforms not capable of monitoring.
> */
> - free_rmid(rdtgrp->mon.rmid);
> + free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
>
> ret = 0;
> goto out;
> @@ -773,7 +773,7 @@ int rdtgroup_locksetup_exit(struct rdtgroup *rdtgrp)
>
> ret = rdtgroup_locksetup_user_restore(rdtgrp);
> if (ret) {
> - free_rmid(rdtgrp->mon.rmid);
> + free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
> return ret;
> }
>
> diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> index 725344048f85..f7fda4fc2c9e 100644
> --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> @@ -2714,7 +2714,7 @@ static void free_all_child_rdtgrp(struct rdtgroup *rdtgrp)
>
> head = &rdtgrp->mon.crdtgrp_list;
> list_for_each_entry_safe(sentry, stmp, head, mon.crdtgrp_list) {
> - free_rmid(sentry->mon.rmid);
> + free_rmid(sentry->closid, sentry->mon.rmid);
> list_del(&sentry->mon.crdtgrp_list);
>
> if (atomic_read(&sentry->waitcount) != 0)
> @@ -2754,7 +2754,7 @@ static void rmdir_all_sub(void)
> cpumask_or(&rdtgroup_default.cpu_mask,
> &rdtgroup_default.cpu_mask, &rdtgrp->cpu_mask);
>
> - free_rmid(rdtgrp->mon.rmid);
> + free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
>
> kernfs_remove(rdtgrp->kn);
> list_del(&rdtgrp->rdtgroup_list);
> @@ -3252,7 +3252,7 @@ static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
> return 0;
>
> out_idfree:
> - free_rmid(rdtgrp->mon.rmid);
> + free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
> out_destroy:
> kernfs_put(rdtgrp->kn);
> kernfs_remove(rdtgrp->kn);
This does not look right ... as you note in later patches closid_alloc() is called
_after_ mkdir_rdt_prepare(). Adding rdtgrp->closid to free_rmid() at this point would
thus use an uninitialized value. I know this code is being moved in subsequent
patches so it seems the patches may need to be reordered?
> @@ -3266,7 +3266,7 @@ static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
> static void mkdir_rdt_prepare_clean(struct rdtgroup *rgrp)
> {
> kernfs_remove(rgrp->kn);
> - free_rmid(rgrp->mon.rmid);
> + free_rmid(rgrp->closid, rgrp->mon.rmid);
> rdtgroup_remove(rgrp);
> }
>
Related issue to above. Looking at how mkdir_rdt_prepare_clean() is called, right
after closid is freed, this seems to be use-after-free? Another motivation to
re-order the patches?
Reinette
* Re: [PATCH v5 01/24] x86/resctrl: Track the closid with the rmid
2023-08-09 22:32 ` Reinette Chatre
@ 2023-08-24 16:50 ` James Morse
From: James Morse @ 2023-08-24 16:50 UTC (permalink / raw)
To: Reinette Chatre, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi Reinette,
On 09/08/2023 23:32, Reinette Chatre wrote:
> On 7/28/2023 9:42 AM, James Morse wrote:
>
>> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
>> index ded1fc7cb7cb..fa66029de41c 100644
>> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
>> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
>> @@ -470,7 +480,8 @@ void mon_event_count(void *info)
>>
>> if (rdtgrp->type == RDTCTRL_GROUP) {
>> list_for_each_entry(entry, head, mon.crdtgrp_list) {
>> - if (__mon_event_count(entry->mon.rmid, rr) == 0)
>> + if (__mon_event_count(rdtgrp->closid, entry->mon.rmid,
>> + rr) == 0)
>> ret = 0;
>> }
>> }
> I understand that the parent and child resource groups should have the same
> closid, but that makes me wonder why you use the parent closid in this change,
> but later in the change to mbm_handle_overflow() where the monitor groups are
> traversed you use the closid from the child resource group?
I'd intended to always use the values from the same struct, as that is the least
surprising thing to do. This is the odd one out; I'll fix it.
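(That is, the child loop would presumably read both values from the child
entry, e.g. __mon_event_count(entry->closid, entry->mon.rmid, rr).)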
>> @@ -732,10 +744,11 @@ static int dom_data_init(struct rdt_resource *r)
>> }
>>
>> /*
>> - * RMID 0 is special and is always allocated. It's used for all
>> - * tasks that are not monitored.
>> + * CLOSID 0 and RMID 0 are special and are always allocated. These are
>> + * used for the rdtgroup_default control group, which will be set up later.
>> + * See rdtgroup_setup_root().
>> */
>> - entry = __rmid_entry(0);
>> + entry = __rmid_entry(0, 0);
>
> There seems to be an ordering issue here with the hardcoded values for
> RESCTRL_RESERVED_CLOSID and RESCTRL_RESERVED_RMID used before those defines
> are introduced in the next patch. That may be ok since this code changes in
> the next patch ... but the comment is left referring to the constant. Maybe
> it would just be clearer if the defines are moved to this patch?
Sure,
>> diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> index 725344048f85..f7fda4fc2c9e 100644
>> --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> @@ -3252,7 +3252,7 @@ static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
>> return 0;
>>
>> out_idfree:
>> - free_rmid(rdtgrp->mon.rmid);
>> + free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
>> out_destroy:
>> kernfs_put(rdtgrp->kn);
>> kernfs_remove(rdtgrp->kn);
>
> This does not look right ... as you note in later patches closid_alloc() is called
> _after_ mkdir_rdt_prepare(). Adding rdtgrp->closid to free_rmid() at this point would
> thus use an uninitialized value. I know this code is being moved in subsequent
> patches so it seems the patches may need to be reordered?
>
>> @@ -3266,7 +3266,7 @@ static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
>> static void mkdir_rdt_prepare_clean(struct rdtgroup *rgrp)
>> {
>> kernfs_remove(rgrp->kn);
>> - free_rmid(rgrp->mon.rmid);
>> + free_rmid(rgrp->closid, rgrp->mon.rmid);
>> rdtgroup_remove(rgrp);
>> }
>>
>
> Related issue to above. Looking at how mkdir_rdt_prepare_clean() is called, right
> after closid is freed, this seems to be use-after-free? Another motivation to
> re-order the patches?
It all washes out in the end, and nothing depends on this value until the MPAM support is
merged.
I'll take a look at how invasive it is to re-order the series.
Thanks,
James
* Re: [PATCH v5 01/24] x86/resctrl: Track the closid with the rmid
2023-07-28 16:42 ` [PATCH v5 01/24] x86/resctrl: Track the closid with the rmid James Morse
2023-08-09 22:32 ` Reinette Chatre
@ 2023-08-15 0:09 ` Fenghua Yu
From: Fenghua Yu @ 2023-08-15 0:09 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Reinette Chatre, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi, James,
On 7/28/23 09:42, James Morse wrote:
> x86's RMIDs are independent of the CLOSID. An RMID can be allocated,
> used and freed without considering the CLOSID.
>
> MPAM's equivalent feature is PMG, which is not an independent number;
> it extends the CLOSID/PARTID space. For MPAM, only PMG-bits worth of
> 'RMID' can be allocated for a single CLOSID.
> i.e. if there is 1 bit of PMG space, then each CLOSID can have two
> monitor groups.
>
> To allow resctrl to disambiguate RMID values for different CLOSIDs,
> everything in resctrl that keeps an RMID value needs to know the CLOSID
> too. The CLOSID will always be ignored on x86.
>
> Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
> Reviewed-by: Xin Hao <xhao@linux.alibaba.com>
> Signed-off-by: James Morse <james.morse@arm.com>
>
> ---
> Is there a better term for 'the unique identifier for a monitor group'?
> Using RMID for that here may be confusing...
>
> Changes since v1:
> * Added comment in struct rmid_entry
>
> Changes since v2:
> * Moved X86_RESCTRL_BAD_CLOSID from a subsequent patch
>
> Changes since v3:
> * Renamed X86_RESCTRL_BAD_CLOSID to EMPTY
> * Clarified a few comments and kernel-doc
> ---
> arch/x86/include/asm/resctrl.h | 7 +++
> arch/x86/kernel/cpu/resctrl/internal.h | 2 +-
> arch/x86/kernel/cpu/resctrl/monitor.c | 65 ++++++++++++++---------
> arch/x86/kernel/cpu/resctrl/pseudo_lock.c | 4 +-
> arch/x86/kernel/cpu/resctrl/rdtgroup.c | 12 ++---
> include/linux/resctrl.h | 12 ++++-
> 6 files changed, 65 insertions(+), 37 deletions(-)
>
> diff --git a/arch/x86/include/asm/resctrl.h b/arch/x86/include/asm/resctrl.h
> index 255a78d9d906..29999f52b461 100644
> --- a/arch/x86/include/asm/resctrl.h
> +++ b/arch/x86/include/asm/resctrl.h
> @@ -7,6 +7,13 @@
> #include <linux/sched.h>
> #include <linux/jump_label.h>
>
> +/*
> + * This value can never be a valid CLOSID, and is used when mapping a
> + * (closid, rmid) pair to an index and back. On x86 only the RMID is
> + * needed.
> + */
This value is not defined by x86 hardware (nor by Arm hardware, I guess),
so I would add something to the comment like "It's a software-defined value."
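i.e. something like:

/*
 * This value can never be a valid CLOSID, and is used when mapping a
 * (closid, rmid) pair to an index and back. On x86 only the RMID is
 * needed. The index is a software-defined value.
 */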
> +#define X86_RESCTRL_EMPTY_CLOSID ((u32)~0)
> +
> /**
> * struct resctrl_pqr_state - State cache for the PQR MSR
> * @cur_rmid: The cached Resource Monitoring ID
> diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
> index 85ceaf9a31ac..f2da908bb079 100644
> --- a/arch/x86/kernel/cpu/resctrl/internal.h
> +++ b/arch/x86/kernel/cpu/resctrl/internal.h
> @@ -535,7 +535,7 @@ struct rdt_domain *get_domain_from_cpu(int cpu, struct rdt_resource *r);
> int closids_supported(void);
> void closid_free(int closid);
> int alloc_rmid(void);
> -void free_rmid(u32 rmid);
> +void free_rmid(u32 closid, u32 rmid);
> int rdt_get_mon_l3_config(struct rdt_resource *r);
> bool __init rdt_cpu_has(int flag);
> void mon_event_count(void *info);
> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
> index ded1fc7cb7cb..fa66029de41c 100644
> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
> @@ -25,6 +25,12 @@
> #include "internal.h"
>
> struct rmid_entry {
> + /*
> + * Some architectures' resctrl_arch_rmid_read() needs the CLOSID value
> + * in order to access the correct monitor. This field provides the
> + * value to list walkers like __check_limbo(). On x86 this is ignored.
> + */
> + u32 closid;
> u32 rmid;
> int busy;
> struct list_head list;
> @@ -136,7 +142,7 @@ static inline u64 get_corrected_mbm_count(u32 rmid, unsigned long val)
> return val;
> }
>
> -static inline struct rmid_entry *__rmid_entry(u32 rmid)
> +static inline struct rmid_entry *__rmid_entry(u32 closid, u32 rmid)
> {
> struct rmid_entry *entry;
>
> @@ -190,7 +196,8 @@ static struct arch_mbm_state *get_arch_mbm_state(struct rdt_hw_domain *hw_dom,
> }
>
> void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_domain *d,
> - u32 rmid, enum resctrl_event_id eventid)
> + u32 closid, u32 rmid,
"closid" is not used on x86. Usually it's named as "u32 unused" on x86
so that it's clear for others that the parameter won't be used in this
fucntion.
> + enum resctrl_event_id eventid)
> {
> struct rdt_hw_domain *hw_dom = resctrl_to_arch_dom(d);
> struct arch_mbm_state *am;
> @@ -230,7 +237,8 @@ static u64 mbm_overflow_count(u64 prev_msr, u64 cur_msr, unsigned int width)
> }
>
> int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
> - u32 rmid, enum resctrl_event_id eventid, u64 *val)
> + u32 closid, u32 rmid, enum resctrl_event_id eventid,
Ditto.
I would think all "closid" parameters in all related arch functions on
x86 should be renamed to "unused" so that their meaning is clear.
Otherwise, people may get confused about why "closid" is involved when
monitoring.
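e.g. a sketch of what I mean:

int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
			   u32 unused, u32 rmid, enum resctrl_event_id eventid,
			   u64 *val);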
> + u64 *val)
> {
> struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
> struct rdt_hw_domain *hw_dom = resctrl_to_arch_dom(d);
> @@ -285,9 +293,9 @@ void __check_limbo(struct rdt_domain *d, bool force_free)
> if (nrmid >= r->num_rmid)
> break;
>
> - entry = __rmid_entry(nrmid);
> + entry = __rmid_entry(X86_RESCTRL_EMPTY_CLOSID, nrmid);// temporary
>
> - if (resctrl_arch_rmid_read(r, d, entry->rmid,
> + if (resctrl_arch_rmid_read(r, d, entry->closid, entry->rmid,
> QOS_L3_OCCUP_EVENT_ID, &val)) {
> rmid_dirty = true;
> } else {
> @@ -342,7 +350,8 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
> cpu = get_cpu();
> list_for_each_entry(d, &r->domains, list) {
> if (cpumask_test_cpu(cpu, &d->cpu_mask)) {
> - err = resctrl_arch_rmid_read(r, d, entry->rmid,
> + err = resctrl_arch_rmid_read(r, d, entry->closid,
> + entry->rmid,
> QOS_L3_OCCUP_EVENT_ID,
> &val);
> if (err || val <= resctrl_rmid_realloc_threshold)
> @@ -366,7 +375,7 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
> list_add_tail(&entry->list, &rmid_free_lru);
> }
>
> -void free_rmid(u32 rmid)
> +void free_rmid(u32 closid, u32 rmid)
> {
> struct rmid_entry *entry;
>
> @@ -375,7 +384,7 @@ void free_rmid(u32 rmid)
>
> lockdep_assert_held(&rdtgroup_mutex);
>
> - entry = __rmid_entry(rmid);
> + entry = __rmid_entry(closid, rmid);
>
> if (is_llc_occupancy_enabled())
> add_rmid_to_limbo(entry);
> @@ -383,8 +392,8 @@ void free_rmid(u32 rmid)
> list_add_tail(&entry->list, &rmid_free_lru);
> }
>
> -static struct mbm_state *get_mbm_state(struct rdt_domain *d, u32 rmid,
> - enum resctrl_event_id evtid)
> +static struct mbm_state *get_mbm_state(struct rdt_domain *d, u32 closid,
> + u32 rmid, enum resctrl_event_id evtid)
> {
> switch (evtid) {
> case QOS_L3_MBM_TOTAL_EVENT_ID:
> @@ -396,20 +405,21 @@ static struct mbm_state *get_mbm_state(struct rdt_domain *d, u32 rmid,
> }
> }
>
> -static int __mon_event_count(u32 rmid, struct rmid_read *rr)
> +static int __mon_event_count(u32 closid, u32 rmid, struct rmid_read *rr)
> {
> struct mbm_state *m;
> u64 tval = 0;
>
> if (rr->first) {
> - resctrl_arch_reset_rmid(rr->r, rr->d, rmid, rr->evtid);
> - m = get_mbm_state(rr->d, rmid, rr->evtid);
> + resctrl_arch_reset_rmid(rr->r, rr->d, closid, rmid, rr->evtid);
> + m = get_mbm_state(rr->d, closid, rmid, rr->evtid);
> if (m)
> memset(m, 0, sizeof(struct mbm_state));
> return 0;
> }
>
> - rr->err = resctrl_arch_rmid_read(rr->r, rr->d, rmid, rr->evtid, &tval);
> + rr->err = resctrl_arch_rmid_read(rr->r, rr->d, closid, rmid, rr->evtid,
> + &tval);
> if (rr->err)
> return rr->err;
>
> @@ -429,7 +439,7 @@ static int __mon_event_count(u32 rmid, struct rmid_read *rr)
> * __mon_event_count() is compared with the chunks value from the previous
> * invocation. This must be called once per second to maintain values in MBps.
> */
> -static void mbm_bw_count(u32 rmid, struct rmid_read *rr)
> +static void mbm_bw_count(u32 closid, u32 rmid, struct rmid_read *rr)
> {
> struct mbm_state *m = &rr->d->mbm_local[rmid];
> u64 cur_bw, bytes, cur_bytes;
> @@ -459,7 +469,7 @@ void mon_event_count(void *info)
>
> rdtgrp = rr->rgrp;
>
> - ret = __mon_event_count(rdtgrp->mon.rmid, rr);
> + ret = __mon_event_count(rdtgrp->closid, rdtgrp->mon.rmid, rr);
>
> /*
> * For Ctrl groups read data from child monitor groups and
> @@ -470,7 +480,8 @@ void mon_event_count(void *info)
>
> if (rdtgrp->type == RDTCTRL_GROUP) {
> list_for_each_entry(entry, head, mon.crdtgrp_list) {
> - if (__mon_event_count(entry->mon.rmid, rr) == 0)
> + if (__mon_event_count(rdtgrp->closid, entry->mon.rmid,
> + rr) == 0)
> ret = 0;
> }
> }
> @@ -600,7 +611,8 @@ static void update_mba_bw(struct rdtgroup *rgrp, struct rdt_domain *dom_mbm)
> }
> }
>
> -static void mbm_update(struct rdt_resource *r, struct rdt_domain *d, int rmid)
> +static void mbm_update(struct rdt_resource *r, struct rdt_domain *d,
> + u32 closid, u32 rmid)
> {
> struct rmid_read rr;
>
> @@ -615,12 +627,12 @@ static void mbm_update(struct rdt_resource *r, struct rdt_domain *d, int rmid)
> if (is_mbm_total_enabled()) {
> rr.evtid = QOS_L3_MBM_TOTAL_EVENT_ID;
> rr.val = 0;
> - __mon_event_count(rmid, &rr);
> + __mon_event_count(closid, rmid, &rr);
> }
> if (is_mbm_local_enabled()) {
> rr.evtid = QOS_L3_MBM_LOCAL_EVENT_ID;
> rr.val = 0;
> - __mon_event_count(rmid, &rr);
> + __mon_event_count(closid, rmid, &rr);
>
> /*
> * Call the MBA software controller only for the
> @@ -628,7 +640,7 @@ static void mbm_update(struct rdt_resource *r, struct rdt_domain *d, int rmid)
> * the software controller explicitly.
> */
> if (is_mba_sc(NULL))
> - mbm_bw_count(rmid, &rr);
> + mbm_bw_count(closid, rmid, &rr);
> }
> }
>
> @@ -685,11 +697,11 @@ void mbm_handle_overflow(struct work_struct *work)
> d = container_of(work, struct rdt_domain, mbm_over.work);
>
> list_for_each_entry(prgrp, &rdt_all_groups, rdtgroup_list) {
> - mbm_update(r, d, prgrp->mon.rmid);
> + mbm_update(r, d, prgrp->closid, prgrp->mon.rmid);
>
> head = &prgrp->mon.crdtgrp_list;
> list_for_each_entry(crgrp, head, mon.crdtgrp_list)
> - mbm_update(r, d, crgrp->mon.rmid);
> + mbm_update(r, d, crgrp->closid, crgrp->mon.rmid);
>
> if (is_mba_sc(NULL))
> update_mba_bw(prgrp, d);
> @@ -732,10 +744,11 @@ static int dom_data_init(struct rdt_resource *r)
> }
>
> /*
> - * RMID 0 is special and is always allocated. It's used for all
> - * tasks that are not monitored.
> + * CLOSID 0 and RMID 0 are special and are always allocated. These are
> + * used for the rdtgroup_default control group, which will be set up later.
> + * See rdtgroup_setup_root().
> */
> - entry = __rmid_entry(0);
> + entry = __rmid_entry(0, 0);
> list_del(&entry->list);
>
> return 0;
> diff --git a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
> index 458cb7419502..aeadaeb5df9a 100644
> --- a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
> +++ b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
> @@ -738,7 +738,7 @@ int rdtgroup_locksetup_enter(struct rdtgroup *rdtgrp)
> * anymore when this group would be used for pseudo-locking. This
> * is safe to call on platforms not capable of monitoring.
> */
> - free_rmid(rdtgrp->mon.rmid);
> + free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
>
> ret = 0;
> goto out;
> @@ -773,7 +773,7 @@ int rdtgroup_locksetup_exit(struct rdtgroup *rdtgrp)
>
> ret = rdtgroup_locksetup_user_restore(rdtgrp);
> if (ret) {
> - free_rmid(rdtgrp->mon.rmid);
> + free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
> return ret;
> }
>
> diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> index 725344048f85..f7fda4fc2c9e 100644
> --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> @@ -2714,7 +2714,7 @@ static void free_all_child_rdtgrp(struct rdtgroup *rdtgrp)
>
> head = &rdtgrp->mon.crdtgrp_list;
> list_for_each_entry_safe(sentry, stmp, head, mon.crdtgrp_list) {
> - free_rmid(sentry->mon.rmid);
> + free_rmid(sentry->closid, sentry->mon.rmid);
> list_del(&sentry->mon.crdtgrp_list);
>
> if (atomic_read(&sentry->waitcount) != 0)
> @@ -2754,7 +2754,7 @@ static void rmdir_all_sub(void)
> cpumask_or(&rdtgroup_default.cpu_mask,
> &rdtgroup_default.cpu_mask, &rdtgrp->cpu_mask);
>
> - free_rmid(rdtgrp->mon.rmid);
> + free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
>
> kernfs_remove(rdtgrp->kn);
> list_del(&rdtgrp->rdtgroup_list);
> @@ -3252,7 +3252,7 @@ static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
> return 0;
>
> out_idfree:
> - free_rmid(rdtgrp->mon.rmid);
> + free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
> out_destroy:
> kernfs_put(rdtgrp->kn);
> kernfs_remove(rdtgrp->kn);
> @@ -3266,7 +3266,7 @@ static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
> static void mkdir_rdt_prepare_clean(struct rdtgroup *rgrp)
> {
> kernfs_remove(rgrp->kn);
> - free_rmid(rgrp->mon.rmid);
> + free_rmid(rgrp->closid, rgrp->mon.rmid);
> rdtgroup_remove(rgrp);
> }
>
> @@ -3415,7 +3415,7 @@ static int rdtgroup_rmdir_mon(struct rdtgroup *rdtgrp, cpumask_var_t tmpmask)
> update_closid_rmid(tmpmask, NULL);
>
> rdtgrp->flags = RDT_DELETED;
> - free_rmid(rdtgrp->mon.rmid);
> + free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
>
> /*
> * Remove the rdtgrp from the parent ctrl_mon group's list
> @@ -3461,8 +3461,8 @@ static int rdtgroup_rmdir_ctrl(struct rdtgroup *rdtgrp, cpumask_var_t tmpmask)
> cpumask_or(tmpmask, tmpmask, &rdtgrp->cpu_mask);
> update_closid_rmid(tmpmask, NULL);
>
> + free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
> closid_free(rdtgrp->closid);
> - free_rmid(rdtgrp->mon.rmid);
>
> rdtgroup_ctrl_remove(rdtgrp);
>
> diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
> index 8334eeacfec5..c413bb11d336 100644
> --- a/include/linux/resctrl.h
> +++ b/include/linux/resctrl.h
> @@ -225,6 +225,9 @@ void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d);
> * for this resource and domain.
> * @r: resource that the counter should be read from.
> * @d: domain that the counter should be read from.
> + * @closid: closid that matches the rmid. Depending on the architecture, the
> + * counter may match traffic of both @closid and @rmid, or @rmid
> + * only.
> * @rmid: rmid of the counter to read.
> * @eventid: eventid to read, e.g. L3 occupancy.
> * @val: result of the counter read in bytes.
> @@ -235,20 +238,25 @@ void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d);
> * 0 on success, or -EIO, -EINVAL etc on error.
> */
> int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
> - u32 rmid, enum resctrl_event_id eventid, u64 *val);
> + u32 closid, u32 rmid, enum resctrl_event_id eventid,
> + u64 *val);
> +
>
> /**
> * resctrl_arch_reset_rmid() - Reset any private state associated with rmid
> * and eventid.
> * @r: The domain's resource.
> * @d: The rmid's domain.
> + * @closid: closid that matches the rmid. Depending on the architecture, the
> + * counter may match traffic of both @closid and @rmid, or @rmid only.
> * @rmid: The rmid whose counter values should be reset.
> * @eventid: The eventid whose counter values should be reset.
> *
> * This can be called from any CPU.
> */
> void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_domain *d,
> - u32 rmid, enum resctrl_event_id eventid);
> + u32 closid, u32 rmid,
> + enum resctrl_event_id eventid);
>
> /**
> * resctrl_arch_reset_rmid_all() - Reset all private state associated with
Thanks.
-Fenghua
* [PATCH v5 02/24] x86/resctrl: Access per-rmid structures by index
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
2023-07-28 16:42 ` [PATCH v5 01/24] x86/resctrl: Track the closid with the rmid James Morse
@ 2023-07-28 16:42 ` James Morse
2023-08-09 22:32 ` Reinette Chatre
2023-07-28 16:42 ` [PATCH v5 03/24] x86/resctrl: Create helper for RMID allocation and mondata dir creation James Morse
From: James Morse @ 2023-07-28 16:42 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
x86 systems identify traffic using the CLOSID and RMID. The CLOSID is
used to look up the control policy, and the RMID is used for monitoring.
For x86 these are independent numbers.

Arm's MPAM has the equivalent features PARTID and PMG, where the PARTID
is used to look up the control policy. The PMG, in contrast, is a small
number of bits that subdivide the PARTID when monitoring. The
cache-occupancy monitors require the PARTID to be specified when monitoring.

This means MPAM's PMG field is not unique. There are multiple PMG-0
values, one per allocated CLOSID/PARTID. If PMG is treated as equivalent
to RMID, it cannot be allocated as an independent number. Bitmaps like
rmid_busy_llc need to be sized by the number of unique entries for this
resource.
Treat the combined CLOSID and RMID as an index, and provide architecture
helpers to pack and unpack an index. This makes the MPAM values unique.
The domain's rmid_busy_llc and rmid_ptrs[] are then sized by index, as
are domain mbm_local[] and mbm_total[].
x86 can ignore the CLOSID field when packing and unpacking an index, and
report as many indexes as RMID.
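As an illustrative sketch (invented names, assuming 2 bits of PMG space;
the real x86 helpers below simply return the RMID and ignore the CLOSID),
an MPAM-like architecture could pack and unpack an index like this:

static inline u32 example_rmid_idx_encode(u32 closid, u32 rmid)
{
	/* e.g. closid 2, pmg 1 -> index (2 << 2) | 1 == 9 */
	return (closid << 2) | (rmid & 0x3);
}

static inline void example_rmid_idx_decode(u32 idx, u32 *closid, u32 *rmid)
{
	*closid = idx >> 2;
	*rmid = idx & 0x3;
}

Arrays such as mbm_local[] then need one entry per index, i.e.
num_closid * 4 entries in this example, rather than one per RMID value.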
Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
Signed-off-by: James Morse <james.morse@arm.com>
---
Changes since v1:
* Added X86_BAD_CLOSID macro to make it clear what this value means
* Added second WARN_ON() for closid checking, and made both _ONCE()
Changes since v2:
* Added RESCTRL_RESERVED_CLOSID
* Removed a newline
* Rephrased some comments
* Renamed a variable to 'ignored'
* Moved X86_RESCTRL_BAD_CLOSID to a previous patch
Changes since v3:
* Changed a variable name
* Fixed various typos
Changes since v4:
* Removed resource parameter from has_busy_rmid()
* Rewrote commit message
---
arch/x86/include/asm/resctrl.h | 17 +++++
arch/x86/kernel/cpu/resctrl/core.c | 4 +-
arch/x86/kernel/cpu/resctrl/internal.h | 3 +-
arch/x86/kernel/cpu/resctrl/monitor.c | 92 +++++++++++++++++---------
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 9 +--
include/linux/resctrl.h | 4 ++
6 files changed, 92 insertions(+), 37 deletions(-)
diff --git a/arch/x86/include/asm/resctrl.h b/arch/x86/include/asm/resctrl.h
index 29999f52b461..9510c23db62d 100644
--- a/arch/x86/include/asm/resctrl.h
+++ b/arch/x86/include/asm/resctrl.h
@@ -101,6 +101,23 @@ static inline void resctrl_sched_in(struct task_struct *tsk)
__resctrl_sched_in(tsk);
}
+static inline u32 resctrl_arch_system_num_rmid_idx(void)
+{
+ /* RMID are independent numbers for x86. num_rmid_idx == num_rmid */
+ return boot_cpu_data.x86_cache_max_rmid + 1;
+}
+
+static inline void resctrl_arch_rmid_idx_decode(u32 idx, u32 *closid, u32 *rmid)
+{
+ *rmid = idx;
+ *closid = X86_RESCTRL_EMPTY_CLOSID;
+}
+
+static inline u32 resctrl_arch_rmid_idx_encode(u32 ignored, u32 rmid)
+{
+ return rmid;
+}
+
void resctrl_cpu_detect(struct cpuinfo_x86 *c);
#else
diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index 030d3b409768..8dfede01b0c9 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -585,7 +585,7 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
mbm_setup_overflow_handler(d, 0);
}
if (is_llc_occupancy_enabled() && cpu == d->cqm_work_cpu &&
- has_busy_rmid(r, d)) {
+ has_busy_rmid(d)) {
cancel_delayed_work(&d->cqm_limbo);
cqm_setup_limbo_handler(d, 0);
}
@@ -600,7 +600,7 @@ static void clear_closid_rmid(int cpu)
state->default_rmid = 0;
state->cur_closid = 0;
state->cur_rmid = 0;
- wrmsr(MSR_IA32_PQR_ASSOC, 0, 0);
+ wrmsr(MSR_IA32_PQR_ASSOC, 0, RESCTRL_RESERVED_CLOSID);
}
static int resctrl_online_cpu(unsigned int cpu)
diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index f2da908bb079..b48715bb8762 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -7,6 +7,7 @@
#include <linux/kernfs.h>
#include <linux/fs_context.h>
#include <linux/jump_label.h>
+#include <asm/resctrl.h>
#define L3_QOS_CDP_ENABLE 0x01ULL
@@ -550,7 +551,7 @@ void __init intel_rdt_mbm_apply_quirk(void);
bool is_mba_sc(struct rdt_resource *r);
void cqm_setup_limbo_handler(struct rdt_domain *dom, unsigned long delay_ms);
void cqm_handle_limbo(struct work_struct *work);
-bool has_busy_rmid(struct rdt_resource *r, struct rdt_domain *d);
+bool has_busy_rmid(struct rdt_domain *d);
void __check_limbo(struct rdt_domain *d, bool force_free);
void rdt_domain_reconfigure_cdp(struct rdt_resource *r);
void __init thread_throttle_mode_init(void);
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index fa66029de41c..bd234b66dddf 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -142,12 +142,29 @@ static inline u64 get_corrected_mbm_count(u32 rmid, unsigned long val)
return val;
}
-static inline struct rmid_entry *__rmid_entry(u32 closid, u32 rmid)
+/*
+ * x86 and arm64 differ in their handling of monitoring.
+ * x86's RMID is an independent number; there is only one source of traffic
+ * with an RMID value of '1'.
+ * arm64's PMG extends the PARTID/CLOSID space; there are multiple sources of
+ * traffic with a PMG value of '1', one for each CLOSID, meaning the RMID
+ * value is no longer unique.
+ * To account for this, resctrl uses an index. On x86 this is just the RMID,
+ * on arm64 it encodes the CLOSID and RMID. This gives a unique number.
+ *
+ * The domain's rmid_busy_llc and rmid_ptrs[] are sized by index. The arch code
+ * must accept an attempt to read every index.
+ */
+static inline struct rmid_entry *__rmid_entry(u32 idx)
{
struct rmid_entry *entry;
+ u32 closid, rmid;
- entry = &rmid_ptrs[rmid];
- WARN_ON(entry->rmid != rmid);
+ entry = &rmid_ptrs[idx];
+ resctrl_arch_rmid_idx_decode(idx, &closid, &rmid);
+
+ WARN_ON_ONCE(entry->closid != closid);
+ WARN_ON_ONCE(entry->rmid != rmid);
return entry;
}
@@ -277,8 +294,9 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
void __check_limbo(struct rdt_domain *d, bool force_free)
{
struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
+ u32 idx_limit = resctrl_arch_system_num_rmid_idx();
struct rmid_entry *entry;
- u32 crmid = 1, nrmid;
+ u32 idx, cur_idx = 1;
bool rmid_dirty;
u64 val = 0;
@@ -289,12 +307,11 @@ void __check_limbo(struct rdt_domain *d, bool force_free)
* RMID and move it to the free list when the counter reaches 0.
*/
for (;;) {
- nrmid = find_next_bit(d->rmid_busy_llc, r->num_rmid, crmid);
- if (nrmid >= r->num_rmid)
+ idx = find_next_bit(d->rmid_busy_llc, idx_limit, cur_idx);
+ if (idx >= idx_limit)
break;
- entry = __rmid_entry(X86_RESCTRL_EMPTY_CLOSID, nrmid);// temporary
-
+ entry = __rmid_entry(idx);
if (resctrl_arch_rmid_read(r, d, entry->closid, entry->rmid,
QOS_L3_OCCUP_EVENT_ID, &val)) {
rmid_dirty = true;
@@ -303,19 +320,21 @@ void __check_limbo(struct rdt_domain *d, bool force_free)
}
if (force_free || !rmid_dirty) {
- clear_bit(entry->rmid, d->rmid_busy_llc);
+ clear_bit(idx, d->rmid_busy_llc);
if (!--entry->busy) {
rmid_limbo_count--;
list_add_tail(&entry->list, &rmid_free_lru);
}
}
- crmid = nrmid + 1;
+ cur_idx = idx + 1;
}
}
-bool has_busy_rmid(struct rdt_resource *r, struct rdt_domain *d)
+bool has_busy_rmid(struct rdt_domain *d)
{
- return find_first_bit(d->rmid_busy_llc, r->num_rmid) != r->num_rmid;
+ u32 idx_limit = resctrl_arch_system_num_rmid_idx();
+
+ return find_first_bit(d->rmid_busy_llc, idx_limit) != idx_limit;
}
/*
@@ -345,6 +364,9 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
struct rdt_domain *d;
int cpu, err;
u64 val = 0;
+ u32 idx;
+
+ idx = resctrl_arch_rmid_idx_encode(entry->closid, entry->rmid);
entry->busy = 0;
cpu = get_cpu();
@@ -362,9 +384,9 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
* For the first limbo RMID in the domain,
* set up the limbo worker.
*/
- if (!has_busy_rmid(r, d))
+ if (!has_busy_rmid(d))
cqm_setup_limbo_handler(d, CQM_LIMBOCHECK_INTERVAL);
- set_bit(entry->rmid, d->rmid_busy_llc);
+ set_bit(idx, d->rmid_busy_llc);
entry->busy++;
}
put_cpu();
@@ -377,14 +399,17 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
void free_rmid(u32 closid, u32 rmid)
{
+ u32 idx = resctrl_arch_rmid_idx_encode(closid, rmid);
struct rmid_entry *entry;
- if (!rmid)
- return;
-
lockdep_assert_held(&rdtgroup_mutex);
- entry = __rmid_entry(closid, rmid);
+ /* do not allow the default rmid to be free'd */
+ if (idx == resctrl_arch_rmid_idx_encode(RESCTRL_RESERVED_CLOSID,
+ RESCTRL_RESERVED_RMID))
+ return;
+
+ entry = __rmid_entry(idx);
if (is_llc_occupancy_enabled())
add_rmid_to_limbo(entry);
@@ -395,11 +420,13 @@ void free_rmid(u32 closid, u32 rmid)
static struct mbm_state *get_mbm_state(struct rdt_domain *d, u32 closid,
u32 rmid, enum resctrl_event_id evtid)
{
+ u32 idx = resctrl_arch_rmid_idx_encode(closid, rmid);
+
switch (evtid) {
case QOS_L3_MBM_TOTAL_EVENT_ID:
- return &d->mbm_total[rmid];
+ return &d->mbm_total[idx];
case QOS_L3_MBM_LOCAL_EVENT_ID:
- return &d->mbm_local[rmid];
+ return &d->mbm_local[idx];
default:
return NULL;
}
@@ -441,7 +468,8 @@ static int __mon_event_count(u32 closid, u32 rmid, struct rmid_read *rr)
*/
static void mbm_bw_count(u32 closid, u32 rmid, struct rmid_read *rr)
{
- struct mbm_state *m = &rr->d->mbm_local[rmid];
+ u32 idx = resctrl_arch_rmid_idx_encode(closid, rmid);
+ struct mbm_state *m = &rr->d->mbm_local[idx];
u64 cur_bw, bytes, cur_bytes;
cur_bytes = rr->val;
@@ -531,7 +559,7 @@ static void update_mba_bw(struct rdtgroup *rgrp, struct rdt_domain *dom_mbm)
{
u32 closid, rmid, cur_msr_val, new_msr_val;
struct mbm_state *pmbm_data, *cmbm_data;
- u32 cur_bw, delta_bw, user_bw;
+ u32 cur_bw, delta_bw, user_bw, idx;
struct rdt_resource *r_mba;
struct rdt_domain *dom_mba;
struct list_head *head;
@@ -544,7 +572,8 @@ static void update_mba_bw(struct rdtgroup *rgrp, struct rdt_domain *dom_mbm)
closid = rgrp->closid;
rmid = rgrp->mon.rmid;
- pmbm_data = &dom_mbm->mbm_local[rmid];
+ idx = resctrl_arch_rmid_idx_encode(closid, rmid);
+ pmbm_data = &dom_mbm->mbm_local[idx];
dom_mba = get_domain_from_cpu(smp_processor_id(), r_mba);
if (!dom_mba) {
@@ -662,7 +691,7 @@ void cqm_handle_limbo(struct work_struct *work)
__check_limbo(d, false);
- if (has_busy_rmid(r, d))
+ if (has_busy_rmid(d))
schedule_delayed_work_on(cpu, &d->cqm_limbo, delay);
mutex_unlock(&rdtgroup_mutex);
@@ -727,19 +756,20 @@ void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms)
static int dom_data_init(struct rdt_resource *r)
{
+ u32 idx_limit = resctrl_arch_system_num_rmid_idx();
struct rmid_entry *entry = NULL;
- int i, nr_rmids;
+ u32 idx;
+ int i;
- nr_rmids = r->num_rmid;
- rmid_ptrs = kcalloc(nr_rmids, sizeof(struct rmid_entry), GFP_KERNEL);
+ rmid_ptrs = kcalloc(idx_limit, sizeof(struct rmid_entry), GFP_KERNEL);
if (!rmid_ptrs)
return -ENOMEM;
- for (i = 0; i < nr_rmids; i++) {
+ for (i = 0; i < idx_limit; i++) {
entry = &rmid_ptrs[i];
INIT_LIST_HEAD(&entry->list);
- entry->rmid = i;
+ resctrl_arch_rmid_idx_decode(i, &entry->closid, &entry->rmid);
list_add_tail(&entry->list, &rmid_free_lru);
}
@@ -748,7 +778,9 @@ static int dom_data_init(struct rdt_resource *r)
* used for rdtgroup_default control group, which will be setup later.
* See rdtgroup_setup_root().
*/
- entry = __rmid_entry(0, 0);
+ idx = resctrl_arch_rmid_idx_encode(RESCTRL_RESERVED_CLOSID,
+ RESCTRL_RESERVED_RMID);
+ entry = __rmid_entry(idx);
list_del(&entry->list);
return 0;
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index f7fda4fc2c9e..6b7190f9cff6 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -3727,7 +3727,7 @@ void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d)
if (is_mbm_enabled())
cancel_delayed_work(&d->mbm_over);
- if (is_llc_occupancy_enabled() && has_busy_rmid(r, d)) {
+ if (is_llc_occupancy_enabled() && has_busy_rmid(d)) {
/*
* When a package is going down, forcefully
* decrement rmid->ebusy. There is no way to know
@@ -3745,16 +3745,17 @@ void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d)
static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_domain *d)
{
+ u32 idx_limit = resctrl_arch_system_num_rmid_idx();
size_t tsize;
if (is_llc_occupancy_enabled()) {
- d->rmid_busy_llc = bitmap_zalloc(r->num_rmid, GFP_KERNEL);
+ d->rmid_busy_llc = bitmap_zalloc(idx_limit, GFP_KERNEL);
if (!d->rmid_busy_llc)
return -ENOMEM;
}
if (is_mbm_total_enabled()) {
tsize = sizeof(*d->mbm_total);
- d->mbm_total = kcalloc(r->num_rmid, tsize, GFP_KERNEL);
+ d->mbm_total = kcalloc(idx_limit, tsize, GFP_KERNEL);
if (!d->mbm_total) {
bitmap_free(d->rmid_busy_llc);
return -ENOMEM;
@@ -3762,7 +3763,7 @@ static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_domain *d)
}
if (is_mbm_local_enabled()) {
tsize = sizeof(*d->mbm_local);
- d->mbm_local = kcalloc(r->num_rmid, tsize, GFP_KERNEL);
+ d->mbm_local = kcalloc(idx_limit, tsize, GFP_KERNEL);
if (!d->mbm_local) {
bitmap_free(d->rmid_busy_llc);
kfree(d->mbm_total);
diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
index c413bb11d336..660752406174 100644
--- a/include/linux/resctrl.h
+++ b/include/linux/resctrl.h
@@ -6,6 +6,10 @@
#include <linux/list.h>
#include <linux/pid.h>
+/* CLOSID, RMID value used by the default control group */
+#define RESCTRL_RESERVED_CLOSID 0
+#define RESCTRL_RESERVED_RMID 0
+
#ifdef CONFIG_PROC_CPU_RESCTRL
int proc_resctrl_show(struct seq_file *m,
--
2.39.2
^ permalink raw reply related [flat|nested] 77+ messages in thread
* Re: [PATCH v5 02/24] x86/resctrl: Access per-rmid structures by index
2023-07-28 16:42 ` [PATCH v5 02/24] x86/resctrl: Access per-rmid structures by index James Morse
@ 2023-08-09 22:32 ` Reinette Chatre
2023-08-24 16:51 ` James Morse
0 siblings, 1 reply; 77+ messages in thread
From: Reinette Chatre @ 2023-08-09 22:32 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi James,
On 7/28/2023 9:42 AM, James Morse wrote:
> @@ -600,7 +600,7 @@ static void clear_closid_rmid(int cpu)
> state->default_rmid = 0;
> state->cur_closid = 0;
> state->cur_rmid = 0;
> - wrmsr(MSR_IA32_PQR_ASSOC, 0, 0);
> + wrmsr(MSR_IA32_PQR_ASSOC, 0, RESCTRL_RESERVED_CLOSID);
> }
Can the remaining "0" be replaced with RESCTRL_RESERVED_RMID?
...
> @@ -377,14 +399,17 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
>
> void free_rmid(u32 closid, u32 rmid)
> {
> + u32 idx = resctrl_arch_rmid_idx_encode(closid, rmid);
> struct rmid_entry *entry;
>
> - if (!rmid)
> - return;
> -
> lockdep_assert_held(&rdtgroup_mutex);
>
> - entry = __rmid_entry(closid, rmid);
> + /* do not allow the default rmid to be free'd */
> + if (idx == resctrl_arch_rmid_idx_encode(RESCTRL_RESERVED_CLOSID,
> + RESCTRL_RESERVED_RMID))
> + return;
> +
Why is this encoding necessary? Can the provided function parameters
not be tested directly against RESCTRL_RESERVED_CLOSID and
RESCTRL_RESERVED_RMID ?
> + entry = __rmid_entry(idx);
>
> if (is_llc_occupancy_enabled())
> add_rmid_to_limbo(entry);
Reinette
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: [PATCH v5 02/24] x86/resctrl: Access per-rmid structures by index
2023-08-09 22:32 ` Reinette Chatre
@ 2023-08-24 16:51 ` James Morse
2023-08-25 0:29 ` Reinette Chatre
0 siblings, 1 reply; 77+ messages in thread
From: James Morse @ 2023-08-24 16:51 UTC (permalink / raw)
To: Reinette Chatre, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi Reinette,
On 09/08/2023 23:32, Reinette Chatre wrote:
> On 7/28/2023 9:42 AM, James Morse wrote:
>> @@ -377,14 +399,17 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
>>
>> void free_rmid(u32 closid, u32 rmid)
>> {
>> + u32 idx = resctrl_arch_rmid_idx_encode(closid, rmid);
>> struct rmid_entry *entry;
>>
>> - if (!rmid)
>> - return;
>> -
>> lockdep_assert_held(&rdtgroup_mutex);
>>
>> - entry = __rmid_entry(closid, rmid);
>> + /* do not allow the default rmid to be free'd */
>> + if (idx == resctrl_arch_rmid_idx_encode(RESCTRL_RESERVED_CLOSID,
>> + RESCTRL_RESERVED_RMID))
>> + return;
>> +
> Why is this encoding necessary? Can the provided function parameters
> not be tested directly against RESCTRL_RESERVED_CLOSID and
> RESCTRL_RESERVED_RMID ?
Doing this by encoding means if the architecture code supplies a
resctrl_arch_rmid_idx_encode() that ignores the closid, this reduces down to:
| if (rmid == RESCTRL_RESERVED_RMID)
which is what the code did before. I'll add a comment:
| /*
| * Do not allow RESCTRL_RESERVED_RMID to be free'd. Comparing by index
| * allows architectures that ignore the closid parameter to avoid an
| * unnecessary check.
| */
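For reference, x86's version of the encode helper ignores the closid and
just returns the rmid, roughly (sketch; the real helper lives in
arch/x86/include/asm/resctrl.h):
| static inline u32 resctrl_arch_rmid_idx_encode(u32 ignored_closid, u32 rmid)
| {
| 	return rmid;
| }
With that, both sides of the comparison collapse to the rmid, and the
compiler folds the whole check down to the old one.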
Thanks,
James
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: [PATCH v5 02/24] x86/resctrl: Access per-rmid structures by index
2023-08-24 16:51 ` James Morse
@ 2023-08-25 0:29 ` Reinette Chatre
0 siblings, 0 replies; 77+ messages in thread
From: Reinette Chatre @ 2023-08-25 0:29 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi James,
On 8/24/2023 9:51 AM, James Morse wrote:
> Hi Reinette,
>
> On 09/08/2023 23:32, Reinette Chatre wrote:
>> On 7/28/2023 9:42 AM, James Morse wrote:
>>> @@ -377,14 +399,17 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
>>>
>>> void free_rmid(u32 closid, u32 rmid)
>>> {
>>> + u32 idx = resctrl_arch_rmid_idx_encode(closid, rmid);
>>> struct rmid_entry *entry;
>>>
>>> - if (!rmid)
>>> - return;
>>> -
>>> lockdep_assert_held(&rdtgroup_mutex);
>>>
>>> - entry = __rmid_entry(closid, rmid);
>>> + /* do not allow the default rmid to be free'd */
>>> + if (idx == resctrl_arch_rmid_idx_encode(RESCTRL_RESERVED_CLOSID,
>>> + RESCTRL_RESERVED_RMID))
>>> + return;
>>> +
>
>> Why is this encoding necessary? Can the provided function parameters
>> not be tested directly against RESCTRL_RESERVED_CLOSID and
>> RESCTRL_RESERVED_RMID ?
>
> Doing this by encoding means if the architecture code supplies a
> resctrl_arch_rmid_idx_encode() that ignores the closid, this reduces down to:
> | if (rmid == RESCTRL_RESERVED_RMID)
>
> which is what the code did before. I'll add a comment:
> | /*
> | * Do not allow RESCTRL_RESERVED_RMID to be free'd. Comparing by index
> | * allows architectures that ignore the closid parameter to avoid an
> | * unnecessary check.
> | */
>
Sounds good. Thank you.
Reinette
^ permalink raw reply [flat|nested] 77+ messages in thread
* [PATCH v5 03/24] x86/resctrl: Create helper for RMID allocation and mondata dir creation
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
2023-07-28 16:42 ` [PATCH v5 01/24] x86/resctrl: Track the closid with the rmid James Morse
2023-07-28 16:42 ` [PATCH v5 02/24] x86/resctrl: Access per-rmid structures by index James Morse
@ 2023-07-28 16:42 ` James Morse
2023-08-09 22:32 ` Reinette Chatre
2023-07-28 16:42 ` [PATCH v5 04/24] x86/resctrl: Move rmid allocation out of mkdir_rdt_prepare() James Morse
` (22 subsequent siblings)
25 siblings, 1 reply; 77+ messages in thread
From: James Morse @ 2023-07-28 16:42 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
When monitoring is support, each monitor and control group is allocated
an RMID. For control groups, rdtgroup_mkdir_ctrl_mon() later goes on to
allocate the CLOSID.
MPAM's equivalent of RMID is not an independent number, so can't be
allocated until the CLOSID is known. An RMID allocation for one CLOSID
may fail, whereas another may succeed depending on how many monitor
groups a control group has.
The RMID allocation needs to move to be after the CLOSID has been
allocated.
Move the RMID allocation and mondata dir creation to a helper, which
makes a subsequent change easier to read.
Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
Signed-off-by: James Morse <james.morse@arm.com>
---
Changes since v4:
* Fixed typo in commit message, moved some words around.
---
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 42 +++++++++++++++++---------
1 file changed, 27 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 6b7190f9cff6..e7178bbbd30f 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -3165,6 +3165,30 @@ static int rdtgroup_init_alloc(struct rdtgroup *rdtgrp)
return ret;
}
+static int mkdir_rdt_prepare_rmid_alloc(struct rdtgroup *rdtgrp)
+{
+ int ret;
+
+ if (!rdt_mon_capable)
+ return 0;
+
+ ret = alloc_rmid();
+ if (ret < 0) {
+ rdt_last_cmd_puts("Out of RMIDs\n");
+ return ret;
+ }
+ rdtgrp->mon.rmid = ret;
+
+ ret = mkdir_mondata_all(rdtgrp->kn, rdtgrp, &rdtgrp->mon.mon_data_kn);
+ if (ret) {
+ rdt_last_cmd_puts("kernfs subdir error\n");
+ free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
+ return ret;
+ }
+
+ return 0;
+}
+
static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
const char *name, umode_t mode,
enum rdt_group_type rtype, struct rdtgroup **r)
@@ -3230,20 +3254,10 @@ static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
goto out_destroy;
}
- if (rdt_mon_capable) {
- ret = alloc_rmid();
- if (ret < 0) {
- rdt_last_cmd_puts("Out of RMIDs\n");
- goto out_destroy;
- }
- rdtgrp->mon.rmid = ret;
+ ret = mkdir_rdt_prepare_rmid_alloc(rdtgrp);
+ if (ret)
+ goto out_destroy;
- ret = mkdir_mondata_all(kn, rdtgrp, &rdtgrp->mon.mon_data_kn);
- if (ret) {
- rdt_last_cmd_puts("kernfs subdir error\n");
- goto out_idfree;
- }
- }
kernfs_activate(kn);
/*
@@ -3251,8 +3265,6 @@ static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
*/
return 0;
-out_idfree:
- free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
out_destroy:
kernfs_put(rdtgrp->kn);
kernfs_remove(rdtgrp->kn);
--
2.39.2
^ permalink raw reply related [flat|nested] 77+ messages in thread
* Re: [PATCH v5 03/24] x86/resctrl: Create helper for RMID allocation and mondata dir creation
2023-07-28 16:42 ` [PATCH v5 03/24] x86/resctrl: Create helper for RMID allocation and mondata dir creation James Morse
@ 2023-08-09 22:32 ` Reinette Chatre
0 siblings, 0 replies; 77+ messages in thread
From: Reinette Chatre @ 2023-08-09 22:32 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi James,
On 7/28/2023 9:42 AM, James Morse wrote:
> When monitoring is support, each monitor and control group is allocated
"When monitoring is support" -> "When monitoring is supported"
> an RMID. For control groups, rdtgroup_mkdir_ctrl_mon() later goes on to
> allocate the CLOSID.
>
> MPAM's equivalent of RMID is not an independent number, so can't be
> allocated until the CLOSID is known. An RMID allocation for one CLOSID
> may fail, whereas another may succeed depending on how many monitor
> groups a control group has.
>
> The RMID allocation needs to move to be after the CLOSID has been
> allocated.
>
> Move the RMID allocation and mondata dir creation to a helper, which
> makes a subsequent change easier to read.
Reinette
^ permalink raw reply [flat|nested] 77+ messages in thread
* [PATCH v5 04/24] x86/resctrl: Move rmid allocation out of mkdir_rdt_prepare()
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
` (2 preceding siblings ...)
2023-07-28 16:42 ` [PATCH v5 03/24] x86/resctrl: Create helper for RMID allocation and mondata dir creation James Morse
@ 2023-07-28 16:42 ` James Morse
2023-08-15 0:50 ` Fenghua Yu
2023-07-28 16:42 ` [PATCH v5 05/24] x86/resctrl: Allow RMID allocation to be scoped by CLOSID James Morse
` (21 subsequent siblings)
25 siblings, 1 reply; 77+ messages in thread
From: James Morse @ 2023-07-28 16:42 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
RMID are allocated for each monitor or control group directory, because
each of these needs its own RMID. For control groups,
rdtgroup_mkdir_ctrl_mon() later goes on to allocate the CLOSID.
MPAM's equivalent of RMID is not an independent number, so can't be
allocated until the CLOSID is known. An RMID allocation for one CLOSID
may fail, whereas another may succeed depending on how many monitor
groups a control group has.
The RMID allocation needs to move to be after the CLOSID has been
allocated.
Move the RMID allocation out of mkdir_rdt_prepare() to occur in its caller,
after the mkdir_rdt_prepare() call. This allows the RMID allocator to
know the CLOSID.
Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
Signed-off-by: James Morse <james.morse@arm.com>
---
Changes since v2:
* Moved kernfs_activate() later to preserve atomicity of files being visible
---
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 35 +++++++++++++++++++-------
1 file changed, 26 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index e7178bbbd30f..7c5cfb373d03 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -3189,6 +3189,12 @@ static int mkdir_rdt_prepare_rmid_alloc(struct rdtgroup *rdtgrp)
return 0;
}
+static void mkdir_rdt_prepare_rmid_free(struct rdtgroup *rgrp)
+{
+ if (rdt_mon_capable)
+ free_rmid(rgrp->closid, rgrp->mon.rmid);
+}
+
static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
const char *name, umode_t mode,
enum rdt_group_type rtype, struct rdtgroup **r)
@@ -3254,12 +3260,6 @@ static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
goto out_destroy;
}
- ret = mkdir_rdt_prepare_rmid_alloc(rdtgrp);
- if (ret)
- goto out_destroy;
-
- kernfs_activate(kn);
-
/*
* The caller unlocks the parent_kn upon success.
*/
@@ -3278,7 +3278,6 @@ static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
static void mkdir_rdt_prepare_clean(struct rdtgroup *rgrp)
{
kernfs_remove(rgrp->kn);
- free_rmid(rgrp->closid, rgrp->mon.rmid);
rdtgroup_remove(rgrp);
}
@@ -3300,12 +3299,21 @@ static int rdtgroup_mkdir_mon(struct kernfs_node *parent_kn,
prgrp = rdtgrp->mon.parent;
rdtgrp->closid = prgrp->closid;
+ ret = mkdir_rdt_prepare_rmid_alloc(rdtgrp);
+ if (ret) {
+ mkdir_rdt_prepare_clean(rdtgrp);
+ goto out_unlock;
+ }
+
+ kernfs_activate(rdtgrp->kn);
+
/*
* Add the rdtgrp to the list of rdtgrps the parent
* ctrl_mon group has to track.
*/
list_add_tail(&rdtgrp->mon.crdtgrp_list, &prgrp->mon.crdtgrp_list);
+out_unlock:
rdtgroup_kn_unlock(parent_kn);
return ret;
}
@@ -3336,10 +3344,17 @@ static int rdtgroup_mkdir_ctrl_mon(struct kernfs_node *parent_kn,
ret = 0;
rdtgrp->closid = closid;
- ret = rdtgroup_init_alloc(rdtgrp);
- if (ret < 0)
+
+ ret = mkdir_rdt_prepare_rmid_alloc(rdtgrp);
+ if (ret)
goto out_id_free;
+ kernfs_activate(rdtgrp->kn);
+
+ ret = rdtgroup_init_alloc(rdtgrp);
+ if (ret < 0)
+ goto out_rmid_free;
+
list_add(&rdtgrp->rdtgroup_list, &rdt_all_groups);
if (rdt_mon_capable) {
@@ -3358,6 +3373,8 @@ static int rdtgroup_mkdir_ctrl_mon(struct kernfs_node *parent_kn,
out_del_list:
list_del(&rdtgrp->rdtgroup_list);
+out_rmid_free:
+ mkdir_rdt_prepare_rmid_free(rdtgrp);
out_id_free:
closid_free(closid);
out_common_fail:
--
2.39.2
^ permalink raw reply related [flat|nested] 77+ messages in thread
* Re: [PATCH v5 04/24] x86/resctrl: Move rmid allocation out of mkdir_rdt_prepare()
2023-07-28 16:42 ` [PATCH v5 04/24] x86/resctrl: Move rmid allocation out of mkdir_rdt_prepare() James Morse
@ 2023-08-15 0:50 ` Fenghua Yu
2023-08-24 16:52 ` James Morse
0 siblings, 1 reply; 77+ messages in thread
From: Fenghua Yu @ 2023-08-15 0:50 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Reinette Chatre, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi, James,
On 7/28/23 09:42, James Morse wrote:
> RMID are allocated for each monitor or control group directory, because
> each of these needs its own RMID. For control groups,
> rdtgroup_mkdir_ctrl_mon() later goes on to allocate the CLOSID.
>
> MPAM's equivalent of RMID is not an independent number, so can't be
> allocated until the CLOSID is known. An RMID allocation for one CLOSID
> may fail, whereas another may succeed depending on how many monitor
> groups a control group has.
>
> The RMID allocation needs to move to be after the CLOSID has been
> allocated.
>
> Move the RMID allocation out of mkdir_rdt_prepare() to occur in its caller,
> after the mkdir_rdt_prepare() call. This allows the RMID allocator to
> know the CLOSID.
>
> Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
> Signed-off-by: James Morse <james.morse@arm.com>
> ---
> Changes since v2:
> * Moved kernfs_activate() later to preserve atomicity of files being visible
> ---
> arch/x86/kernel/cpu/resctrl/rdtgroup.c | 35 +++++++++++++++++++-------
> 1 file changed, 26 insertions(+), 9 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> index e7178bbbd30f..7c5cfb373d03 100644
> --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> @@ -3189,6 +3189,12 @@ static int mkdir_rdt_prepare_rmid_alloc(struct rdtgroup *rdtgrp)
> return 0;
> }
>
> +static void mkdir_rdt_prepare_rmid_free(struct rdtgroup *rgrp)
> +{
> + if (rdt_mon_capable)
> + free_rmid(rgrp->closid, rgrp->mon.rmid);
> +}
> +
> static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
> const char *name, umode_t mode,
> enum rdt_group_type rtype, struct rdtgroup **r)
> @@ -3254,12 +3260,6 @@ static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
> goto out_destroy;
> }
>
> - ret = mkdir_rdt_prepare_rmid_alloc(rdtgrp);
> - if (ret)
> - goto out_destroy;
> -
> - kernfs_activate(kn);
> -
> /*
> * The caller unlocks the parent_kn upon success.
> */
> @@ -3278,7 +3278,6 @@ static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
> static void mkdir_rdt_prepare_clean(struct rdtgroup *rgrp)
> {
> kernfs_remove(rgrp->kn);
> - free_rmid(rgrp->closid, rgrp->mon.rmid);
> rdtgroup_remove(rgrp);
> }
>
> @@ -3300,12 +3299,21 @@ static int rdtgroup_mkdir_mon(struct kernfs_node *parent_kn,
> prgrp = rdtgrp->mon.parent;
> rdtgrp->closid = prgrp->closid;
>
> + ret = mkdir_rdt_prepare_rmid_alloc(rdtgrp);
> + if (ret) {
> + mkdir_rdt_prepare_clean(rdtgrp);
> + goto out_unlock;
> + }
> +
> + kernfs_activate(rdtgrp->kn);
> +
> /*
> * Add the rdtgrp to the list of rdtgrps the parent
> * ctrl_mon group has to track.
> */
> list_add_tail(&rdtgrp->mon.crdtgrp_list, &prgrp->mon.crdtgrp_list);
>
> +out_unlock:
> rdtgroup_kn_unlock(parent_kn);
> return ret;
> }
> @@ -3336,10 +3344,17 @@ static int rdtgroup_mkdir_ctrl_mon(struct kernfs_node *parent_kn,
> ret = 0;
>
> rdtgrp->closid = closid;
> - ret = rdtgroup_init_alloc(rdtgrp);
> - if (ret < 0)
> +
> + ret = mkdir_rdt_prepare_rmid_alloc(rdtgrp);
> + if (ret)
> goto out_id_free;
Is it better to change "out_id_free" to "out_closid_free"?
The name "out_id_free" wasn't confusing before because only the closid was
freed. But this patch introduces a new "rmid" free, so it's better to rename
the label to "out_closid_free", which also matches the following
"out_rmid_free".
>
> + kernfs_activate(rdtgrp->kn);
> +
> + ret = rdtgroup_init_alloc(rdtgrp);
> + if (ret < 0)
> + goto out_rmid_free;
> +
> list_add(&rdtgrp->rdtgroup_list, &rdt_all_groups);
>
> if (rdt_mon_capable) {
> @@ -3358,6 +3373,8 @@ static int rdtgroup_mkdir_ctrl_mon(struct kernfs_node *parent_kn,
>
> out_del_list:
> list_del(&rdtgrp->rdtgroup_list);
> +out_rmid_free:
> + mkdir_rdt_prepare_rmid_free(rdtgrp);
> out_id_free:
s/out_id_free/out_closid_free/?
> closid_free(closid);
> out_common_fail:
Thanks.
-Fenghua
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: [PATCH v5 04/24] x86/resctrl: Move rmid allocation out of mkdir_rdt_prepare()
2023-08-15 0:50 ` Fenghua Yu
@ 2023-08-24 16:52 ` James Morse
0 siblings, 0 replies; 77+ messages in thread
From: James Morse @ 2023-08-24 16:52 UTC (permalink / raw)
To: Fenghua Yu, x86, linux-kernel
Cc: Reinette Chatre, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi Fenghua,
On 15/08/2023 01:50, Fenghua Yu wrote:
> On 7/28/23 09:42, James Morse wrote:
>> RMID are allocated for each monitor or control group directory, because
>> each of these needs its own RMID. For control groups,
>> rdtgroup_mkdir_ctrl_mon() later goes on to allocate the CLOSID.
>>
>> MPAM's equivalent of RMID is not an independent number, so can't be
>> allocated until the CLOSID is known. An RMID allocation for one CLOSID
>> may fail, whereas another may succeed depending on how many monitor
>> groups a control group has.
>>
>> The RMID allocation needs to move to be after the CLOSID has been
>> allocated.
>>
>> Move the RMID allocation out of mkdir_rdt_prepare() to occur in its caller,
>> after the mkdir_rdt_prepare() call. This allows the RMID allocator to
>> know the CLOSID.
>> diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> index e7178bbbd30f..7c5cfb373d03 100644
>> --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> @@ -3336,10 +3344,17 @@ static int rdtgroup_mkdir_ctrl_mon(struct kernfs_node *parent_kn,
>> ret = 0;
>> rdtgrp->closid = closid;
>> - ret = rdtgroup_init_alloc(rdtgrp);
>> - if (ret < 0)
>> +
>> + ret = mkdir_rdt_prepare_rmid_alloc(rdtgrp);
>> + if (ret)
>> goto out_id_free;
>
> Is it better to change "out_id_free" to "out_closid_free"?
> The name "out_id_free" wasn't confusing before because only the closid was freed.
> But this patch introduces a new "rmid" free, so it's better to rename the label to
> "out_closid_free", which also matches the following "out_rmid_free".
Yup, makes sense. I only left the existing code alone to avoid too much churn. This way is
much more readable.
Thanks,
James
^ permalink raw reply [flat|nested] 77+ messages in thread
* [PATCH v5 05/24] x86/resctrl: Allow RMID allocation to be scoped by CLOSID
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
` (3 preceding siblings ...)
2023-07-28 16:42 ` [PATCH v5 04/24] x86/resctrl: Move rmid allocation out of mkdir_rdt_prepare() James Morse
@ 2023-07-28 16:42 ` James Morse
2023-08-09 22:33 ` Reinette Chatre
2023-08-15 1:22 ` Fenghua Yu
2023-07-28 16:42 ` [PATCH v5 06/24] x86/resctrl: Track the number of dirty RMID a CLOSID has James Morse
` (20 subsequent siblings)
25 siblings, 2 replies; 77+ messages in thread
From: James Morse @ 2023-07-28 16:42 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
MPAM's RMID values are not unique unless the CLOSID is considered as well.
alloc_rmid() expects the RMID to be an independent number.
Pass the CLOSID in to alloc_rmid(). Use this to compare indexes when
allocating. If the CLOSID is not relevant to the index, this ends up
comparing the free RMID with itself, and the first free entry will be
used. With MPAM the CLOSID is included in the index, so this becomes a
walk of the free RMID entries, until one that matches the supplied
CLOSID is found.
Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
Signed-off-by: James Morse <james.morse@arm.com>
---
Changes since v2:
* Rephrased comment in resctrl_find_free_rmid() to describe this in terms of
list_entry_first()
* Rephrased comment above alloc_rmid()
Changes since v3:
* Flipped conditions in alloc_rmid()
Changes since v4:
* Typo in comment
---
arch/x86/kernel/cpu/resctrl/internal.h | 2 +-
arch/x86/kernel/cpu/resctrl/monitor.c | 51 +++++++++++++++++------
arch/x86/kernel/cpu/resctrl/pseudo_lock.c | 2 +-
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 2 +-
4 files changed, 41 insertions(+), 16 deletions(-)
diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index b48715bb8762..94749ee950dd 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -535,7 +535,7 @@ void rdtgroup_pseudo_lock_remove(struct rdtgroup *rdtgrp);
struct rdt_domain *get_domain_from_cpu(int cpu, struct rdt_resource *r);
int closids_supported(void);
void closid_free(int closid);
-int alloc_rmid(void);
+int alloc_rmid(u32 closid);
void free_rmid(u32 closid, u32 rmid);
int rdt_get_mon_l3_config(struct rdt_resource *r);
bool __init rdt_cpu_has(int flag);
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index bd234b66dddf..de91ca781d9f 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -337,24 +337,49 @@ bool has_busy_rmid(struct rdt_domain *d)
return find_first_bit(d->rmid_busy_llc, idx_limit) != idx_limit;
}
-/*
- * As of now the RMIDs allocation is global.
- * However we keep track of which packages the RMIDs
- * are used to optimize the limbo list management.
- */
-int alloc_rmid(void)
+static struct rmid_entry *resctrl_find_free_rmid(u32 closid)
{
- struct rmid_entry *entry;
-
- lockdep_assert_held(&rdtgroup_mutex);
+ struct rmid_entry *itr;
+ u32 itr_idx, cmp_idx;
if (list_empty(&rmid_free_lru))
- return rmid_limbo_count ? -EBUSY : -ENOSPC;
+ return rmid_limbo_count ? ERR_PTR(-EBUSY) : ERR_PTR(-ENOSPC);
+
+ list_for_each_entry(itr, &rmid_free_lru, list) {
+ /*
+ * Get the index of this free RMID, and the index it would need
+ * to be if it were used with this CLOSID.
+ * If the CLOSID is irrelevant on this architecture, these will
+ * always be the same meaning the compiler can reduce this loop
+ * to a single list_entry_first() call.
+ */
+ itr_idx = resctrl_arch_rmid_idx_encode(itr->closid, itr->rmid);
+ cmp_idx = resctrl_arch_rmid_idx_encode(closid, itr->rmid);
+
+ if (itr_idx == cmp_idx)
+ return itr;
+ }
+
+ return ERR_PTR(-ENOSPC);
+}
+
+/*
+ * For MPAM the RMID value is not unique, and has to be considered with
+ * the CLOSID. The (CLOSID, RMID) pair is allocated on all domains, which
+ * allows all domains to be managed by a single limbo list.
+ * Each domain also has a rmid_busy_llc to reduce the work of the limbo handler.
+ */
+int alloc_rmid(u32 closid)
+{
+ struct rmid_entry *entry;
+
+ lockdep_assert_held(&rdtgroup_mutex);
+
+ entry = resctrl_find_free_rmid(closid);
+ if (IS_ERR(entry))
+ return PTR_ERR(entry);
- entry = list_first_entry(&rmid_free_lru,
- struct rmid_entry, list);
list_del(&entry->list);
-
return entry->rmid;
}
diff --git a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
index aeadaeb5df9a..5ebd6e54c7f2 100644
--- a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
+++ b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
@@ -763,7 +763,7 @@ int rdtgroup_locksetup_exit(struct rdtgroup *rdtgrp)
int ret;
if (rdt_mon_capable) {
- ret = alloc_rmid();
+ ret = alloc_rmid(rdtgrp->closid);
if (ret < 0) {
rdt_last_cmd_puts("Out of RMIDs\n");
return ret;
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 7c5cfb373d03..b97e119dbe46 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -3172,7 +3172,7 @@ static int mkdir_rdt_prepare_rmid_alloc(struct rdtgroup *rdtgrp)
if (!rdt_mon_capable)
return 0;
- ret = alloc_rmid();
+ ret = alloc_rmid(rdtgrp->closid);
if (ret < 0) {
rdt_last_cmd_puts("Out of RMIDs\n");
return ret;
--
2.39.2
^ permalink raw reply related [flat|nested] 77+ messages in thread
* Re: [PATCH v5 05/24] x86/resctrl: Allow RMID allocation to be scoped by CLOSID
2023-07-28 16:42 ` [PATCH v5 05/24] x86/resctrl: Allow RMID allocation to be scoped by CLOSID James Morse
@ 2023-08-09 22:33 ` Reinette Chatre
2023-08-15 1:22 ` Fenghua Yu
1 sibling, 0 replies; 77+ messages in thread
From: Reinette Chatre @ 2023-08-09 22:33 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi James,
On 7/28/2023 9:42 AM, James Morse wrote:
> -int alloc_rmid(void)
> +static struct rmid_entry *resctrl_find_free_rmid(u32 closid)
> {
> - struct rmid_entry *entry;
> -
> - lockdep_assert_held(&rdtgroup_mutex);
> + struct rmid_entry *itr;
> + u32 itr_idx, cmp_idx;
>
> if (list_empty(&rmid_free_lru))
> - return rmid_limbo_count ? -EBUSY : -ENOSPC;
> + return rmid_limbo_count ? ERR_PTR(-EBUSY) : ERR_PTR(-ENOSPC);
> +
> + list_for_each_entry(itr, &rmid_free_lru, list) {
> + /*
> + * Get the index of this free RMID, and the index it would need
> + * to be if it were used with this CLOSID.
> + * If the CLOSID is irrelevant on this architecture, these will
> + * always be the same meaning the compiler can reduce this loop
> + * to a single list_entry_first() call.
> + */
> + itr_idx = resctrl_arch_rmid_idx_encode(itr->closid, itr->rmid);
> + cmp_idx = resctrl_arch_rmid_idx_encode(closid, itr->rmid);
> +
> + if (itr_idx == cmp_idx)
> + return itr;
> + }
> +
> + return ERR_PTR(-ENOSPC);
> +}
> +
> +/*
> + * For MPAM the RMID value is not unique, and has to be considered with
> + * the CLOSID. The (CLOSID, RMID) pair is allocated on all domains, which
> + * allows all domains to be managed by a single limbo list.
> + * Each domain also has a rmid_busy_llc to reduce the work of the limbo handler.
> + */
I find the above comment contradictory - it talks about a single limbo list
yet there is "also" a limbo list/bitmask per domain. Should "single limbo list"
perhaps be "single free list"?
Reinette
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: [PATCH v5 05/24] x86/resctrl: Allow RMID allocation to be scoped by CLOSID
2023-07-28 16:42 ` [PATCH v5 05/24] x86/resctrl: Allow RMID allocation to be scoped by CLOSID James Morse
2023-08-09 22:33 ` Reinette Chatre
@ 2023-08-15 1:22 ` Fenghua Yu
1 sibling, 0 replies; 77+ messages in thread
From: Fenghua Yu @ 2023-08-15 1:22 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Reinette Chatre, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi, James,
On 7/28/23 09:42, James Morse wrote:
> MPAM's RMID values are not unique unless the CLOSID is considered as well.
>
> alloc_rmid() expects the RMID to be an independent number.
>
> Pass the CLOSID in to alloc_rmid(). Use this to compare indexes when
> allocating. If the CLOSID is not relevant to the index, this ends up
> comparing the free RMID with itself, and the first free entry will be
> used. With MPAM the CLOSID is included in the index, so this becomes a
> walk of the free RMID entries, until one that matches the supplied
> CLOSID is found.
>
> Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
> Signed-off-by: James Morse <james.morse@arm.com>
> ---
> Changes since v2:
> * Rephrased comment in resctrl_find_free_rmid() to describe this in terms of
> list_entry_first()
> * Rephrased comment above alloc_rmid()
>
> Changes since v3:
> * Flipped conditions in alloc_rmid()
>
> Changes since v4:
> * Typo in comment
> ---
> arch/x86/kernel/cpu/resctrl/internal.h | 2 +-
> arch/x86/kernel/cpu/resctrl/monitor.c | 51 +++++++++++++++++------
> arch/x86/kernel/cpu/resctrl/pseudo_lock.c | 2 +-
> arch/x86/kernel/cpu/resctrl/rdtgroup.c | 2 +-
> 4 files changed, 41 insertions(+), 16 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
> index b48715bb8762..94749ee950dd 100644
> --- a/arch/x86/kernel/cpu/resctrl/internal.h
> +++ b/arch/x86/kernel/cpu/resctrl/internal.h
> @@ -535,7 +535,7 @@ void rdtgroup_pseudo_lock_remove(struct rdtgroup *rdtgrp);
> struct rdt_domain *get_domain_from_cpu(int cpu, struct rdt_resource *r);
> int closids_supported(void);
> void closid_free(int closid);
> -int alloc_rmid(void);
> +int alloc_rmid(u32 closid);
> void free_rmid(u32 closid, u32 rmid);
> int rdt_get_mon_l3_config(struct rdt_resource *r);
> bool __init rdt_cpu_has(int flag);
> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
> index bd234b66dddf..de91ca781d9f 100644
> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
> @@ -337,24 +337,49 @@ bool has_busy_rmid(struct rdt_domain *d)
> return find_first_bit(d->rmid_busy_llc, idx_limit) != idx_limit;
> }
>
> -/*
> - * As of now the RMIDs allocation is global.
> - * However we keep track of which packages the RMIDs
> - * are used to optimize the limbo list management.
> - */
> -int alloc_rmid(void)
> +static struct rmid_entry *resctrl_find_free_rmid(u32 closid)
> {
> - struct rmid_entry *entry;
> -
> - lockdep_assert_held(&rdtgroup_mutex);
> + struct rmid_entry *itr;
> + u32 itr_idx, cmp_idx;
>
> if (list_empty(&rmid_free_lru))
> - return rmid_limbo_count ? -EBUSY : -ENOSPC;
> + return rmid_limbo_count ? ERR_PTR(-EBUSY) : ERR_PTR(-ENOSPC);
> +
> + list_for_each_entry(itr, &rmid_free_lru, list) {
> + /*
> + * Get the index of this free RMID, and the index it would need
> + * to be if it were used with this CLOSID.
> + * If the CLOSID is irrelevant on this architecture, these will
> + * always be the same meaning the compiler can reduce this loop
> + * to a single list_entry_first() call.
s/list_entry_first()/list_first_entry()/?
The comment seems inaccurate because the loop is not really reduced
to a single list_first_entry() call: computing itr_idx and cmp_idx and
comparing them are extra operations that list_first_entry() doesn't have.
Maybe change the second half of the comment to something like:
If the CLOSID is irrelevant on this architecture, the two index values
are always the same on every entry and thus the very first entry will be
returned.
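Conversely, an MPAM-style encode could fold the closid into the index,
e.g. (sketch only, the real bit layout may differ):
| static inline u32 resctrl_arch_rmid_idx_encode(u32 closid, u32 rmid)
| {
| 	return (closid << 8) | rmid; /* assuming 8 PMG bits */
| }
Then itr_idx != cmp_idx whenever itr->closid != closid, and the walk
really does skip free RMIDs that belong to other CLOSIDs.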
> + */
> + itr_idx = resctrl_arch_rmid_idx_encode(itr->closid, itr->rmid);
> + cmp_idx = resctrl_arch_rmid_idx_encode(closid, itr->rmid);
> +
> + if (itr_idx == cmp_idx)
> + return itr;
> + }
> +
> + return ERR_PTR(-ENOSPC);
> +}
> +
> +/*
> + * For MPAM the RMID value is not unique, and has to be considered with
> + * the CLOSID. The (CLOSID, RMID) pair is allocated on all domains, which
> + * allows all domains to be managed by a single limbo list.
> + * Each domain also has a rmid_busy_llc to reduce the work of the limbo handler.
> + */
> +int alloc_rmid(u32 closid)
> +{
> + struct rmid_entry *entry;
> +
> + lockdep_assert_held(&rdtgroup_mutex);
> +
> + entry = resctrl_find_free_rmid(closid);
> + if (IS_ERR(entry))
> + return PTR_ERR(entry);
>
> - entry = list_first_entry(&rmid_free_lru,
> - struct rmid_entry, list);
> list_del(&entry->list);
> -
> return entry->rmid;
> }
>
> diff --git a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
> index aeadaeb5df9a..5ebd6e54c7f2 100644
> --- a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
> +++ b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
> @@ -763,7 +763,7 @@ int rdtgroup_locksetup_exit(struct rdtgroup *rdtgrp)
> int ret;
>
> if (rdt_mon_capable) {
> - ret = alloc_rmid();
> + ret = alloc_rmid(rdtgrp->closid);
> if (ret < 0) {
> rdt_last_cmd_puts("Out of RMIDs\n");
> return ret;
> diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> index 7c5cfb373d03..b97e119dbe46 100644
> --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> @@ -3172,7 +3172,7 @@ static int mkdir_rdt_prepare_rmid_alloc(struct rdtgroup *rdtgrp)
> if (!rdt_mon_capable)
> return 0;
>
> - ret = alloc_rmid();
> + ret = alloc_rmid(rdtgrp->closid);
> if (ret < 0) {
> rdt_last_cmd_puts("Out of RMIDs\n");
> return ret;
Thanks.
-Fenghua
^ permalink raw reply [flat|nested] 77+ messages in thread
* [PATCH v5 06/24] x86/resctrl: Track the number of dirty RMID a CLOSID has
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
` (4 preceding siblings ...)
2023-07-28 16:42 ` [PATCH v5 05/24] x86/resctrl: Allow RMID allocation to be scoped by CLOSID James Morse
@ 2023-07-28 16:42 ` James Morse
2023-08-09 22:33 ` Reinette Chatre
` (2 more replies)
2023-07-28 16:42 ` [PATCH v5 07/24] x86/resctrl: Use set_bit()/clear_bit() instead of open coding James Morse
` (19 subsequent siblings)
25 siblings, 3 replies; 77+ messages in thread
From: James Morse @ 2023-07-28 16:42 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
MPAM's PMG bits extend its PARTID space, meaning the same PMG value can be
used for different control groups.
This means once a CLOSID is allocated, all its monitoring ids may still be
dirty, and held in limbo.
Keep track of the number of RMID each CLOSID has in limbo. This will
allow a future helper to find the 'cleanest' CLOSID when allocating.
The array is only needed when CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID is
defined. This will never be the case on x86.
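For illustration only (not part of this patch), a helper built on this
array might look roughly like:
	static u32 closid_with_fewest_dirty_rmid(u32 num_closid)
	{
		u32 cleanest = 0, closid;
		for (closid = 1; closid < num_closid; closid++) {
			if (closid_num_dirty_rmid[closid] <
			    closid_num_dirty_rmid[cleanest])
				cleanest = closid;
		}
		return cleanest;
	}
ignoring, for brevity, that CLOSIDs currently in use would need to be
skipped.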
Signed-off-by: James Morse <james.morse@arm.com>
---
Changes since v4:
* Moved closid_num_dirty_rmid[] update under entry->busy check
* Take the mutex in dom_data_init() as the caller doesn't.
---
arch/x86/kernel/cpu/resctrl/monitor.c | 49 +++++++++++++++++++++++----
1 file changed, 42 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index de91ca781d9f..44addc0126fc 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -43,6 +43,13 @@ struct rmid_entry {
*/
static LIST_HEAD(rmid_free_lru);
+/**
+ * @closid_num_dirty_rmid The number of dirty RMID each CLOSID has.
+ * Only allocated when CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID is defined.
+ * Indexed by CLOSID. Protected by rdtgroup_mutex.
+ */
+static int *closid_num_dirty_rmid;
+
/**
* @rmid_limbo_count count of currently unused but (potentially)
* dirty RMIDs.
@@ -285,6 +292,17 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
return 0;
}
+static void limbo_release_entry(struct rmid_entry *entry)
+{
+ lockdep_assert_held(&rdtgroup_mutex);
+
+ rmid_limbo_count--;
+ list_add_tail(&entry->list, &rmid_free_lru);
+
+ if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
+ closid_num_dirty_rmid[entry->closid]--;
+}
+
/*
* Check the RMIDs that are marked as busy for this domain. If the
* reported LLC occupancy is below the threshold clear the busy bit and
@@ -321,10 +339,8 @@ void __check_limbo(struct rdt_domain *d, bool force_free)
if (force_free || !rmid_dirty) {
clear_bit(idx, d->rmid_busy_llc);
- if (!--entry->busy) {
- rmid_limbo_count--;
- list_add_tail(&entry->list, &rmid_free_lru);
- }
+ if (!--entry->busy)
+ limbo_release_entry(entry);
}
cur_idx = idx + 1;
}
@@ -391,6 +407,8 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
u64 val = 0;
u32 idx;
+ lockdep_assert_held(&rdtgroup_mutex);
+
idx = resctrl_arch_rmid_idx_encode(entry->closid, entry->rmid);
entry->busy = 0;
@@ -416,9 +434,11 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
}
put_cpu();
- if (entry->busy)
+ if (entry->busy) {
rmid_limbo_count++;
- else
+ if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
+ closid_num_dirty_rmid[entry->closid]++;
+ } else
list_add_tail(&entry->list, &rmid_free_lru);
}
@@ -782,13 +802,28 @@ void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms)
static int dom_data_init(struct rdt_resource *r)
{
u32 idx_limit = resctrl_arch_system_num_rmid_idx();
+ u32 num_closid = resctrl_arch_get_num_closid(r);
struct rmid_entry *entry = NULL;
u32 idx;
int i;
+ if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) {
+ int *tmp;
+
+ tmp = kcalloc(num_closid, sizeof(int), GFP_KERNEL);
+ if (!tmp)
+ return -ENOMEM;
+
+ mutex_lock(&rdtgroup_mutex);
+ closid_num_dirty_rmid = tmp;
+ mutex_unlock(&rdtgroup_mutex);
+ }
+
rmid_ptrs = kcalloc(idx_limit, sizeof(struct rmid_entry), GFP_KERNEL);
- if (!rmid_ptrs)
+ if (!rmid_ptrs) {
+ kfree(closid_num_dirty_rmid);
return -ENOMEM;
+ }
for (i = 0; i < idx_limit; i++) {
entry = &rmid_ptrs[i];
--
2.39.2
^ permalink raw reply related [flat|nested] 77+ messages in thread
* Re: [PATCH v5 06/24] x86/resctrl: Track the number of dirty RMID a CLOSID has
2023-07-28 16:42 ` [PATCH v5 06/24] x86/resctrl: Track the number of dirty RMID a CLOSID has James Morse
@ 2023-08-09 22:33 ` Reinette Chatre
2023-08-24 16:53 ` James Morse
2023-08-14 23:58 ` Fenghua Yu
2023-08-15 2:37 ` Fenghua Yu
2 siblings, 1 reply; 77+ messages in thread
From: Reinette Chatre @ 2023-08-09 22:33 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi James,
On 7/28/2023 9:42 AM, James Morse wrote:
> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
> index de91ca781d9f..44addc0126fc 100644
> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
> @@ -43,6 +43,13 @@ struct rmid_entry {
> */
> static LIST_HEAD(rmid_free_lru);
>
> +/**
> + * @closid_num_dirty_rmid The number of dirty RMID each CLOSID has.
> + * Only allocated when CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID is defined.
> + * Indexed by CLOSID. Protected by rdtgroup_mutex.
> + */
> +static int *closid_num_dirty_rmid;
> +
Will the values ever be negative?
> /**
> * @rmid_limbo_count count of currently unused but (potentially)
> * dirty RMIDs.
> @@ -285,6 +292,17 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
> return 0;
> }
>
> +static void limbo_release_entry(struct rmid_entry *entry)
> +{
> + lockdep_assert_held(&rdtgroup_mutex);
> +
> + rmid_limbo_count--;
> + list_add_tail(&entry->list, &rmid_free_lru);
> +
> + if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
> + closid_num_dirty_rmid[entry->closid]--;
> +}
> +
> /*
> * Check the RMIDs that are marked as busy for this domain. If the
> * reported LLC occupancy is below the threshold clear the busy bit and
> @@ -321,10 +339,8 @@ void __check_limbo(struct rdt_domain *d, bool force_free)
>
> if (force_free || !rmid_dirty) {
> clear_bit(idx, d->rmid_busy_llc);
> - if (!--entry->busy) {
> - rmid_limbo_count--;
> - list_add_tail(&entry->list, &rmid_free_lru);
> - }
> + if (!--entry->busy)
> + limbo_release_entry(entry);
> }
> cur_idx = idx + 1;
> }
> @@ -391,6 +407,8 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
> u64 val = 0;
> u32 idx;
>
> + lockdep_assert_held(&rdtgroup_mutex);
> +
> idx = resctrl_arch_rmid_idx_encode(entry->closid, entry->rmid);
>
> entry->busy = 0;
> @@ -416,9 +434,11 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
> }
> put_cpu();
>
> - if (entry->busy)
> + if (entry->busy) {
> rmid_limbo_count++;
> - else
> + if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
> + closid_num_dirty_rmid[entry->closid]++;
> + } else
> list_add_tail(&entry->list, &rmid_free_lru);
> }
This new addition breaks the coding style with the last statement
now also needing a brace.
>
> @@ -782,13 +802,28 @@ void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms)
> static int dom_data_init(struct rdt_resource *r)
> {
> u32 idx_limit = resctrl_arch_system_num_rmid_idx();
> + u32 num_closid = resctrl_arch_get_num_closid(r);
> struct rmid_entry *entry = NULL;
> u32 idx;
> int i;
>
> + if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) {
> + int *tmp;
> +
> + tmp = kcalloc(num_closid, sizeof(int), GFP_KERNEL);
> + if (!tmp)
> + return -ENOMEM;
> +
> + mutex_lock(&rdtgroup_mutex);
> + closid_num_dirty_rmid = tmp;
> + mutex_unlock(&rdtgroup_mutex);
> + }
> +
It does no harm but I cannot see why the mutex is needed here.
> rmid_ptrs = kcalloc(idx_limit, sizeof(struct rmid_entry), GFP_KERNEL);
> - if (!rmid_ptrs)
> + if (!rmid_ptrs) {
> + kfree(closid_num_dirty_rmid);
> return -ENOMEM;
> + }
>
> for (i = 0; i < idx_limit; i++) {
> entry = &rmid_ptrs[i];
How will this new memory be freed? Actually I cannot find where
rmid_ptrs is freed either .... is a "dom_data_free()" needed?
Reinette
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: [PATCH v5 06/24] x86/resctrl: Track the number of dirty RMID a CLOSID has
2023-08-09 22:33 ` Reinette Chatre
@ 2023-08-24 16:53 ` James Morse
2023-08-24 22:58 ` Reinette Chatre
2023-08-30 22:32 ` Tony Luck
0 siblings, 2 replies; 77+ messages in thread
From: James Morse @ 2023-08-24 16:53 UTC (permalink / raw)
To: Reinette Chatre, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi Reinette,
On 09/08/2023 23:33, Reinette Chatre wrote:
> On 7/28/2023 9:42 AM, James Morse wrote:
>> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
>> index de91ca781d9f..44addc0126fc 100644
>> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
>> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
>> @@ -43,6 +43,13 @@ struct rmid_entry {
>> */
>> static LIST_HEAD(rmid_free_lru);
>>
>> +/**
>> + * @closid_num_dirty_rmid The number of dirty RMID each CLOSID has.
>> + * Only allocated when CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID is defined.
>> + * Indexed by CLOSID. Protected by rdtgroup_mutex.
>> + */
>> +static int *closid_num_dirty_rmid;
>> +
>
> Will the values ever be negative?
Nope, int is just fewer keystrokes. I'll change it to unsigned int.
>> /**
>> * @rmid_limbo_count count of currently unused but (potentially)
>> * dirty RMIDs.
>> @@ -782,13 +802,28 @@ void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms)
>> static int dom_data_init(struct rdt_resource *r)
>> {
>> u32 idx_limit = resctrl_arch_system_num_rmid_idx();
>> + u32 num_closid = resctrl_arch_get_num_closid(r);
>> struct rmid_entry *entry = NULL;
>> u32 idx;
>> int i;
>>
>> + if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) {
>> + int *tmp;
>> +
>> + tmp = kcalloc(num_closid, sizeof(int), GFP_KERNEL);
>> + if (!tmp)
>> + return -ENOMEM;
>> +
>> + mutex_lock(&rdtgroup_mutex);
>> + closid_num_dirty_rmid = tmp;
>> + mutex_unlock(&rdtgroup_mutex);
>> + }
>> +
>
> It does no harm but I cannot see why the mutex is needed here.
It's belt-and-braces to ensure that all accesses to that global variable are protected by
that lock. This avoids giving me a memory ordering headache.
rmid_ptrs and the call to __rmid_entry() that dereferences it should probably get the same
treatment.
I'll move the locking to the caller as the least-churny way of covering both.
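Something like (sketch):
| 	mutex_lock(&rdtgroup_mutex);
| 	ret = dom_data_init(r);
| 	mutex_unlock(&rdtgroup_mutex);
in dom_data_init()'s caller, so rmid_ptrs and closid_num_dirty_rmid are
only ever written with the mutex held.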
>> rmid_ptrs = kcalloc(idx_limit, sizeof(struct rmid_entry), GFP_KERNEL);
>> - if (!rmid_ptrs)
>> + if (!rmid_ptrs) {
>> + kfree(closid_num_dirty_rmid);
>> return -ENOMEM;
>> + }
>>
>> for (i = 0; i < idx_limit; i++) {
>> entry = &rmid_ptrs[i];
>
> How will this new memory be freed? Actually I cannot find where
> rmid_ptrs is freed either .... is a "dom_data_free()" needed?
Oh that's not deliberate? :P
rmid_ptrs has been immortal since the beginning. The good news is resctrl_exit() goes in
the exitcall section, which is in the DISCARDS section of the linker script as resctrl
can't be built as a module. It isn't possible to tear resctrl down, so no-one will notice
this leak.
Something on my eternal-todo-list is to make the filesystem parts of resctrl a loadable
module (if Tony doesn't get there first!). That would flush this sort of thing out.
Last time I triggered resctrl_exit() manually not all of the files got cleaned up - I
haven't investigated it further.
I agree it should probably have a kfree() call somewhere under rdtgroup_exit(), as it's
only the L3 that needs any of this. I'll add resctrl_exit_mon_l3_config() for
rdtgroup_exit() to call.
Another option is to rip out all the __exit text as it's discarded anyway. But if loadable
modules are the direction of travel, it probably makes more sense to fix it.
Thanks,
James
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: [PATCH v5 06/24] x86/resctrl: Track the number of dirty RMID a CLOSID has
2023-08-24 16:53 ` James Morse
@ 2023-08-24 22:58 ` Reinette Chatre
2023-08-30 22:32 ` Tony Luck
1 sibling, 0 replies; 77+ messages in thread
From: Reinette Chatre @ 2023-08-24 22:58 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi James,
On 8/24/2023 9:53 AM, James Morse wrote:
> Hi Reinette,
>
> On 09/08/2023 23:33, Reinette Chatre wrote:
>> On 7/28/2023 9:42 AM, James Morse wrote:
>>> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
>>> index de91ca781d9f..44addc0126fc 100644
>>> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
>>> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
>>> @@ -43,6 +43,13 @@ struct rmid_entry {
>>> */
>>> static LIST_HEAD(rmid_free_lru);
>>>
>>> +/**
>>> + * @closid_num_dirty_rmid The number of dirty RMID each CLOSID has.
>>> + * Only allocated when CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID is defined.
>>> + * Indexed by CLOSID. Protected by rdtgroup_mutex.
>>> + */
>>> +static int *closid_num_dirty_rmid;
>>> +
>>
>> Will the values ever be negative?
>
> Nope, int is just fewer keystrokes. I'll change it to unsigned int.
>
>
>>> /**
>>> * @rmid_limbo_count count of currently unused but (potentially)
>>> * dirty RMIDs.
>
>
>>> @@ -782,13 +802,28 @@ void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms)
>>> static int dom_data_init(struct rdt_resource *r)
>>> {
>>> u32 idx_limit = resctrl_arch_system_num_rmid_idx();
>>> + u32 num_closid = resctrl_arch_get_num_closid(r);
>>> struct rmid_entry *entry = NULL;
>>> u32 idx;
>>> int i;
>>>
>>> + if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) {
>>> + int *tmp;
>>> +
>>> + tmp = kcalloc(num_closid, sizeof(int), GFP_KERNEL);
>>> + if (!tmp)
>>> + return -ENOMEM;
>>> +
>>> + mutex_lock(&rdtgroup_mutex);
>>> + closid_num_dirty_rmid = tmp;
>>> + mutex_unlock(&rdtgroup_mutex);
>>> + }
>>> +
>>
>> It does no harm but I cannot see why the mutex is needed here.
>
> It's belt-and-braces to ensure that all accesses to that global variable are protected by
> that lock. This avoids giving me a memory ordering headache.
> rmid_ptrs and the call to __rmid_entry() that dereferences it should probably get the same
> treatment.
This is fair.
> I'll move the locking to the caller as the least-churny way of covering both.
This is not clear to me. From what I can tell all the sites you mention
are in dom_data_init() so keeping the locking there (but covering the
additional sites) seems appropriate?
>
>>> rmid_ptrs = kcalloc(idx_limit, sizeof(struct rmid_entry), GFP_KERNEL);
>>> - if (!rmid_ptrs)
>>> + if (!rmid_ptrs) {
>>> + kfree(closid_num_dirty_rmid);
>>> return -ENOMEM;
>>> + }
>>>
>>> for (i = 0; i < idx_limit; i++) {
>>> entry = &rmid_ptrs[i];
>>
>> How will this new memory be freed? Actually I cannot find where
>> rmid_ptrs is freed either .... is a "dom_data_free()" needed?
>
> Oh that's not deliberate? :P
>
> rmid_ptrs has been immortal since the beginning. The good news is resctrl_exit() goes in
> the exitcall section, which is in the DISCARDS section of the linker script as resctrl
> can't be built as a module. It isn't possible to tear resctrl down, so no-one will notice
> this leak.
>
> Something on my eternal-todo-list is to make the filesystem parts of resctrl a loadable
> module (if Tony doesn't get there first!). That would flush this sort of thing out.
> Last time I triggered resctrl_exit() manually not all of the files got cleaned up - I
> haven't investigated it further.
>
>
> I agree it should probably have a kfree() call somewhere under rdtgroup_exit(), as it's
> only the L3 that needs any of this. I'll add resctrl_exit_mon_l3_config() for
> rdtgroup_exit() to call.
I'd prefer that allocation and free are clearly symmetrical. Doing so helps
to make the code easier to understand. rdtgroup_exit() is intended to clean
up after rdtgroup_init(). Since this allocation does not occur within rdtgroup_init()
I do not think rdtgroup_exit() is the best place for this cleanup. resctrl_exit() looks
more appropriate to me. Having a dom_data_free() to clean up after dom_data_init() also
seems like an addition that would help, though I say that without a clear
understanding of what you have in mind for resctrl_exit_mon_l3_config().
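The symmetry could be as simple as (sketch only):
| static void dom_data_free(void)
| {
| 	kfree(closid_num_dirty_rmid);
| 	closid_num_dirty_rmid = NULL;
| 	kfree(rmid_ptrs);
| 	rmid_ptrs = NULL;
| }
called from resctrl_exit(), relying on kfree(NULL) being a no-op for the
!CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID case.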
>
> Another option is to rip out all the __exit text as it's discarded anyway. But if loadable
> modules are the direction of travel, it probably makes more sense to fix it.
My preference is to do the cleanup properly.
Reinette
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: [PATCH v5 06/24] x86/resctrl: Track the number of dirty RMID a CLOSID has
2023-08-24 16:53 ` James Morse
2023-08-24 22:58 ` Reinette Chatre
@ 2023-08-30 22:32 ` Tony Luck
1 sibling, 0 replies; 77+ messages in thread
From: Tony Luck @ 2023-08-30 22:32 UTC (permalink / raw)
To: James Morse
Cc: Reinette Chatre, x86, linux-kernel, Fenghua Yu, Thomas Gleixner,
Ingo Molnar, Borislav Petkov, H Peter Anvin, Babu Moger,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
On Thu, Aug 24, 2023 at 05:53:03PM +0100, James Morse wrote:
> Something on my eternal-todo-list is to make the filesystem parts of resctrl a loadable
> module (if Tony doesn't get there first!). That would flush this sort of thing out.
> Last time I triggered resctrl_exit() manually not all of the files got cleaned up - I
> haven't investigated it further.
James,
I looked at going to a full loadable module approach for about 3 seconds,
and found none of the kernfs support functions are exported. So I also
put that on the eternal-todo-list :-)
There are possibly a few other functions that need exporting like
get_cpu_cacheinfo(), and two or three others from the "perf"
code for pseudo-lock debugfs support.
-Tony
P.S. Latest version of my re-write is at:
https://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux.git/log/?h=resctrl2_v65rc7
Well, almost latest. I haven't pushed the changes to auto-load all the
modules for basic X86 functions based on X86_FEATURE_* bits.
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: [PATCH v5 06/24] x86/resctrl: Track the number of dirty RMID a CLOSID has
2023-07-28 16:42 ` [PATCH v5 06/24] x86/resctrl: Track the number of dirty RMID a CLOSID has James Morse
2023-08-09 22:33 ` Reinette Chatre
@ 2023-08-14 23:58 ` Fenghua Yu
2023-08-15 2:37 ` Fenghua Yu
2 siblings, 0 replies; 77+ messages in thread
From: Fenghua Yu @ 2023-08-14 23:58 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Reinette Chatre, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi, James,
On 7/28/23 09:42, James Morse wrote:
> MPAM's PMG bits extend its PARTID space, meaning the same PMG value can be
> used for different control groups.
>
> This means once a CLOSID is allocated, all its monitoring ids may still be
> dirty, and held in limbo.
>
> Keep track of the number of RMID each CLOSID has in limbo. This will
> allow a future helper to find the 'cleanest' CLOSID when allocating.
>
> The array is only needed when CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID is
> defined. This will never be the case on x86.
>
> Signed-off-by: James Morse <james.morse@arm.com>
> ---
> Changes since v4:
> * Moved closid_num_dirty_rmid[] update under entry->busy check
> * Take the mutex in dom_data_init() as the caller doesn't.
> ---
> arch/x86/kernel/cpu/resctrl/monitor.c | 49 +++++++++++++++++++++++----
> 1 file changed, 42 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
> index de91ca781d9f..44addc0126fc 100644
> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
> @@ -43,6 +43,13 @@ struct rmid_entry {
> */
> static LIST_HEAD(rmid_free_lru);
>
> +/**
> + * @closid_num_dirty_rmid The number of dirty RMID each CLOSID has.
> + * Only allocated when CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID is defined.
> + * Indexed by CLOSID. Protected by rdtgroup_mutex.
> + */
> +static int *closid_num_dirty_rmid;
> +
> /**
> * @rmid_limbo_count count of currently unused but (potentially)
> * dirty RMIDs.
> @@ -285,6 +292,17 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
> return 0;
> }
>
> +static void limbo_release_entry(struct rmid_entry *entry)
> +{
> + lockdep_assert_held(&rdtgroup_mutex);
> +
> + rmid_limbo_count--;
> + list_add_tail(&entry->list, &rmid_free_lru);
> +
> + if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
> + closid_num_dirty_rmid[entry->closid]--;
> +}
> +
> /*
> * Check the RMIDs that are marked as busy for this domain. If the
> * reported LLC occupancy is below the threshold clear the busy bit and
> @@ -321,10 +339,8 @@ void __check_limbo(struct rdt_domain *d, bool force_free)
>
> if (force_free || !rmid_dirty) {
> clear_bit(idx, d->rmid_busy_llc);
> - if (!--entry->busy) {
> - rmid_limbo_count--;
> - list_add_tail(&entry->list, &rmid_free_lru);
> - }
> + if (!--entry->busy)
> + limbo_release_entry(entry);
> }
> cur_idx = idx + 1;
> }
> @@ -391,6 +407,8 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
> u64 val = 0;
> u32 idx;
>
> + lockdep_assert_held(&rdtgroup_mutex);
> +
> idx = resctrl_arch_rmid_idx_encode(entry->closid, entry->rmid);
>
> entry->busy = 0;
> @@ -416,9 +434,11 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
> }
> put_cpu();
>
> - if (entry->busy)
> + if (entry->busy) {
> rmid_limbo_count++;
> - else
> + if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
> + closid_num_dirty_rmid[entry->closid]++;
> + } else
> list_add_tail(&entry->list, &rmid_free_lru);
Unbalanced braces in the if-else. Braces need to be added to the "else" arm too.
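i.e. per kernel coding style, the "else" arm should grow braces to match:

	} else {
		list_add_tail(&entry->list, &rmid_free_lru);
	}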
> }
>
> @@ -782,13 +802,28 @@ void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms)
> static int dom_data_init(struct rdt_resource *r)
> {
> u32 idx_limit = resctrl_arch_system_num_rmid_idx();
> + u32 num_closid = resctrl_arch_get_num_closid(r);
> struct rmid_entry *entry = NULL;
> u32 idx;
> int i;
>
> + if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) {
> + int *tmp;
> +
> + tmp = kcalloc(num_closid, sizeof(int), GFP_KERNEL);
> + if (!tmp)
> + return -ENOMEM;
> +
> + mutex_lock(&rdtgroup_mutex);
> + closid_num_dirty_rmid = tmp;
> + mutex_unlock(&rdtgroup_mutex);
> + }
> +
> rmid_ptrs = kcalloc(idx_limit, sizeof(struct rmid_entry), GFP_KERNEL);
> - if (!rmid_ptrs)
> + if (!rmid_ptrs) {
> + kfree(closid_num_dirty_rmid);
> return -ENOMEM;
> + }
>
> for (i = 0; i < idx_limit; i++) {
> entry = &rmid_ptrs[i];
Thanks.
-Fenghua
* Re: [PATCH v5 06/24] x86/resctrl: Track the number of dirty RMID a CLOSID has
2023-07-28 16:42 ` [PATCH v5 06/24] x86/resctrl: Track the number of dirty RMID a CLOSID has James Morse
2023-08-09 22:33 ` Reinette Chatre
2023-08-14 23:58 ` Fenghua Yu
@ 2023-08-15 2:37 ` Fenghua Yu
2023-08-24 16:53 ` James Morse
2 siblings, 1 reply; 77+ messages in thread
From: Fenghua Yu @ 2023-08-15 2:37 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Reinette Chatre, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi, James,
On 7/28/23 09:42, James Morse wrote:
> MPAM's PMG bits extend its PARTID space, meaning the same PMG value can be
> used for different control groups.
>
> This means once a CLOSID is allocated, all its monitoring ids may still be
> dirty, and held in limbo.
>
> Keep track of the number of RMID held in limbo that each CLOSID has. This will
> allow a future helper to find the 'cleanest' CLOSID when allocating.
>
> The array is only needed when CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID is
> defined. This will never be the case on x86.
>
> Signed-off-by: James Morse <james.morse@arm.com>
> ---
> Changes since v4:
> * Moved closid_num_dirty_rmid[] update under entry->busy check
> * Take the mutex in dom_data_init() as the caller doesn't.
> ---
> arch/x86/kernel/cpu/resctrl/monitor.c | 49 +++++++++++++++++++++++----
> 1 file changed, 42 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
> index de91ca781d9f..44addc0126fc 100644
> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
> @@ -43,6 +43,13 @@ struct rmid_entry {
> */
> static LIST_HEAD(rmid_free_lru);
>
Better to add:
#ifdef CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID
> +/**
> + * @closid_num_dirty_rmid The number of dirty RMID each CLOSID has.
> + * Only allocated when CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID is defined.
> + * Indexed by CLOSID. Protected by rdtgroup_mutex.
> + */
> +static int *closid_num_dirty_rmid;
#endif
Then the global variable won't exist on x86, avoiding confusion and saving space.
Some code related to the CONFIG also needs to be changed accordingly.
> +
> /**
> * @rmid_limbo_count count of currently unused but (potentially)
> * dirty RMIDs.
> @@ -285,6 +292,17 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
> return 0;
> }
>
> +static void limbo_release_entry(struct rmid_entry *entry)
> +{
> + lockdep_assert_held(&rdtgroup_mutex);
> +
> + rmid_limbo_count--;
> + list_add_tail(&entry->list, &rmid_free_lru);
> +
> + if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
> + closid_num_dirty_rmid[entry->closid]--;
Maybe define some helpers (along with other similar ones) in resctrl.h
like this:
#ifdef CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID
static inline void closid_num_dirty_rmid_dec(struct rmid_entry *entry)
{
closid_num_dirty_rmid[entry->closid]--;
}
...
#else
static inline void closid_num_dirty_rmid_dec(struct rmid_entry *unused)
{
}
...
#endif
Then directly call the helper here:
+ closid_num_dirty_rmid_dec(entry);
On x86 this is a noop without occupying any space, and the code is cleaner.
> +}
> +
> /*
> * Check the RMIDs that are marked as busy for this domain. If the
> * reported LLC occupancy is below the threshold clear the busy bit and
> @@ -321,10 +339,8 @@ void __check_limbo(struct rdt_domain *d, bool force_free)
>
> if (force_free || !rmid_dirty) {
> clear_bit(idx, d->rmid_busy_llc);
> - if (!--entry->busy) {
> - rmid_limbo_count--;
> - list_add_tail(&entry->list, &rmid_free_lru);
> - }
> + if (!--entry->busy)
> + limbo_release_entry(entry);
> }
> cur_idx = idx + 1;
> }
> @@ -391,6 +407,8 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
> u64 val = 0;
> u32 idx;
>
> + lockdep_assert_held(&rdtgroup_mutex);
> +
> idx = resctrl_arch_rmid_idx_encode(entry->closid, entry->rmid);
>
> entry->busy = 0;
> @@ -416,9 +434,11 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
> }
> put_cpu();
>
> - if (entry->busy)
> + if (entry->busy) {
> rmid_limbo_count++;
> - else
> + if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
> + closid_num_dirty_rmid[entry->closid]++;
Ditto.
> + } else
> list_add_tail(&entry->list, &rmid_free_lru);
> }
>
> @@ -782,13 +802,28 @@ void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms)
> static int dom_data_init(struct rdt_resource *r)
> {
> u32 idx_limit = resctrl_arch_system_num_rmid_idx();
> + u32 num_closid = resctrl_arch_get_num_closid(r);
> struct rmid_entry *entry = NULL;
> u32 idx;
> int i;
>
> + if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) {
> + int *tmp;
> +
> + tmp = kcalloc(num_closid, sizeof(int), GFP_KERNEL);
> + if (!tmp)
> + return -ENOMEM;
> +
> + mutex_lock(&rdtgroup_mutex);
dom_data_init() is called in __init. No need to lock here, right?
> + closid_num_dirty_rmid = tmp;
> + mutex_unlock(&rdtgroup_mutex);
> + }
> +
This code can also be defined as a helper in resctrl.h.
> rmid_ptrs = kcalloc(idx_limit, sizeof(struct rmid_entry), GFP_KERNEL);
> - if (!rmid_ptrs)
> + if (!rmid_ptrs) {
> + kfree(closid_num_dirty_rmid);
> return -ENOMEM;
> + }
>
> for (i = 0; i < idx_limit; i++) {
> entry = &rmid_ptrs[i];
Thanks.
-Fenghua
* Re: [PATCH v5 06/24] x86/resctrl: Track the number of dirty RMID a CLOSID has
2023-08-15 2:37 ` Fenghua Yu
@ 2023-08-24 16:53 ` James Morse
0 siblings, 0 replies; 77+ messages in thread
From: James Morse @ 2023-08-24 16:53 UTC (permalink / raw)
To: Fenghua Yu, x86, linux-kernel
Cc: Reinette Chatre, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi Fenghua
On 15/08/2023 03:37, Fenghua Yu wrote:
> On 7/28/23 09:42, James Morse wrote:
>> MPAM's PMG bits extend its PARTID space, meaning the same PMG value can be
>> used for different control groups.
>>
>> This means once a CLOSID is allocated, all its monitoring ids may still be
>> dirty, and held in limbo.
>>
>> Keep track of the number of RMID held in limbo that each CLOSID has. This will
>> allow a future helper to find the 'cleanest' CLOSID when allocating.
>>
>> The array is only needed when CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID is
>> defined. This will never be the case on x86.
>> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
>> index de91ca781d9f..44addc0126fc 100644
>> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
>> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
>> @@ -43,6 +43,13 @@ struct rmid_entry {
>> */
>> static LIST_HEAD(rmid_free_lru);
>>
>
> Better to add:
>
> #ifdef CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID
>> +/**
>> + * @closid_num_dirty_rmid The number of dirty RMID each CLOSID has.
>> + * Only allocated when CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID is defined.
>> + * Indexed by CLOSID. Protected by rdtgroup_mutex.
>> + */
>> +static int *closid_num_dirty_rmid;
> #endif
>
> Then the global variable won't exist on x86, avoiding confusion and saving space.
>
> Some code related to the CONFIG also needs to be changed accordingly.
Uh-huh, that would force me to put #ifdef warts all over the code that accesses that variable.
Modern compilers are really smart. Because this is static, the compiler is free to remove
it if there are no users. All the users are behind if (IS_ENABLED()), meaning the compiler's
dead-code elimination will cull the lot, and this variable too:
morse@eglon:~/kernel/mpam/build_x86_64/fs/resctrl$ nm -s monitor.o | grep closid_num_dirty
morse@eglon:~/kernel/mpam/build_arm64/fs/resctrl$ nm -s monitor.o | grep closid_num_dirty
0000000000000000 b closid_num_dirty_rmid
morse@eglon:~/kernel/mpam/build_arm64/fs/resctrl$
Using #ifdef is not only ugly - it prevents the compiler from seeing all the code, so the
CI build systems get worse coverage.
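To illustrate, a sketch of the pattern (not a quote from the patch - the helper name
here is made up): with the guard written as a C 'if', both branches are still parsed
and type-checked, but the constant condition lets the optimiser delete the branch and
then the now-unreferenced static variable:

| static int *closid_num_dirty_rmid;
|
| static void dec_dirty_count(u32 closid)
| {
|	/* Constant-folded away when the Kconfig option is off */
|	if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
|		closid_num_dirty_rmid[closid]--;
| }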
>> +
>> /**
>> * @rmid_limbo_count count of currently unused but (potentially)
>> * dirty RMIDs.
>> @@ -285,6 +292,17 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct
>> rdt_domain *d,
>> return 0;
>> }
>> +static void limbo_release_entry(struct rmid_entry *entry)
>> +{
>> + lockdep_assert_held(&rdtgroup_mutex);
>> +
>> + rmid_limbo_count--;
>> + list_add_tail(&entry->list, &rmid_free_lru);
>> +
>> + if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
>> + closid_num_dirty_rmid[entry->closid]--;
>
>
> Maybe define some helpers (along with other similar ones) in resctrl.h like this:
>
> #ifdef CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID
> static inline void closid_num_dirty_rmid_dec(struct rmid_entry *entry)
> {
> closid_num_dirty_rmid[entry->closid]--;
> }
> ...
> #else
> static inline void closid_num_dirty_rmid_dec(struct rmid_entry *unused)
> {
> }
> ...
> #endif
>
> Then directly call the helper here:
>
> + closid_num_dirty_rmid_dec(entry);
>
> On x86 this is a noop without
and the compiler knows this.
> occupying any space
Literally more lines of code.
> and the code is cleaner.
Maybe, this would hide the IS_ENABLED() check - but moving that out as a single-use helper
would require closid_num_dirty_rmid[] to be exported from this file - which would prevent
it being optimised out. You'd get the result you were trying to avoid.
>> +}
>> +
>> /*
>> * Check the RMIDs that are marked as busy for this domain. If the
>> * reported LLC occupancy is below the threshold clear the busy bit and
>> @@ -321,10 +339,8 @@ void __check_limbo(struct rdt_domain *d, bool force_free)
>> if (force_free || !rmid_dirty) {
>> clear_bit(idx, d->rmid_busy_llc);
>> - if (!--entry->busy) {
>> - rmid_limbo_count--;
>> - list_add_tail(&entry->list, &rmid_free_lru);
>> - }
>> + if (!--entry->busy)
>> + limbo_release_entry(entry);
>> }
>> cur_idx = idx + 1;
>> }
>> @@ -391,6 +407,8 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
>> u64 val = 0;
>> u32 idx;
>> + lockdep_assert_held(&rdtgroup_mutex);
>> +
>> idx = resctrl_arch_rmid_idx_encode(entry->closid, entry->rmid);
>> entry->busy = 0;
>> @@ -416,9 +434,11 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
>> }
>> put_cpu();
>> - if (entry->busy)
>> + if (entry->busy) {
>> rmid_limbo_count++;
>> - else
>> + if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
>> + closid_num_dirty_rmid[entry->closid]++;
>
> Ditto.
>
>> + } else
>> list_add_tail(&entry->list, &rmid_free_lru);
>> }
>> @@ -782,13 +802,28 @@ void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned
>> long delay_ms)
>> static int dom_data_init(struct rdt_resource *r)
>> {
>> u32 idx_limit = resctrl_arch_system_num_rmid_idx();
>> + u32 num_closid = resctrl_arch_get_num_closid(r);
>> struct rmid_entry *entry = NULL;
>> u32 idx;
>> int i;
>> + if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) {
>> + int *tmp;
>> +
>> + tmp = kcalloc(num_closid, sizeof(int), GFP_KERNEL);
>> + if (!tmp)
>> + return -ENOMEM;
>> +
>> + mutex_lock(&rdtgroup_mutex);
> dom_data_init() is called in __init. No need to lock here, right?
__init code can still race with other callers - especially as there are
CPUHP_AP_ONLINE_DYN cpuhp callbacks that are expected to sleep.
This is about ensuring all accesses to those global variables are protected by the lock.
This saves me a memory ordering headache.
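For comparison, a rough sketch of what the lock-free alternative would need (not code
from the patch): publishing the array without the mutex would mean pairing a release
store with an acquire load at every reader:

| /* writer, in dom_data_init() */
| smp_store_release(&closid_num_dirty_rmid, tmp);
|
| /* every reader */
| int *num_dirty = smp_load_acquire(&closid_num_dirty_rmid);

Taking rdtgroup_mutex around both sides avoids all of that.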
Thanks,
James
* [PATCH v5 07/24] x86/resctrl: Use set_bit()/clear_bit() instead of open coding
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
` (5 preceding siblings ...)
2023-07-28 16:42 ` [PATCH v5 06/24] x86/resctrl: Track the number of dirty RMID a CLOSID has James Morse
@ 2023-07-28 16:42 ` James Morse
2023-07-28 16:42 ` [PATCH v5 08/24] x86/resctrl: Allocate the cleanest CLOSID by searching closid_num_dirty_rmid James Morse
` (18 subsequent siblings)
25 siblings, 0 replies; 77+ messages in thread
From: James Morse @ 2023-07-28 16:42 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
The resctrl CLOSID allocator uses a single 32bit word to track which
CLOSID are free. The setting and clearing of bits is open coded.
A subsequent patch adds resctrl_closid_is_free(), which adds more
open-coded bitmap operations. These will eventually need changing to use
the bitops helpers so that a CLOSID bitmap of the correct size can be
allocated dynamically.
Convert the existing open coded bit manipulations of closid_free_map
to use set_bit() and friends.
Signed-off-by: James Morse <james.morse@arm.com>
---
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index b97e119dbe46..4ab9bb018c17 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -106,7 +106,7 @@ void rdt_staged_configs_clear(void)
* - Our choices on how to configure each resource become progressively more
* limited as the number of resources grows.
*/
-static int closid_free_map;
+static unsigned long closid_free_map;
static int closid_free_map_len;
int closids_supported(void)
@@ -126,7 +126,7 @@ static void closid_init(void)
closid_free_map = BIT_MASK(rdt_min_closid) - 1;
/* CLOSID 0 is always reserved for the default group */
- closid_free_map &= ~1;
+ clear_bit(0, &closid_free_map);
closid_free_map_len = rdt_min_closid;
}
@@ -137,14 +137,14 @@ static int closid_alloc(void)
if (closid == 0)
return -ENOSPC;
closid--;
- closid_free_map &= ~(1 << closid);
+ clear_bit(closid, &closid_free_map);
return closid;
}
void closid_free(int closid)
{
- closid_free_map |= 1 << closid;
+ set_bit(closid, &closid_free_map);
}
/**
@@ -156,7 +156,7 @@ void closid_free(int closid)
*/
static bool closid_allocated(unsigned int closid)
{
- return (closid_free_map & (1 << closid)) == 0;
+ return !test_bit(closid, &closid_free_map);
}
/**
--
2.39.2
* [PATCH v5 08/24] x86/resctrl: Allocate the cleanest CLOSID by searching closid_num_dirty_rmid
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
` (6 preceding siblings ...)
2023-07-28 16:42 ` [PATCH v5 07/24] x86/resctrl: Use set_bit()/clear_bit() instead of open coding James Morse
@ 2023-07-28 16:42 ` James Morse
2023-08-15 2:59 ` Fenghua Yu
2023-07-28 16:42 ` [PATCH v5 09/24] x86/resctrl: Move CLOSID/RMID matching and setting to use helpers James Morse
` (17 subsequent siblings)
25 siblings, 1 reply; 77+ messages in thread
From: James Morse @ 2023-07-28 16:42 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
MPAM's PMG bits extend its PARTID space, meaning the same PMG value can be
used for different control groups.
This means once a CLOSID is allocated, all its monitoring ids may still be
dirty, and held in limbo.
Instead of allocating the first free CLOSID, on architectures where
CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID is enabled, search
closid_num_dirty_rmid[] to find the cleanest CLOSID.
The CLOSID found is returned to closid_alloc() for the free list
to be updated.
Signed-off-by: James Morse <james.morse@arm.com>
---
Changes since v4:
* Dropped stale section from comment
---
arch/x86/kernel/cpu/resctrl/internal.h | 2 ++
arch/x86/kernel/cpu/resctrl/monitor.c | 42 ++++++++++++++++++++++++++
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 19 +++++++++---
3 files changed, 58 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index 94749ee950dd..7c2a1c235480 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -557,5 +557,7 @@ void rdt_domain_reconfigure_cdp(struct rdt_resource *r);
void __init thread_throttle_mode_init(void);
void __init mbm_config_rftype_init(const char *config);
void rdt_staged_configs_clear(void);
+bool closid_allocated(unsigned int closid);
+int resctrl_find_cleanest_closid(void);
#endif /* _ASM_X86_RESCTRL_INTERNAL_H */
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index 44addc0126fc..c268aa5925c7 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -379,6 +379,48 @@ static struct rmid_entry *resctrl_find_free_rmid(u32 closid)
return ERR_PTR(-ENOSPC);
}
+/**
+ * resctrl_find_cleanest_closid() - Find a CLOSID where all the associated
+ * RMID are clean, or the CLOSID that has
+ * the most clean RMID.
+ *
+ * MPAM's equivalent of RMID are per-CLOSID, meaning a freshly allocated CLOSID
+ * may not be able to allocate clean RMID. To avoid this the allocator will
+ * choose the CLOSID with the most clean RMID.
+ *
+ * When the CLOSID and RMID are independent numbers, the first free CLOSID will
+ * be returned.
+ */
+int resctrl_find_cleanest_closid(void)
+{
+ u32 cleanest_closid = ~0, iter_num_dirty;
+ int i = 0;
+
+ lockdep_assert_held(&rdtgroup_mutex);
+
+ if (!IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
+ return -EIO;
+
+ for (i = 0; i < closids_supported(); i++) {
+ if (closid_allocated(i))
+ continue;
+
+ iter_num_dirty = closid_num_dirty_rmid[i];
+ if (iter_num_dirty == 0)
+ return i;
+
+ if (cleanest_closid == ~0)
+ cleanest_closid = i;
+
+ if (iter_num_dirty < closid_num_dirty_rmid[cleanest_closid])
+ cleanest_closid = i;
+ }
+
+ if (cleanest_closid == ~0)
+ return -ENOSPC;
+ return cleanest_closid;
+}
+
/*
* For MPAM the RMID value is not unique, and has to be considered with
* the CLOSID. The (CLOSID, RMID) pair is allocated on all domains, which
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 4ab9bb018c17..df28b81d2c9c 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -132,11 +132,20 @@ static void closid_init(void)
static int closid_alloc(void)
{
- u32 closid = ffs(closid_free_map);
+ u32 closid;
+ int err;
- if (closid == 0)
- return -ENOSPC;
- closid--;
+ if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) {
+ err = resctrl_find_cleanest_closid();
+ if (err < 0)
+ return err;
+ closid = err;
+ } else {
+ closid = ffs(closid_free_map);
+ if (closid == 0)
+ return -ENOSPC;
+ closid--;
+ }
clear_bit(closid, &closid_free_map);
return closid;
@@ -154,7 +163,7 @@ void closid_free(int closid)
* Return: true if @closid is currently associated with a resource group,
* false if @closid is free
*/
-static bool closid_allocated(unsigned int closid)
+bool closid_allocated(unsigned int closid)
{
return !test_bit(closid, &closid_free_map);
}
--
2.39.2
* Re: [PATCH v5 08/24] x86/resctrl: Allocate the cleanest CLOSID by searching closid_num_dirty_rmid
2023-07-28 16:42 ` [PATCH v5 08/24] x86/resctrl: Allocate the cleanest CLOSID by searching closid_num_dirty_rmid James Morse
@ 2023-08-15 2:59 ` Fenghua Yu
2023-08-24 16:54 ` James Morse
0 siblings, 1 reply; 77+ messages in thread
From: Fenghua Yu @ 2023-08-15 2:59 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Reinette Chatre, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi, James,
On 7/28/23 09:42, James Morse wrote:
> MPAM's PMG bits extend its PARTID space, meaning the same PMG value can be
> used for different control groups.
>
> This means once a CLOSID is allocated, all its monitoring ids may still be
> dirty, and held in limbo.
>
> Instead of allocating the first free CLOSID, on architectures where
> CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID is enabled, search
> closid_num_dirty_rmid[] to find the cleanest CLOSID.
>
> The CLOSID found is returned to closid_alloc() for the free list
> to be updated.
>
> Signed-off-by: James Morse <james.morse@arm.com>
> ---
> Changes since v4:
> * Dropped stale section from comment
> ---
> arch/x86/kernel/cpu/resctrl/internal.h | 2 ++
> arch/x86/kernel/cpu/resctrl/monitor.c | 42 ++++++++++++++++++++++++++
> arch/x86/kernel/cpu/resctrl/rdtgroup.c | 19 +++++++++---
> 3 files changed, 58 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
> index 94749ee950dd..7c2a1c235480 100644
> --- a/arch/x86/kernel/cpu/resctrl/internal.h
> +++ b/arch/x86/kernel/cpu/resctrl/internal.h
> @@ -557,5 +557,7 @@ void rdt_domain_reconfigure_cdp(struct rdt_resource *r);
> void __init thread_throttle_mode_init(void);
> void __init mbm_config_rftype_init(const char *config);
> void rdt_staged_configs_clear(void);
> +bool closid_allocated(unsigned int closid);
> +int resctrl_find_cleanest_closid(void);
>
> #endif /* _ASM_X86_RESCTRL_INTERNAL_H */
> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
> index 44addc0126fc..c268aa5925c7 100644
> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
> @@ -379,6 +379,48 @@ static struct rmid_entry *resctrl_find_free_rmid(u32 closid)
> return ERR_PTR(-ENOSPC);
> }
>
> +/**
> + * resctrl_find_cleanest_closid() - Find a CLOSID where all the associated
> + * RMID are clean, or the CLOSID that has
> + * the most clean RMID.
> + *
> + * MPAM's equivalent of RMID are per-CLOSID, meaning a freshly allocated CLOSID
> + * may not be able to allocate clean RMID. To avoid this the allocator will
> + * choose the CLOSID with the most clean RMID.
> + *
> + * When the CLOSID and RMID are independent numbers, the first free CLOSID will
> + * be returned.
> + */
> +int resctrl_find_cleanest_closid(void)
> +{
> + u32 cleanest_closid = ~0, iter_num_dirty;
> + int i = 0;
> +
> + lockdep_assert_held(&rdtgroup_mutex);
> +
> + if (!IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
> + return -EIO;
> +
> + for (i = 0; i < closids_supported(); i++) {
> + if (closid_allocated(i))
> + continue;
> +
> + iter_num_dirty = closid_num_dirty_rmid[i];
> + if (iter_num_dirty == 0)
> + return i;
> +
> + if (cleanest_closid == ~0)
> + cleanest_closid = i;
> +
> + if (iter_num_dirty < closid_num_dirty_rmid[cleanest_closid])
> + cleanest_closid = i;
> + }
> +
> + if (cleanest_closid == ~0)
> + return -ENOSPC;
> + return cleanest_closid;
> +}
> +
resctrl_find_cleanest_closid() is not empty on x86 after compilation,
although it's very short. After all, the function is irrelevant to x86
and could be completely empty on x86.
If the function is put in
#ifdef CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID
resctrl_find_cleanest_closid()
...
#else
resctrl_find_cleanest_closid() {}
#endif
It's fully empty on x86.
> /*
> * For MPAM the RMID value is not unique, and has to be considered with
> * the CLOSID. The (CLOSID, RMID) pair is allocated on all domains, which
> diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> index 4ab9bb018c17..df28b81d2c9c 100644
> --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> @@ -132,11 +132,20 @@ static void closid_init(void)
>
> static int closid_alloc(void)
> {
> - u32 closid = ffs(closid_free_map);
> + u32 closid;
> + int err;
>
> - if (closid == 0)
> - return -ENOSPC;
> - closid--;
> + if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) {
> + err = resctrl_find_cleanest_closid();
> + if (err < 0)
> + return err;
> + closid = err;
> + } else {
> + closid = ffs(closid_free_map);
> + if (closid == 0)
> + return -ENOSPC;
> + closid--;
> + }
> clear_bit(closid, &closid_free_map);
>
> return closid;
> @@ -154,7 +163,7 @@ void closid_free(int closid)
> * Return: true if @closid is currently associated with a resource group,
> * false if @closid is free
> */
> -static bool closid_allocated(unsigned int closid)
> +bool closid_allocated(unsigned int closid)
> {
> return !test_bit(closid, &closid_free_map);
> }
Thanks.
-Fenghua
* Re: [PATCH v5 08/24] x86/resctrl: Allocate the cleanest CLOSID by searching closid_num_dirty_rmid
2023-08-15 2:59 ` Fenghua Yu
@ 2023-08-24 16:54 ` James Morse
0 siblings, 0 replies; 77+ messages in thread
From: James Morse @ 2023-08-24 16:54 UTC (permalink / raw)
To: Fenghua Yu, x86, linux-kernel
Cc: Reinette Chatre, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi Fenghua,
On 15/08/2023 03:59, Fenghua Yu wrote:
> On 7/28/23 09:42, James Morse wrote:
>> MPAM's PMG bits extend its PARTID space, meaning the same PMG value can be
>> used for different control groups.
>>
>> This means once a CLOSID is allocated, all its monitoring ids may still be
>> dirty, and held in limbo.
>>
>> Instead of allocating the first free CLOSID, on architectures where
>> CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID is enabled, search
>> closid_num_dirty_rmid[] to find the cleanest CLOSID.
>>
>> The CLOSID found is returned to closid_alloc() for the free list
>> to be updated.
>> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
>> index 44addc0126fc..c268aa5925c7 100644
>> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
>> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
>> @@ -379,6 +379,48 @@ static struct rmid_entry *resctrl_find_free_rmid(u32 closid)
>> return ERR_PTR(-ENOSPC);
>> }
>> +/**
>> + * resctrl_find_cleanest_closid() - Find a CLOSID where all the associated
>> + * RMID are clean, or the CLOSID that has
>> + * the most clean RMID.
>> + *
>> + * MPAM's equivalent of RMID are per-CLOSID, meaning a freshly allocated CLOSID
>> + * may not be able to allocate clean RMID. To avoid this the allocator will
>> + * choose the CLOSID with the most clean RMID.
>> + *
>> + * When the CLOSID and RMID are independent numbers, the first free CLOSID will
>> + * be returned.
>> + */
>> +int resctrl_find_cleanest_closid(void)
>> +{
>> + u32 cleanest_closid = ~0, iter_num_dirty;
>> + int i = 0;
>> +
>> + lockdep_assert_held(&rdtgroup_mutex);
>> +
>> + if (!IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
>> + return -EIO;
>> +
>> + for (i = 0; i < closids_supported(); i++) {
>> + if (closid_allocated(i))
>> + continue;
>> +
>> + iter_num_dirty = closid_num_dirty_rmid[i];
>> + if (iter_num_dirty == 0)
>> + return i;
>> +
>> + if (cleanest_closid == ~0)
>> + cleanest_closid = i;
>> +
>> + if (iter_num_dirty < closid_num_dirty_rmid[cleanest_closid])
>> + cleanest_closid = i;
>> + }
>> +
>> + if (cleanest_closid == ~0)
>> + return -ENOSPC;
>> + return cleanest_closid;
>> +}
>> +
>
> resctrl_find_cleanest_closid() is not empty on x86 after compilation, although it's very
> short. After all, the function is irrelevant to x86 and could be completely empty on x86.
>
> If put the function in
> #ifdef CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID
> resctrl_find_cleanest_closid()
> ...
> #else
> resctrl_find_cleanest_closid() {}
> #endif
>
> It's fully empty on x86.
I think you forgot the return type. You'd still need to return an error in the stub -
which is what the existing function will be reduced to by the compiler's dead-code elimination.
Here is the existing function on x86:
| 0000000000000680 <resctrl_find_cleanest_closid>:
| 680: f3 0f 1e fa endbr64
| 684: b8 fb ff ff ff mov $0xfffffffb,%eax
| 689: e9 00 00 00 00 jmp 68e <resctrl_find_cleanest_closid+0xe>
| 68e: 66 90 xchg %ax,%ax
| 690: 90 nop
| 691: 90 nop
[and quite a few more nops]
and here is the stub you propose:
| int resctrl_find_cleanest_closid_as_a_stub(void)
| {
| return -EIO;
| }
which builds as:
| 00000000000006a0 <resctrl_find_cleanest_closid_as_a_stub>:
| 6a0: f3 0f 1e fa endbr64
| 6a4: b8 fb ff ff ff mov $0xfffffffb,%eax
| 6a9: e9 00 00 00 00 jmp 6ae <resctrl_find_cleanest_closid_as_a_s>
| 6ae: 66 90 xchg %ax,%ax
| 6b0: 90 nop
| 6b1: 90 nop
[and quite a few more nops]
The only difference is that the #ifdeffery makes this hard to read, and means CI systems
need extra Kconfig files to get good coverage of this code.
It's not the 90s anymore: no-one wants to see an #ifdef!
James
* [PATCH v5 09/24] x86/resctrl: Move CLOSID/RMID matching and setting to use helpers
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
` (7 preceding siblings ...)
2023-07-28 16:42 ` [PATCH v5 08/24] x86/resctrl: Allocate the cleanest CLOSID by searching closid_num_dirty_rmid James Morse
@ 2023-07-28 16:42 ` James Morse
2023-07-28 16:42 ` [PATCH v5 10/24] tick/nohz: Move tick_nohz_full_mask declaration outside the #ifdef James Morse
` (16 subsequent siblings)
25 siblings, 0 replies; 77+ messages in thread
From: James Morse @ 2023-07-28 16:42 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
When switching tasks, the CLOSID and RMID that the new task should
use are stored in struct task_struct. For x86 the CLOSID known by resctrl,
the value in task_struct, and the value written to the CPU register are
all the same thing.
MPAM's CPU interface has two different PARTIDs: one for data accesses,
the other for instruction fetch. Storing resctrl's CLOSID value in
struct task_struct implies the arch code knows whether resctrl is using
CDP.
Move the matching and setting of the struct task_struct properties
to use helpers. This allows arm64 to store the hardware format of
the register, instead of having to convert it each time.
__rdtgroup_move_task()'s use of READ_ONCE()/WRITE_ONCE() ensures torn
values aren't seen, as another CPU may schedule the task being moved
while the values are being changed. MPAM has an additional corner-case
here as the PMG bits extend the PARTID space. If the scheduler sees a
new CLOSID but an old RMID, the task will dirty an RMID that the limbo
code is not watching, causing an inaccurate count. x86's RMID are
independent values, so the limbo code will still be watching the old
RMID in this circumstance.
To avoid this, arm64 needs the CLOSID and RMID to be WRITE_ONCE()d
together; both values must be provided together.
Because MPAM's RMID values are not unique, the CLOSID must be provided
when matching the RMID.
Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
Signed-off-by: James Morse <james.morse@arm.com>
---
Changes since v2:
* __rdtgroup_move_task() changed to set CLOSID from different CLOSID place
depending on group type
---
arch/x86/include/asm/resctrl.h | 18 ++++++++
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 62 ++++++++++++++++----------
2 files changed, 56 insertions(+), 24 deletions(-)
diff --git a/arch/x86/include/asm/resctrl.h b/arch/x86/include/asm/resctrl.h
index 9510c23db62d..66d9e18cdc61 100644
--- a/arch/x86/include/asm/resctrl.h
+++ b/arch/x86/include/asm/resctrl.h
@@ -95,6 +95,24 @@ static inline unsigned int resctrl_arch_round_mon_val(unsigned int val)
return val * scale;
}
+static inline void resctrl_arch_set_closid_rmid(struct task_struct *tsk,
+ u32 closid, u32 rmid)
+{
+ WRITE_ONCE(tsk->closid, closid);
+ WRITE_ONCE(tsk->rmid, rmid);
+}
+
+static inline bool resctrl_arch_match_closid(struct task_struct *tsk, u32 closid)
+{
+ return READ_ONCE(tsk->closid) == closid;
+}
+
+static inline bool resctrl_arch_match_rmid(struct task_struct *tsk, u32 ignored,
+ u32 rmid)
+{
+ return READ_ONCE(tsk->rmid) == rmid;
+}
+
static inline void resctrl_sched_in(struct task_struct *tsk)
{
if (static_branch_likely(&rdt_enable_key))
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index df28b81d2c9c..775f6bede6f8 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -97,7 +97,7 @@ void rdt_staged_configs_clear(void)
*
* Using a global CLOSID across all resources has some advantages and
* some drawbacks:
- * + We can simply set "current->closid" to assign a task to a resource
+ * + We can simply set current's closid to assign a task to a resource
* group.
* + Context switch code can avoid extra memory references deciding which
* CLOSID to load into the PQR_ASSOC MSR
@@ -563,14 +563,26 @@ static void update_task_closid_rmid(struct task_struct *t)
_update_task_closid_rmid(t);
}
+static bool task_in_rdtgroup(struct task_struct *tsk, struct rdtgroup *rdtgrp)
+{
+ u32 closid, rmid = rdtgrp->mon.rmid;
+
+ if (rdtgrp->type == RDTCTRL_GROUP)
+ closid = rdtgrp->closid;
+ else if (rdtgrp->type == RDTMON_GROUP)
+ closid = rdtgrp->mon.parent->closid;
+ else
+ return false;
+
+ return resctrl_arch_match_closid(tsk, closid) &&
+ resctrl_arch_match_rmid(tsk, closid, rmid);
+}
+
static int __rdtgroup_move_task(struct task_struct *tsk,
struct rdtgroup *rdtgrp)
{
/* If the task is already in rdtgrp, no need to move the task. */
- if ((rdtgrp->type == RDTCTRL_GROUP && tsk->closid == rdtgrp->closid &&
- tsk->rmid == rdtgrp->mon.rmid) ||
- (rdtgrp->type == RDTMON_GROUP && tsk->rmid == rdtgrp->mon.rmid &&
- tsk->closid == rdtgrp->mon.parent->closid))
+ if (task_in_rdtgroup(tsk, rdtgrp))
return 0;
/*
@@ -581,19 +593,19 @@ static int __rdtgroup_move_task(struct task_struct *tsk,
* For monitor groups, can move the tasks only from
* their parent CTRL group.
*/
-
- if (rdtgrp->type == RDTCTRL_GROUP) {
- WRITE_ONCE(tsk->closid, rdtgrp->closid);
- WRITE_ONCE(tsk->rmid, rdtgrp->mon.rmid);
- } else if (rdtgrp->type == RDTMON_GROUP) {
- if (rdtgrp->mon.parent->closid == tsk->closid) {
- WRITE_ONCE(tsk->rmid, rdtgrp->mon.rmid);
- } else {
- rdt_last_cmd_puts("Can't move task to different control group\n");
- return -EINVAL;
- }
+ if (rdtgrp->type == RDTMON_GROUP &&
+ !resctrl_arch_match_closid(tsk, rdtgrp->mon.parent->closid)) {
+ rdt_last_cmd_puts("Can't move task to different control group\n");
+ return -EINVAL;
}
+ if (rdtgrp->type == RDTMON_GROUP)
+ resctrl_arch_set_closid_rmid(tsk, rdtgrp->mon.parent->closid,
+ rdtgrp->mon.rmid);
+ else
+ resctrl_arch_set_closid_rmid(tsk, rdtgrp->closid,
+ rdtgrp->mon.rmid);
+
/*
* Ensure the task's closid and rmid are written before determining if
* the task is current that will decide if it will be interrupted.
@@ -615,14 +627,15 @@ static int __rdtgroup_move_task(struct task_struct *tsk,
static bool is_closid_match(struct task_struct *t, struct rdtgroup *r)
{
- return (rdt_alloc_capable &&
- (r->type == RDTCTRL_GROUP) && (t->closid == r->closid));
+ return (rdt_alloc_capable && (r->type == RDTCTRL_GROUP) &&
+ resctrl_arch_match_closid(t, r->closid));
}
static bool is_rmid_match(struct task_struct *t, struct rdtgroup *r)
{
- return (rdt_mon_capable &&
- (r->type == RDTMON_GROUP) && (t->rmid == r->mon.rmid));
+ return (rdt_mon_capable && (r->type == RDTMON_GROUP) &&
+ resctrl_arch_match_rmid(t, r->mon.parent->closid,
+ r->mon.rmid));
}
/**
@@ -822,7 +835,7 @@ int proc_resctrl_show(struct seq_file *s, struct pid_namespace *ns,
rdtg->mode != RDT_MODE_EXCLUSIVE)
continue;
- if (rdtg->closid != tsk->closid)
+ if (!resctrl_arch_match_closid(tsk, rdtg->closid))
continue;
seq_printf(s, "res:%s%s\n", (rdtg == &rdtgroup_default) ? "/" : "",
@@ -830,7 +843,8 @@ int proc_resctrl_show(struct seq_file *s, struct pid_namespace *ns,
seq_puts(s, "mon:");
list_for_each_entry(crg, &rdtg->mon.crdtgrp_list,
mon.crdtgrp_list) {
- if (tsk->rmid != crg->mon.rmid)
+ if (!resctrl_arch_match_rmid(tsk, crg->mon.parent->closid,
+ crg->mon.rmid))
continue;
seq_printf(s, "%s", crg->kn->name);
break;
@@ -2691,8 +2705,8 @@ static void rdt_move_group_tasks(struct rdtgroup *from, struct rdtgroup *to,
for_each_process_thread(p, t) {
if (!from || is_closid_match(t, from) ||
is_rmid_match(t, from)) {
- WRITE_ONCE(t->closid, to->closid);
- WRITE_ONCE(t->rmid, to->mon.rmid);
+ resctrl_arch_set_closid_rmid(t, to->closid,
+ to->mon.rmid);
/*
* Order the closid/rmid stores above before the loads
--
2.39.2
* [PATCH v5 10/24] tick/nohz: Move tick_nohz_full_mask declaration outside the #ifdef
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
` (8 preceding siblings ...)
2023-07-28 16:42 ` [PATCH v5 09/24] x86/resctrl: Move CLOSID/RMID matching and setting to use helpers James Morse
@ 2023-07-28 16:42 ` James Morse
2023-08-09 22:34 ` Reinette Chatre
2023-07-28 16:42 ` [PATCH v5 11/24] x86/resctrl: Add cpumask_any_housekeeping() for limbo/overflow James Morse
` (15 subsequent siblings)
25 siblings, 1 reply; 77+ messages in thread
From: James Morse @ 2023-07-28 16:42 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
tick_nohz_full_mask lists the CPUs that are nohz_full. This is only
needed when CONFIG_NO_HZ_FULL is defined. tick_nohz_full_cpu() allows
a specific CPU to be tested against the mask, and evaluates to false
when CONFIG_NO_HZ_FULL is not defined.
The resctrl code needs to pick a CPU to run some work on; a new helper
prefers housekeeping CPUs by examining the tick_nohz_full_mask. Hiding
the declaration behind #ifdef CONFIG_NO_HZ_FULL forces all the users to
be behind an ifdef too.
Move the tick_nohz_full_mask declaration; this lets callers drop the
ifdef, and guard access to tick_nohz_full_mask with IS_ENABLED() or
something like tick_nohz_full_cpu().
The definition does not need to be moved as any callers should be
removed at compile time unless CONFIG_NO_HZ_FULL is defined.
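For example, a caller can now be written as below without any #ifdef (this is the
shape the helper added later in this series uses):

| if (tick_nohz_full_cpu(cpu)) {
|	/* Dead code, and no reference, when CONFIG_NO_HZ_FULL is not defined */
|	hk_cpu = cpumask_nth_andnot(0, mask, tick_nohz_full_mask);
| }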
Signed-off-by: James Morse <james.morse@arm.com>
---
include/linux/tick.h | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/include/linux/tick.h b/include/linux/tick.h
index 9459fef5b857..65af90ca409a 100644
--- a/include/linux/tick.h
+++ b/include/linux/tick.h
@@ -174,9 +174,16 @@ static inline u64 get_cpu_iowait_time_us(int cpu, u64 *unused) { return -1; }
static inline void tick_nohz_idle_stop_tick_protected(void) { }
#endif /* !CONFIG_NO_HZ_COMMON */
+/*
+ * Mask of CPUs that are nohz_full.
+ *
+ * Users should be guarded by CONFIG_NO_HZ_FULL or a tick_nohz_full_cpu()
+ * check.
+ */
+extern cpumask_var_t tick_nohz_full_mask;
+
#ifdef CONFIG_NO_HZ_FULL
extern bool tick_nohz_full_running;
-extern cpumask_var_t tick_nohz_full_mask;
static inline bool tick_nohz_full_enabled(void)
{
--
2.39.2
* Re: [PATCH v5 10/24] tick/nohz: Move tick_nohz_full_mask declaration outside the #ifdef
2023-07-28 16:42 ` [PATCH v5 10/24] tick/nohz: Move tick_nohz_full_mask declaration outside the #ifdef James Morse
@ 2023-08-09 22:34 ` Reinette Chatre
2023-08-24 16:55 ` James Morse
0 siblings, 1 reply; 77+ messages in thread
From: Reinette Chatre @ 2023-08-09 22:34 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi James,
On 7/28/2023 9:42 AM, James Morse wrote:
> tick_nohz_full_mask lists the CPUs that are nohz_full. This is only
> needed when CONFIG_NO_HZ_FULL is defined. tick_nohz_full_cpu() allows
> a specific CPU to be tested against the mask, and evaluates to false
> when CONFIG_NO_HZ_FULL is not defined.
>
> The resctrl code needs to pick a CPU to run some work on, a new helper
> prefers housekeeping CPUs by examining the tick_nohz_full_mask. Hiding
> the declaration behind #ifdef CONFIG_NO_HZ_FULL forces all the users to
> be behind an ifdef too.
>
> Move the tick_nohz_full_mask declaration, this lets callers drop the
> ifdef, and guard access to tick_nohz_full_mask with IS_ENABLED() or
> something like tick_nohz_full_cpu().
>
> The definition does not need to be moved as any callers should be
> removed at compile time unless CONFIG_NO_HZ_FULL is defined.
>
> Signed-off-by: James Morse <james.morse@arm.com>
> ---
> include/linux/tick.h | 9 ++++++++-
> 1 file changed, 8 insertions(+), 1 deletion(-)
This is outside of the resctrl area. What is the upstreaming
plan for this patch?
Reinette
* Re: [PATCH v5 10/24] tick/nohz: Move tick_nohz_full_mask declaration outside the #ifdef
2023-08-09 22:34 ` Reinette Chatre
@ 2023-08-24 16:55 ` James Morse
2023-08-25 0:43 ` Reinette Chatre
0 siblings, 1 reply; 77+ messages in thread
From: James Morse @ 2023-08-24 16:55 UTC (permalink / raw)
To: Reinette Chatre, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi Reinette,
On 09/08/2023 23:34, Reinette Chatre wrote:
> On 7/28/2023 9:42 AM, James Morse wrote:
>> tick_nohz_full_mask lists the CPUs that are nohz_full. This is only
>> needed when CONFIG_NO_HZ_FULL is defined. tick_nohz_full_cpu() allows
>> a specific CPU to be tested against the mask, and evaluates to false
>> when CONFIG_NO_HZ_FULL is not defined.
>>
>> The resctrl code needs to pick a CPU to run some work on, a new helper
>> prefers housekeeping CPUs by examining the tick_nohz_full_mask. Hiding
>> the declaration behind #ifdef CONFIG_NO_HZ_FULL forces all the users to
>> be behind an ifdef too.
>>
>> Move the tick_nohz_full_mask declaration, this lets callers drop the
>> ifdef, and guard access to tick_nohz_full_mask with IS_ENABLED() or
>> something like tick_nohz_full_cpu().
>>
>> The definition does not need to be moved as any callers should be
>> removed at compile time unless CONFIG_NO_HZ_FULL is defined.
>>
>> Signed-off-by: James Morse <james.morse@arm.com>
>> ---
>> include/linux/tick.h | 9 ++++++++-
>> 1 file changed, 8 insertions(+), 1 deletion(-)
>
> This is outside of the resctrl area. What is the upstreaming
> plan for this patch?
Once you're happy with the rest of it - we can give the other folk on CC a poke.
I'd assume changes to this file also go via tip. It would just need an ack from the
relevant person.
Thanks,
James
* Re: [PATCH v5 10/24] tick/nohz: Move tick_nohz_full_mask declaration outside the #ifdef
2023-08-24 16:55 ` James Morse
@ 2023-08-25 0:43 ` Reinette Chatre
2023-09-08 15:58 ` James Morse
0 siblings, 1 reply; 77+ messages in thread
From: Reinette Chatre @ 2023-08-25 0:43 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi James,
On 8/24/2023 9:55 AM, James Morse wrote:
> On 09/08/2023 23:34, Reinette Chatre wrote:
>> On 7/28/2023 9:42 AM, James Morse wrote:
>>> tick_nohz_full_mask lists the CPUs that are nohz_full. This is only
>>> needed when CONFIG_NO_HZ_FULL is defined. tick_nohz_full_cpu() allows
>>> a specific CPU to be tested against the mask, and evaluates to false
>>> when CONFIG_NO_HZ_FULL is not defined.
>>>
>>> The resctrl code needs to pick a CPU to run some work on, a new helper
>>> prefers housekeeping CPUs by examining the tick_nohz_full_mask. Hiding
>>> the declaration behind #ifdef CONFIG_NO_HZ_FULL forces all the users to
>>> be behind an ifdef too.
>>>
>>> Move the tick_nohz_full_mask declaration, this lets callers drop the
>>> ifdef, and guard access to tick_nohz_full_mask with IS_ENABLED() or
>>> something like tick_nohz_full_cpu().
>>>
>>> The definition does not need to be moved as any callers should be
>>> removed at compile time unless CONFIG_NO_HZ_FULL is defined.
>>>
>>> Signed-off-by: James Morse <james.morse@arm.com>
>>> ---
>>> include/linux/tick.h | 9 ++++++++-
>>> 1 file changed, 8 insertions(+), 1 deletion(-)
>>
>> This is outside of the resctrl area. What is the upstreaming
>> plan for this patch?
>
> Once you're happy with the rest of it - we can give the other folk on CC a poke.
> I'd assume changes to this file also go via tip. It would just need an ack from the
> relevant person.
At the moment this change is buried within a pile of resctrl
changes, so we need to make sure that folks are not surprised by this,
thinking we are trying to sneak it in. Please note that
CC is currently missing Frederic Weisbecker.
I wonder if it may help to change cover letter to be something like
"x86/resctrl and tick/nohz: Monitor ..." Just an idea.
Reinette
* Re: [PATCH v5 10/24] tick/nohz: Move tick_nohz_full_mask declaration outside the #ifdef
2023-08-25 0:43 ` Reinette Chatre
@ 2023-09-08 15:58 ` James Morse
0 siblings, 0 replies; 77+ messages in thread
From: James Morse @ 2023-09-08 15:58 UTC (permalink / raw)
To: Reinette Chatre, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi Reinette,
On 8/25/23 01:43, Reinette Chatre wrote:
> On 8/24/2023 9:55 AM, James Morse wrote:
>> On 09/08/2023 23:34, Reinette Chatre wrote:
>>> On 7/28/2023 9:42 AM, James Morse wrote:
>>>> tick_nohz_full_mask lists the CPUs that are nohz_full. This is only
>>>> needed when CONFIG_NO_HZ_FULL is defined. tick_nohz_full_cpu() allows
>>>> a specific CPU to be tested against the mask, and evaluates to false
>>>> when CONFIG_NO_HZ_FULL is not defined.
>>>>
>>>> The resctrl code needs to pick a CPU to run some work on, a new helper
>>>> prefers housekeeping CPUs by examining the tick_nohz_full_mask. Hiding
>>>> the declaration behind #ifdef CONFIG_NO_HZ_FULL forces all the users to
>>>> be behind an ifdef too.
>>>>
>>>> Move the tick_nohz_full_mask declaration, this lets callers drop the
>>>> ifdef, and guard access to tick_nohz_full_mask with IS_ENABLED() or
>>>> something like tick_nohz_full_cpu().
>>>>
>>>> The definition does not need to be moved as any callers should be
>>>> removed at compile time unless CONFIG_NO_HZ_FULL is defined.
>>> This is outside of the resctrl area. What is the upstreaming
>>> plan for this patch?
>>
>> Once you're happy with the rest of it - we can give the other folk on CC a poke.
>> I'd assume changes to this file also go via tip. It would just need an ack from the
>> relevant person.
>
> At the moment this change is buried within a pile of resctrl
> changes so we need to make sure that folks are not surprised by this
> thinking we are trying to sneak it in. Please note that
> CC is currently missing Frederic Weisbecker.
Oops, fixed.
> I wonder if it may help to change cover letter to be something like
> "x86/resctrl and tick/nohz: Monitor ..." Just an idea.
I think that would be excessive - the subject of the patch already matches
what is normal for that file. I'll move the patch to the top of the series
as that makes it clearer that there is no dependency on the rest of the
series.
Thanks,
James
* [PATCH v5 11/24] x86/resctrl: Add cpumask_any_housekeeping() for limbo/overflow
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
` (9 preceding siblings ...)
2023-07-28 16:42 ` [PATCH v5 10/24] tick/nohz: Move tick_nohz_full_mask declaration outside the #ifdef James Morse
@ 2023-07-28 16:42 ` James Morse
2023-07-28 16:42 ` [PATCH v5 12/24] x86/resctrl: Make resctrl_arch_rmid_read() retry when it is interrupted James Morse
` (14 subsequent siblings)
25 siblings, 0 replies; 77+ messages in thread
From: James Morse @ 2023-07-28 16:42 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
The limbo and overflow code picks a CPU to use from the domain's list
of online CPUs. Work is then scheduled on these CPUs to maintain
the limbo list and any counters that may overflow.
cpumask_any() may pick a CPU that is marked nohz_full, which will
either penalise the work that CPU was dedicated to, or delay the
processing of limbo list or counters that may overflow. Perhaps
indefinitely. Delaying the overflow handling will skew the bandwidth
values calculated by mba_sc, which expects to be called once a second.
Add cpumask_any_housekeeping() as a replacement for cpumask_any()
that prefers housekeeping CPUs. This helper will still return
a nohz_full CPU if that is the only option. The CPU to use is
re-evaluated each time the limbo/overflow work runs. This ensures
the work will move off a nohz_full CPU once a housekeeping CPU is
available.
Signed-off-by: James Morse <james.morse@arm.com>
---
Changes since v3:
* typos fixed
Changes since v4:
* Made temporary variables unsigned
---
arch/x86/kernel/cpu/resctrl/internal.h | 23 +++++++++++++++++++++++
arch/x86/kernel/cpu/resctrl/monitor.c | 17 ++++++++++++-----
2 files changed, 35 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index 7c2a1c235480..a32d307292a1 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -7,6 +7,7 @@
#include <linux/kernfs.h>
#include <linux/fs_context.h>
#include <linux/jump_label.h>
+#include <linux/tick.h>
#include <asm/resctrl.h>
#define L3_QOS_CDP_ENABLE 0x01ULL
@@ -55,6 +56,28 @@
/* Max event bits supported */
#define MAX_EVT_CONFIG_BITS GENMASK(6, 0)
+/**
+ * cpumask_any_housekeeping() - Choose any CPU in @mask, preferring those that
+ * aren't marked nohz_full
+ * @mask: The mask to pick a CPU from.
+ *
+ * Returns a CPU in @mask. If there are housekeeping CPUs that don't use
+ * nohz_full, these are preferred.
+ */
+static inline unsigned int cpumask_any_housekeeping(const struct cpumask *mask)
+{
+ unsigned int cpu, hk_cpu;
+
+ cpu = cpumask_any(mask);
+ if (tick_nohz_full_cpu(cpu)) {
+ hk_cpu = cpumask_nth_andnot(0, mask, tick_nohz_full_mask);
+ if (hk_cpu < nr_cpu_ids)
+ cpu = hk_cpu;
+ }
+
+ return cpu;
+}
+
struct rdt_fs_context {
struct kernfs_fs_context kfc;
bool enable_cdpl2;
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index c268aa5925c7..f0670795b446 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -767,9 +767,9 @@ static void mbm_update(struct rdt_resource *r, struct rdt_domain *d,
void cqm_handle_limbo(struct work_struct *work)
{
unsigned long delay = msecs_to_jiffies(CQM_LIMBOCHECK_INTERVAL);
- int cpu = smp_processor_id();
struct rdt_resource *r;
struct rdt_domain *d;
+ int cpu;
mutex_lock(&rdtgroup_mutex);
@@ -778,8 +778,10 @@ void cqm_handle_limbo(struct work_struct *work)
__check_limbo(d, false);
- if (has_busy_rmid(d))
+ if (has_busy_rmid(d)) {
+ cpu = cpumask_any_housekeeping(&d->cpu_mask);
schedule_delayed_work_on(cpu, &d->cqm_limbo, delay);
+ }
mutex_unlock(&rdtgroup_mutex);
}
@@ -789,7 +791,7 @@ void cqm_setup_limbo_handler(struct rdt_domain *dom, unsigned long delay_ms)
unsigned long delay = msecs_to_jiffies(delay_ms);
int cpu;
- cpu = cpumask_any(&dom->cpu_mask);
+ cpu = cpumask_any_housekeeping(&dom->cpu_mask);
dom->cqm_work_cpu = cpu;
schedule_delayed_work_on(cpu, &dom->cqm_limbo, delay);
@@ -799,10 +801,10 @@ void mbm_handle_overflow(struct work_struct *work)
{
unsigned long delay = msecs_to_jiffies(MBM_OVERFLOW_INTERVAL);
struct rdtgroup *prgrp, *crgrp;
- int cpu = smp_processor_id();
struct list_head *head;
struct rdt_resource *r;
struct rdt_domain *d;
+ int cpu;
mutex_lock(&rdtgroup_mutex);
@@ -823,6 +825,11 @@ void mbm_handle_overflow(struct work_struct *work)
update_mba_bw(prgrp, d);
}
+ /*
+ * Re-check for housekeeping CPUs. This allows the overflow handler to
+ * move off a nohz_full CPU quickly.
+ */
+ cpu = cpumask_any_housekeeping(&d->cpu_mask);
schedule_delayed_work_on(cpu, &d->mbm_over, delay);
out_unlock:
@@ -836,7 +843,7 @@ void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms)
if (!static_branch_likely(&rdt_mon_enable_key))
return;
- cpu = cpumask_any(&dom->cpu_mask);
+ cpu = cpumask_any_housekeeping(&dom->cpu_mask);
dom->mbm_work_cpu = cpu;
schedule_delayed_work_on(cpu, &dom->mbm_over, delay);
}
--
2.39.2
* [PATCH v5 12/24] x86/resctrl: Make resctrl_arch_rmid_read() retry when it is interrupted
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
` (10 preceding siblings ...)
2023-07-28 16:42 ` [PATCH v5 11/24] x86/resctrl: Add cpumask_any_housekeeping() for limbo/overflow James Morse
@ 2023-07-28 16:42 ` James Morse
2023-08-09 22:35 ` Reinette Chatre
2023-07-28 16:42 ` [PATCH v5 13/24] x86/resctrl: Queue mon_event_read() instead of sending an IPI James Morse
` (13 subsequent siblings)
25 siblings, 1 reply; 77+ messages in thread
From: James Morse @ 2023-07-28 16:42 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
resctrl_arch_rmid_read() could be called by resctrl in process context,
and then called by the PMU driver from irq context on the same CPU.
This could cause struct arch_mbm_state's prev_msr value to go backwards,
leading to the chunks value being incremented multiple times.
The struct arch_mbm_state holds both the previous msr value, and a count
of the number of chunks. These two fields need to be updated atomically.
Similarly __rmid_read() must write to one MSR and read from another,
this must be proteted from re-entrance.
Read the prev_msr before accessing the hardware, and cmpxchg() the value
back. If the value has changed, the whole thing is re-attempted. To protect
the MSR, __rmid_read() will retry reads for QM_CTR if QM_EVTSEL has changed
from the selected value.
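For clarity, the retry idiom this adds to __rmid_read() is condensed below
(a sketch of the hunk that follows, not a complete function):

        /* Sequence number for writes to IA32 QM_EVTSEL */
        static DEFINE_PER_CPU(u64, qm_evtsel_seq);

        do {
                /* Take a fresh sequence number before programming QM_EVTSEL... */
                seq = this_cpu_inc_return(qm_evtsel_seq);
                wrmsr(MSR_IA32_QM_EVTSEL, eventid, rmid);
                rdmsrl(MSR_IA32_QM_CTR, msr_val);
                /*
                 * ...so if it moved on, an interrupting caller re-programmed
                 * QM_EVTSEL and msr_val may belong to a different event:
                 * discard the value and retry.
                 */
        } while (seq != this_cpu_read(qm_evtsel_seq));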
Signed-off-by: James Morse <james.morse@arm.com>
---
Changes since v4:
* Added retry loop in __rmid_read() to protect the CPU MSRs.
---
arch/x86/kernel/cpu/resctrl/internal.h | 5 +--
arch/x86/kernel/cpu/resctrl/monitor.c | 45 ++++++++++++++++++++------
2 files changed, 38 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index a32d307292a1..7012f42a82ee 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -2,6 +2,7 @@
#ifndef _ASM_X86_RESCTRL_INTERNAL_H
#define _ASM_X86_RESCTRL_INTERNAL_H
+#include <linux/atomic.h>
#include <linux/resctrl.h>
#include <linux/sched.h>
#include <linux/kernfs.h>
@@ -338,8 +339,8 @@ struct mbm_state {
* find this struct.
*/
struct arch_mbm_state {
- u64 chunks;
- u64 prev_msr;
+ atomic64_t chunks;
+ atomic64_t prev_msr;
};
/**
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index f0670795b446..62350bbd23e0 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -16,6 +16,7 @@
*/
#include <linux/module.h>
+#include <linux/percpu.h>
#include <linux/sizes.h>
#include <linux/slab.h>
@@ -24,6 +25,9 @@
#include "internal.h"
+/* Sequence number for writes to IA32 QM_EVTSEL */
+static DEFINE_PER_CPU(u64, qm_evtsel_seq);
+
struct rmid_entry {
/*
* Some architectures' resctrl_arch_rmid_read() needs the CLOSID value
@@ -178,7 +182,7 @@ static inline struct rmid_entry *__rmid_entry(u32 idx)
static int __rmid_read(u32 rmid, enum resctrl_event_id eventid, u64 *val)
{
- u64 msr_val;
+ u64 msr_val, seq;
/*
* As per the SDM, when IA32_QM_EVTSEL.EvtID (bits 7:0) is configured
@@ -187,9 +191,16 @@ static int __rmid_read(u32 rmid, enum resctrl_event_id eventid, u64 *val)
* IA32_QM_CTR.data (bits 61:0) reports the monitored data.
* IA32_QM_CTR.Error (bit 63) and IA32_QM_CTR.Unavailable (bit 62)
* are error bits.
+ * A per-cpu sequence counter is incremented each time QM_EVTSEL is
+ * written. This is used to detect if this function was interrupted by
+ * another call without re-reading the MSRs. Retry the MSR read when
+ * this happens as the QM_CTR value may belong to a different event.
*/
- wrmsr(MSR_IA32_QM_EVTSEL, eventid, rmid);
- rdmsrl(MSR_IA32_QM_CTR, msr_val);
+ do {
+ seq = this_cpu_inc_return(qm_evtsel_seq);
+ wrmsr(MSR_IA32_QM_EVTSEL, eventid, rmid);
+ rdmsrl(MSR_IA32_QM_CTR, msr_val);
+ } while (seq != this_cpu_read(qm_evtsel_seq));
if (msr_val & RMID_VAL_ERROR)
return -EIO;
@@ -225,13 +236,15 @@ void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_domain *d,
{
struct rdt_hw_domain *hw_dom = resctrl_to_arch_dom(d);
struct arch_mbm_state *am;
+ u64 msr_val;
am = get_arch_mbm_state(hw_dom, rmid, eventid);
if (am) {
memset(am, 0, sizeof(*am));
/* Record any initial, non-zero count value. */
- __rmid_read(rmid, eventid, &am->prev_msr);
+ __rmid_read(rmid, eventid, &msr_val);
+ atomic64_set(&am->prev_msr, msr_val);
}
}
@@ -266,23 +279,35 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
{
struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
struct rdt_hw_domain *hw_dom = resctrl_to_arch_dom(d);
+ u64 start_msr_val, old_msr_val, msr_val, chunks;
struct arch_mbm_state *am;
- u64 msr_val, chunks;
- int ret;
+ int ret = 0;
if (!cpumask_test_cpu(smp_processor_id(), &d->cpu_mask))
return -EINVAL;
+interrupted:
+ am = get_arch_mbm_state(hw_dom, rmid, eventid);
+ if (am)
+ start_msr_val = atomic64_read(&am->prev_msr);
+
ret = __rmid_read(rmid, eventid, &msr_val);
if (ret)
return ret;
am = get_arch_mbm_state(hw_dom, rmid, eventid);
if (am) {
- am->chunks += mbm_overflow_count(am->prev_msr, msr_val,
- hw_res->mbm_width);
- chunks = get_corrected_mbm_count(rmid, am->chunks);
- am->prev_msr = msr_val;
+ old_msr_val = atomic64_cmpxchg(&am->prev_msr, start_msr_val,
+ msr_val);
+ if (old_msr_val != start_msr_val)
+ goto interrupted;
+
+ chunks = mbm_overflow_count(start_msr_val, msr_val,
+ hw_res->mbm_width);
+ atomic64_add(chunks, &am->chunks);
+
+ chunks = get_corrected_mbm_count(rmid,
+ atomic64_read(&am->chunks));
} else {
chunks = msr_val;
}
--
2.39.2
^ permalink raw reply related [flat|nested] 77+ messages in thread
* Re: [PATCH v5 12/24] x86/resctrl: Make resctrl_arch_rmid_read() retry when it is interrupted
2023-07-28 16:42 ` [PATCH v5 12/24] x86/resctrl: Make resctrl_arch_rmid_read() retry when it is interrupted James Morse
@ 2023-08-09 22:35 ` Reinette Chatre
2023-08-24 16:55 ` James Morse
0 siblings, 1 reply; 77+ messages in thread
From: Reinette Chatre @ 2023-08-09 22:35 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi James,
On 7/28/2023 9:42 AM, James Morse wrote:
> resctrl_arch_rmid_read() could be called by resctrl in process context,
> and then called by the PMU driver from irq context on the same CPU.
The changelog is written as a bug report of current behavior.
This does not seem to describe current but instead planned future behavior.
> This could cause struct arch_mbm_state's prev_msr value to go backwards,
> leading to the chunks value being incremented multiple times.
>
> The struct arch_mbm_state holds both the previous msr value, and a count
> of the number of chunks. These two fields need to be updated atomically.
> Similarly __rmid_read() must write to one MSR and read from another,
> this must be proteted from re-entrance.
proteted -> protected
>
> Read the prev_msr before accessing the hardware, and cmpxchg() the value
> back. If the value has changed, the whole thing is re-attempted. To protect
> the MSR, __rmid_read() will retry reads for QM_CTR if QM_EVTSEL has changed
> from the selected value.
The latter part of the sentence does not seem to match with what the
patch does.
>
> Signed-off-by: James Morse <james.morse@arm.com>
>
> ---
> Changes since v4:
> * Added retry loop in __rmid_read() to protect the CPU MSRs.
> ---
> arch/x86/kernel/cpu/resctrl/internal.h | 5 +--
> arch/x86/kernel/cpu/resctrl/monitor.c | 45 ++++++++++++++++++++------
> 2 files changed, 38 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
> index a32d307292a1..7012f42a82ee 100644
> --- a/arch/x86/kernel/cpu/resctrl/internal.h
> +++ b/arch/x86/kernel/cpu/resctrl/internal.h
> @@ -2,6 +2,7 @@
> #ifndef _ASM_X86_RESCTRL_INTERNAL_H
> #define _ASM_X86_RESCTRL_INTERNAL_H
>
> +#include <linux/atomic.h>
> #include <linux/resctrl.h>
> #include <linux/sched.h>
> #include <linux/kernfs.h>
> @@ -338,8 +339,8 @@ struct mbm_state {
> * find this struct.
> */
> struct arch_mbm_state {
> - u64 chunks;
> - u64 prev_msr;
> + atomic64_t chunks;
> + atomic64_t prev_msr;
> };
>
> /**
> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
> index f0670795b446..62350bbd23e0 100644
> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
> @@ -16,6 +16,7 @@
> */
>
> #include <linux/module.h>
> +#include <linux/percpu.h>
> #include <linux/sizes.h>
> #include <linux/slab.h>
>
> @@ -24,6 +25,9 @@
>
> #include "internal.h"
>
> +/* Sequence number for writes to IA32 QM_EVTSEL */
> +static DEFINE_PER_CPU(u64, qm_evtsel_seq);
> +
> struct rmid_entry {
> /*
> * Some architectures' resctrl_arch_rmid_read() needs the CLOSID value
> @@ -178,7 +182,7 @@ static inline struct rmid_entry *__rmid_entry(u32 idx)
>
> static int __rmid_read(u32 rmid, enum resctrl_event_id eventid, u64 *val)
> {
> - u64 msr_val;
> + u64 msr_val, seq;
>
> /*
> * As per the SDM, when IA32_QM_EVTSEL.EvtID (bits 7:0) is configured
> @@ -187,9 +191,16 @@ static int __rmid_read(u32 rmid, enum resctrl_event_id eventid, u64 *val)
> * IA32_QM_CTR.data (bits 61:0) reports the monitored data.
> * IA32_QM_CTR.Error (bit 63) and IA32_QM_CTR.Unavailable (bit 62)
> * are error bits.
> + * A per-cpu sequence counter is incremented each time QM_EVTSEL is
> + * written. This is used to detect if this function was interrupted by
> + * another call without re-reading the MSRs. Retry the MSR read when
> + * this happens as the QM_CTR value may belong to a different event.
> */
> - wrmsr(MSR_IA32_QM_EVTSEL, eventid, rmid);
> - rdmsrl(MSR_IA32_QM_CTR, msr_val);
> + do {
> + seq = this_cpu_inc_return(qm_evtsel_seq);
> + wrmsr(MSR_IA32_QM_EVTSEL, eventid, rmid);
> + rdmsrl(MSR_IA32_QM_CTR, msr_val);
> + } while (seq != this_cpu_read(qm_evtsel_seq));
>
> if (msr_val & RMID_VAL_ERROR)
> return -EIO;
> @@ -225,13 +236,15 @@ void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_domain *d,
> {
> struct rdt_hw_domain *hw_dom = resctrl_to_arch_dom(d);
> struct arch_mbm_state *am;
> + u64 msr_val;
>
> am = get_arch_mbm_state(hw_dom, rmid, eventid);
> if (am) {
> memset(am, 0, sizeof(*am));
>
> /* Record any initial, non-zero count value. */
> - __rmid_read(rmid, eventid, &am->prev_msr);
> + __rmid_read(rmid, eventid, &msr_val);
> + atomic64_set(&am->prev_msr, msr_val);
> }
> }
>
> @@ -266,23 +279,35 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
> {
> struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
> struct rdt_hw_domain *hw_dom = resctrl_to_arch_dom(d);
> + u64 start_msr_val, old_msr_val, msr_val, chunks;
> struct arch_mbm_state *am;
> - u64 msr_val, chunks;
> - int ret;
> + int ret = 0;
>
> if (!cpumask_test_cpu(smp_processor_id(), &d->cpu_mask))
> return -EINVAL;
>
> +interrupted:
> + am = get_arch_mbm_state(hw_dom, rmid, eventid);
> + if (am)
> + start_msr_val = atomic64_read(&am->prev_msr);
> +
> ret = __rmid_read(rmid, eventid, &msr_val);
> if (ret)
> return ret;
>
> am = get_arch_mbm_state(hw_dom, rmid, eventid);
> if (am) {
> - am->chunks += mbm_overflow_count(am->prev_msr, msr_val,
> - hw_res->mbm_width);
> - chunks = get_corrected_mbm_count(rmid, am->chunks);
> - am->prev_msr = msr_val;
> + old_msr_val = atomic64_cmpxchg(&am->prev_msr, start_msr_val,
> + msr_val);
> + if (old_msr_val != start_msr_val)
> + goto interrupted;
> +
hmmm ... what if interruption occurs here?
> + chunks = mbm_overflow_count(start_msr_val, msr_val,
> + hw_res->mbm_width);
> + atomic64_add(chunks, &am->chunks);
> +
> + chunks = get_corrected_mbm_count(rmid,
> + atomic64_read(&am->chunks));
> } else {
> chunks = msr_val;
> }
Reinette
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: [PATCH v5 12/24] x86/resctrl: Make resctrl_arch_rmid_read() retry when it is interrupted
2023-08-09 22:35 ` Reinette Chatre
@ 2023-08-24 16:55 ` James Morse
2023-08-24 23:01 ` Reinette Chatre
0 siblings, 1 reply; 77+ messages in thread
From: James Morse @ 2023-08-24 16:55 UTC (permalink / raw)
To: Reinette Chatre, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi Reinette,
On 09/08/2023 23:35, Reinette Chatre wrote:
> On 7/28/2023 9:42 AM, James Morse wrote:
>> resctrl_arch_rmid_read() could be called by resctrl in process context,
>> and then called by the PMU driver from irq context on the same CPU.
>
> The changelog is written as a bug report of current behavior.
> This does not seem to describe current but instead planned future behavior.
I pulled this patch from much later in the tree as it's about to be a problem in this
series. I haven't yet decided if it's an existing bug in resctrl....
... it doesn't look like this can affect the path through mon_event_read(), as
generic_exec_single() masks interrupts.
But an incoming IPI from mon_event_read can corrupt the values for the limbo worker, which
at the worst would result in early re-use. And the MBM overflow worker ... which would
corrupt the value seen by user-space.
free_rmid() is equally affected; the outcome for limbo is the same spurious delay or early
re-use.
I'll change the commit messages to describe that, and float this earlier in the series.
The backport will be a problem. This applies cleanly to v6.1.46, but for v5.15.127 there
are at least 13 dependencies ... it's probably not worth trying to fix as chances are
no-one is seeing this happen in reality.
>> This could cause struct arch_mbm_state's prev_msr value to go backwards,
>> leading to the chunks value being incremented multiple times.
>>
>> The struct arch_mbm_state holds both the previous msr value, and a count
>> of the number of chunks. These two fields need to be updated atomically.
>> Similarly __rmid_read() must write to one MSR and read from another,
>> this must be proteted from re-entrance.
>
> proteted -> protected
>
>>
>> Read the prev_msr before accessing the hardware, and cmpxchg() the value
>> back. If the value has changed, the whole thing is re-attempted. To protect
>> the MSR, __rmid_read() will retry reads for QM_CTR if QM_EVTSEL has changed
>> from the selected value.
>
> The latter part of the sentence does not seem to match with what the
> patch does.
>> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
>> index f0670795b446..62350bbd23e0 100644
>> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
>> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
>> @@ -266,23 +279,35 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
>> {
>> struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
>> struct rdt_hw_domain *hw_dom = resctrl_to_arch_dom(d);
>> + u64 start_msr_val, old_msr_val, msr_val, chunks;
>> struct arch_mbm_state *am;
>> - u64 msr_val, chunks;
>> - int ret;
>> + int ret = 0;
>>
>> if (!cpumask_test_cpu(smp_processor_id(), &d->cpu_mask))
>> return -EINVAL;
>>
>> +interrupted:
>> + am = get_arch_mbm_state(hw_dom, rmid, eventid);
>> + if (am)
>> + start_msr_val = atomic64_read(&am->prev_msr);
>> +
>> ret = __rmid_read(rmid, eventid, &msr_val);
>> if (ret)
>> return ret;
>>
>> am = get_arch_mbm_state(hw_dom, rmid, eventid);
>> if (am) {
>> - am->chunks += mbm_overflow_count(am->prev_msr, msr_val,
>> - hw_res->mbm_width);
>> - chunks = get_corrected_mbm_count(rmid, am->chunks);
>> - am->prev_msr = msr_val;
>> + old_msr_val = atomic64_cmpxchg(&am->prev_msr, start_msr_val,
>> + msr_val);
>> + if (old_msr_val != start_msr_val)
>> + goto interrupted;
>> +
> hmmm ... what if interruption occurs here?
This is after the MSR write/read, so this function can't get a torn value from the
hardware (e.g. reads the wrong RMID). The operations on struct arch_mbm_state are atomic,
so are still safe if the function becomes re-entrant.
If the re-entrant call accessed the same RMID and the same counter, its atomic64_add()
would be based on the prev_msr value this call read - because the above cmpxchg succeeded.
(put another way:)
The interrupting call returns a lower value, consistent with the first call not having
finished yet. The interrupted call returns the correct value, which is larger than it
read, because it completed after the interrupting call.
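A worked example with made-up numbers, assuming both calls target the same
RMID and event, prev_msr starts at 100, chunks starts at 0, and the hardware
count moves 100 -> 110 -> 115:

        Call A (process context)          Call B (irq, after A's cmpxchg)
        start_msr_val = 100
        __rmid_read() -> 110
        cmpxchg(prev_msr, 100, 110) ok
                                          start_msr_val = 110 (sees A's update)
                                          __rmid_read() -> 115
                                          cmpxchg(prev_msr, 110, 115) ok
                                          atomic64_add(5, &chunks)
                                          returns 5
        atomic64_add(10, &chunks)
        returns 15

Nothing is double counted: chunks finishes at 15, matching the counter's
move from 100 to 115.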
>> + chunks = mbm_overflow_count(start_msr_val, msr_val,
>> + hw_res->mbm_width);
>> + atomic64_add(chunks, &am->chunks);
>> +
>> + chunks = get_corrected_mbm_count(rmid,
>> + atomic64_read(&am->chunks));
>> } else {
>> chunks = msr_val;
>> }
Thanks,
James
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: [PATCH v5 12/24] x86/resctrl: Make resctrl_arch_rmid_read() retry when it is interrupted
2023-08-24 16:55 ` James Morse
@ 2023-08-24 23:01 ` Reinette Chatre
2023-09-08 15:58 ` James Morse
0 siblings, 1 reply; 77+ messages in thread
From: Reinette Chatre @ 2023-08-24 23:01 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi James,
On 8/24/2023 9:55 AM, James Morse wrote:
> Hi Reinette,
>
> On 09/08/2023 23:35, Reinette Chatre wrote:
>> On 7/28/2023 9:42 AM, James Morse wrote:
>>> resctrl_arch_rmid_read() could be called by resctrl in process context,
>>> and then called by the PMU driver from irq context on the same CPU.
>>
>> The changelog is written as a bug report of current behavior.
>> This does not seem to describe current but instead planned future behavior.
>
> I pulled this patch from much later in the tree as it's about to be a problem in this
> series. I haven't yet decided if it's an existing bug in resctrl....
>
> ... it doesn't look like this can affect the path through mon_event_read(), as
> generic_exec_single() masks interrupts.
> But an incoming IPI from mon_event_read can corrupt the values for the limbo worker, which
> at the worst would result in early re-use. And the MBM overflow worker ... which would
> corrupt the value seen by user-space.
> free_rmid() is equally affected; the outcome for limbo is the same spurious delay or early
> re-use.
Apologies but these races are not obvious to me. Let me take the first, where the
race could be between mon_event_read() and the limbo worker. From what I can tell
mon_event_read() can be called from user space when creating a new monitoring
group or when viewing data associated with a monitoring group. In both cases
rdtgroup_mutex is held from the time user space triggers the request until
all IPIs are completed. Compare that with the limbo worker, cqm_handle_limbo(),
that also obtains rdtgroup_mutex before it attempts to do its work.
Considering this example I am not able to see how an incoming IPI from
mon_event_read() can interfere with the limbo worker since both
holding rdtgroup_mutex prevents them from running concurrently.
Similarly, the MBM overflow worker takes rdtgroup_mutex, and free_rmid()
is run with rdtgroup_mutex held.
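Condensed, the mutual exclusion I am describing (using only locking that
already exists in the code) is:

        /* mon_event_read() path */        /* cqm_handle_limbo() */
        mutex_lock(&rdtgroup_mutex);       mutex_lock(&rdtgroup_mutex);
        <IPI'd counter read>               <counter read>
        mutex_unlock(&rdtgroup_mutex);     mutex_unlock(&rdtgroup_mutex);

so the two counter reads can never run concurrently, let alone interleave
on the same CPU's MSRs.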
> I'll change the commit messages to describe that, and float this earlier in the series.
> The backport will be a problem. This applies cleanly to v6.1.46, but for v5.15.127 there
> are at least 13 dependencies ... it's probably not worth trying to fix as chances are
> no-one is seeing this happen in reality.
>
>
>>> This could cause struct arch_mbm_state's prev_msr value to go backwards,
>>> leading to the chunks value being incremented multiple times.
>>>
>>> The struct arch_mbm_state holds both the previous msr value, and a count
>>> of the number of chunks. These two fields need to be updated atomically.
>>> Similarly __rmid_read() must write to one MSR and read from another,
>>> this must be proteted from re-entrance.
>>
>> proteted -> protected
>>
>>>
>>> Read the prev_msr before accessing the hardware, and cmpxchg() the value
>>> back. If the value has changed, the whole thing is re-attempted. To protect
>>> the MSR, __rmid_read() will retry reads for QM_CTR if QM_EVTSEL has changed
>>> from the selected value.
>>
>> The latter part of the sentence does not seem to match with what the
>> patch does.
>
>
>>> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
>>> index f0670795b446..62350bbd23e0 100644
>>> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
>>> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
>>> @@ -266,23 +279,35 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
>>> {
>>> struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
>>> struct rdt_hw_domain *hw_dom = resctrl_to_arch_dom(d);
>>> + u64 start_msr_val, old_msr_val, msr_val, chunks;
>>> struct arch_mbm_state *am;
>>> - u64 msr_val, chunks;
>>> - int ret;
>>> + int ret = 0;
>>>
>>> if (!cpumask_test_cpu(smp_processor_id(), &d->cpu_mask))
>>> return -EINVAL;
>>>
>>> +interrupted:
>>> + am = get_arch_mbm_state(hw_dom, rmid, eventid);
>>> + if (am)
>>> + start_msr_val = atomic64_read(&am->prev_msr);
>>> +
>>> ret = __rmid_read(rmid, eventid, &msr_val);
>>> if (ret)
>>> return ret;
>>>
>>> am = get_arch_mbm_state(hw_dom, rmid, eventid);
>>> if (am) {
>>> - am->chunks += mbm_overflow_count(am->prev_msr, msr_val,
>>> - hw_res->mbm_width);
>>> - chunks = get_corrected_mbm_count(rmid, am->chunks);
>>> - am->prev_msr = msr_val;
>>> + old_msr_val = atomic64_cmpxchg(&am->prev_msr, start_msr_val,
>>> + msr_val);
>>> + if (old_msr_val != start_msr_val)
>>> + goto interrupted;
>>> +
>
>> hmmm ... what if interruption occurs here?
>
> This is after the MSR write/read, so this function can't get a torn value from the
> hardware (e.g. reads the wrong RMID). The operations on struct arch_mbm_state are atomic,
> so are still safe if the function becomes re-entrant.
>
> If the re-entrant call accessed the same RMID and the same counter, its atomic64_add()
> would be based on the prev_msr value this call read - because the above cmpxchg succeeded.
>
> (put another way:)
> The interrupting call returns a lower value, consistent with the first call not having
> finished yet. The interrupted call returns the correct value, which is larger than it
> read, because it completed after the interrupting call.
>
I see, thank you. If this does end up being needed for a future
concurrency issue, could you please add a comment describing
this behavior where a later call can return a lower value and
why that is ok? It looks to me, as accomplished with the use of
atomic64_add(), as though this scenario would
end with the correct arch_mbm_state even though the members
are not updated atomically as a unit.
>
>>> + chunks = mbm_overflow_count(start_msr_val, msr_val,
>>> + hw_res->mbm_width);
>>> + atomic64_add(chunks, &am->chunks);
>>> +
>>> + chunks = get_corrected_mbm_count(rmid,
>>> + atomic64_read(&am->chunks));
>>> } else {
>>> chunks = msr_val;
>>> }
>
>
Reinette
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: [PATCH v5 12/24] x86/resctrl: Make resctrl_arch_rmid_read() retry when it is interrupted
2023-08-24 23:01 ` Reinette Chatre
@ 2023-09-08 15:58 ` James Morse
2023-09-08 20:15 ` Reinette Chatre
0 siblings, 1 reply; 77+ messages in thread
From: James Morse @ 2023-09-08 15:58 UTC (permalink / raw)
To: Reinette Chatre, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi Reinette,
On 8/25/23 00:01, Reinette Chatre wrote:
> On 8/24/2023 9:55 AM, James Morse wrote:
>> On 09/08/2023 23:35, Reinette Chatre wrote:
>>> On 7/28/2023 9:42 AM, James Morse wrote:
>>>> resctrl_arch_rmid_read() could be called by resctrl in process context,
>>>> and then called by the PMU driver from irq context on the same CPU.
>>>
>>> The changelog is written as a bug report of current behavior.
>>> This does not seem to describe current but instead planned future behavior.
>>
>> I pulled this patch from much later in the tree as it's about to be a problem in this
>> series. I haven't yet decided if it's an existing bug in resctrl....
>>
>> ... it doesn't look like this can affect the path through mon_event_read(), as
>> generic_exec_single() masks interrupts.
>> But an incoming IPI from mon_event_read can corrupt the values for the limbo worker, which
>> at the worst would result in early re-use. And the MBM overflow worker ... which would
>> corrupt the value seen by user-space.
>> free_rmid() is equally affected; the outcome for limbo is the same spurious delay or early
>> re-use.
>
> Apologies but these races are not obvious to me. Let me take the first, where the
> race could be between mon_event_read() and the limbo worker. From what I can tell
> mon_event_read() can be called from user space when creating a new monitoring
> group or when viewing data associated with a monitoring group. In both cases
> rdtgroup_mutex is held from the time user space triggers the request until
> all IPIs are completed. Compare that with the limbo worker, cqm_handle_limbo(),
> that also obtains rdtgroup_mutex before it attempts to do its work.
> Considering this example I am not able to see how an incoming IPI from
> mon_event_read() can interfere with the limbo worker since both
> holding rdtgroup_mutex prevents them from running concurrently.
>
> Similarly, the MBM overflow worker takes rdtgroup_mutex, and free_rmid()
> is run with rdtgroup_mutex held.
Yes, sorry - I'd forgotten about that! I'll need to dig into this again.
Part of the problem is I'm looking at this from a different angle - something I haven't described properly: the resctrl_arch_ calls shouldn't depend on a lock that is private to resctrl.
This allows for multiple callers (e.g. resctrl_pmu that I haven't posted yet), and allows MPAM's
overflow interrupt to eventually be something resctrl could support.
It also allows the resctrl_arch_ calls to have lockdep asserts for their dependencies.
Yes, the resctrl_mutex is what prevents this from being a problem today.
(I haven't yet looked at how Peter's series solves the same problem.)
... it may be possible to move this patch back of the 'fold' to live with the PMU code ...
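As a sketch of what I mean (not code from this series; arch_mon_lock is a
stand-in for whatever dependency the architecture actually has):

        int resctrl_arch_rmid_read(...)
        {
                /*
                 * Assert an arch-visible dependency, rather than relying
                 * on resctrl's private rdtgroup_mutex being held.
                 */
                lockdep_assert_held(&arch_mon_lock);
                ...
        }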
>> I'll change the commit messages to describe that, and float this earlier in the series.
>> The backport will be a problem. This applies cleanly to v6.1.46, but for v5.15.127 there
>>> are at least 13 dependencies ... it's probably not worth trying to fix as chances are
>> no-one is seeing this happen in reality.
I'll remove that wording around this and mention the mutex.
[...]
>>>> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
>>>> index f0670795b446..62350bbd23e0 100644
>>>> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
>>>> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
>>>> @@ -266,23 +279,35 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
>>>> {
>>>> struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
>>>> struct rdt_hw_domain *hw_dom = resctrl_to_arch_dom(d);
>>>> + u64 start_msr_val, old_msr_val, msr_val, chunks;
>>>> struct arch_mbm_state *am;
>>>> - u64 msr_val, chunks;
>>>> - int ret;
>>>> + int ret = 0;
>>>>
>>>> if (!cpumask_test_cpu(smp_processor_id(), &d->cpu_mask))
>>>> return -EINVAL;
>>>>
>>>> +interrupted:
>>>> + am = get_arch_mbm_state(hw_dom, rmid, eventid);
>>>> + if (am)
>>>> + start_msr_val = atomic64_read(&am->prev_msr);
>>>> +
>>>> ret = __rmid_read(rmid, eventid, &msr_val);
>>>> if (ret)
>>>> return ret;
>>>>
>>>> am = get_arch_mbm_state(hw_dom, rmid, eventid);
>>>> if (am) {
>>>> - am->chunks += mbm_overflow_count(am->prev_msr, msr_val,
>>>> - hw_res->mbm_width);
>>>> - chunks = get_corrected_mbm_count(rmid, am->chunks);
>>>> - am->prev_msr = msr_val;
>>>> + old_msr_val = atomic64_cmpxchg(&am->prev_msr, start_msr_val,
>>>> + msr_val);
>>>> + if (old_msr_val != start_msr_val)
>>>> + goto interrupted;
>>>> +
>>
>>> hmmm ... what if interruption occurs here?
>>
>> This is after the MSR write/read, so this function can't get a torn value from the
>> hardware (e.g. reads the wrong RMID). The operations on struct arch_mbm_state are atomic,
>> so are still safe if the function becomes re-entrant.
>>
>> If the re-entrant call accessed the same RMID and the same counter, its atomic64_add()
>> would be based on the prev_msr value this call read - because the above cmpxchg succeeded.
>>
>> (put another way:)
>> The interrupting call returns a lower value, consistent with the first call not having
>> finished yet. The interrupted call returns the correct value, which is larger than it
>> read, because it completed after the interrupting call.
> I see, thank you. If this does end up being needed for a future
> concurrency issue, could you please add a comment describing
> this behavior where a later call can return a lower value and
> why that is ok? It looks to me, as accomplished with the use of
> atomic64_add(), as though this scenario would
> end with the correct arch_mbm_state even though the members
> are not updated atomically as a unit.
Sure my stab at that is:
/*
* At this point the hardware values have been read without
* being interrupted. Interrupts that occur later will read
* the updated am->prev_msr and safely increment am->chunks
* with the new data using atomic64_add().
*/
Thanks,
James
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: [PATCH v5 12/24] x86/resctrl: Make resctrl_arch_rmid_read() retry when it is interrupted
2023-09-08 15:58 ` James Morse
@ 2023-09-08 20:15 ` Reinette Chatre
0 siblings, 0 replies; 77+ messages in thread
From: Reinette Chatre @ 2023-09-08 20:15 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi James,
On 9/8/2023 8:58 AM, James Morse wrote:
> On 8/25/23 00:01, Reinette Chatre wrote:
>> On 8/24/2023 9:55 AM, James Morse wrote:
>>> On 09/08/2023 23:35, Reinette Chatre wrote:
>>>> On 7/28/2023 9:42 AM, James Morse wrote:
>>>>> resctrl_arch_rmid_read() could be called by resctrl in process context,
>>>>> and then called by the PMU driver from irq context on the same CPU.
>>>>
>>>> The changelog is written as a bug report of current behavior.
>>>> This does not seem to describe current but instead planned future behavior.
>>>
>>> I pulled this patch from much later in the tree as it's about to be a problem in this
>>> series. I haven't yet decided if it's an existing bug in resctrl....
>>>
>>> ... it doesn't look like this can affect the path through mon_event_read(), as
>>> generic_exec_single() masks interrupts.
>>> But an incoming IPI from mon_event_read can corrupt the values for the limbo worker, which
>>> at the worst would result in early re-use. And the MBM overflow worker ... which would
>>> corrupt the value seen by user-space.
>>> free_rmid() is equally affected; the outcome for limbo is the same spurious delay or early
>>> re-use.
>>
>> Apologies but these races are not obvious to me. Let me take the first, where the
>> race could be between mon_event_read() and the limbo worker. From what I can tell
>> mon_event_read() can be called from user space when creating a new monitoring
>> group or when viewing data associated with a monitoring group. In both cases
>> rdtgroup_mutex is held from the time user space triggers the request until
>> all IPIs are completed. Compare that with the limbo worker, cqm_handle_limbo(),
>> that also obtains rdtgroup_mutex before it attempts to do its work.
>> Considering this example I am not able to see how an incoming IPI from
>> mon_event_read() can interfere with the limbo worker since both
>> holding rdtgroup_mutex prevents them from running concurrently.
>>
>> Similarly, the MBM overflow worker takes rdtgroup_mutex, and free_rmid()
>> is run with rdtgroup_mutex held.
>
> Yes, sorry - I'd forgotten about that! I'll need to dig into this again.
>
> Part of the problem is I'm looking at this from a different angle - something I haven't described properly: the resctrl_arch_ calls shouldn't depend on a lock that is private to resctrl.
>
> This allows for multiple callers (e.g. resctrl_pmu that I haven't posted yet), and allows MPAM's
> overflow interrupt to eventually be something resctrl could support.
> It also allows the resctrl_arch_ calls to have lockdep asserts for their dependencies.
>
> Yes, the resctrl_mutex is what prevents this from being a problem today.
> (I haven't yet looked at how Peter's series solves the same problem.)
>
> ... it may be possible to move this patch back of the 'fold' to live with the PMU code ...
In its current form this patch does appear to be out of place in
this series.
>>> I'll change the commit messages to describe that, and float this earlier in the series.
>>> The backport will be a problem. This applies cleanly to v6.1.46, but for v5.15.127 there
>>> are at least 13 dependencies ... it's probably not worth trying to fix as chances are
>>> no-one is seeing this happen in reality.
>
> I'll remove that wording around this and mention the mutex.
>
> [...]
>
>>>>> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
>>>>> index f0670795b446..62350bbd23e0 100644
>>>>> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
>>>>> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
>>>>> @@ -266,23 +279,35 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
>>>>> {
>>>>> struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
>>>>> struct rdt_hw_domain *hw_dom = resctrl_to_arch_dom(d);
>>>>> + u64 start_msr_val, old_msr_val, msr_val, chunks;
>>>>> struct arch_mbm_state *am;
>>>>> - u64 msr_val, chunks;
>>>>> - int ret;
>>>>> + int ret = 0;
>>>>> if (!cpumask_test_cpu(smp_processor_id(), &d->cpu_mask))
>>>>> return -EINVAL;
>>>>> +interrupted:
>>>>> + am = get_arch_mbm_state(hw_dom, rmid, eventid);
>>>>> + if (am)
>>>>> + start_msr_val = atomic64_read(&am->prev_msr);
>>>>> +
>>>>> ret = __rmid_read(rmid, eventid, &msr_val);
>>>>> if (ret)
>>>>> return ret;
>>>>> am = get_arch_mbm_state(hw_dom, rmid, eventid);
>>>>> if (am) {
>>>>> - am->chunks += mbm_overflow_count(am->prev_msr, msr_val,
>>>>> - hw_res->mbm_width);
>>>>> - chunks = get_corrected_mbm_count(rmid, am->chunks);
>>>>> - am->prev_msr = msr_val;
>>>>> + old_msr_val = atomic64_cmpxchg(&am->prev_msr, start_msr_val,
>>>>> + msr_val);
>>>>> + if (old_msr_val != start_msr_val)
>>>>> + goto interrupted;
>>>>> +
>>>
>>>> hmmm ... what if interruption occurs here?
>>>
>>> This is after the MSR write/read, so this function can't get a torn value from the
>>> hardware (e.g. reads the wrong RMID). The operations on struct arch_mbm_state are atomic,
>>> so are still safe if the function becomes re-entrant.
>>>
>>> If the re-entrant call accessed the same RMID and the same counter, its atomic64_add()
>>> would be based on the prev_msr value this call read - because the above cmpxchg succeeded.
>>>
>>> (put another way:)
>>> The interrupting call returns a lower value, consistent with the first call not having
>>> finished yet. The interrupted call returns the correct value, which is larger than it
>>> read, because it completed after the interrupting call.
>
>> I see, thank you. If this does end up being needed for a future
>> concurrency issue, could you please add a comment describing
>> this behavior where a later call can return a lower value and
>> why that is ok? It looks to me, as accomplished with the use of
>> atomic64_add(), as though this scenario would
>> end with the correct arch_mbm_state even though the members
>> are not updated atomically as a unit.
>
> Sure my stab at that is:
> /*
> * At this point the hardware values have been read without
> * being interrupted. Interrupts that occur later will read
> * the updated am->prev_msr and safely increment am->chunks
> * with the new data using atomic64_add().
> */
The comment is useful and appears to address that accurate
arch_mbm_state is maintained. My question was related to the
higher-level behavior encountered by the callers. Repeating my
question: "could you please add a comment describing this behavior where
a later call can return a lower value and why that is ok?"
Reinette
^ permalink raw reply [flat|nested] 77+ messages in thread
* [PATCH v5 13/24] x86/resctrl: Queue mon_event_read() instead of sending an IPI
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
` (11 preceding siblings ...)
2023-07-28 16:42 ` [PATCH v5 12/24] x86/resctrl: Make resctrl_arch_rmid_read() retry when it is interrupted James Morse
@ 2023-07-28 16:42 ` James Morse
2023-07-28 16:42 ` [PATCH v5 14/24] x86/resctrl: Allow resctrl_arch_rmid_read() to sleep James Morse
` (12 subsequent siblings)
25 siblings, 0 replies; 77+ messages in thread
From: James Morse @ 2023-07-28 16:42 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
Intel is blessed with an abundance of monitors, one per RMID, that can be
read from any CPU in the domain. MPAM's monitors reside in the MMIO MSC;
the number implemented is up to the manufacturer. This means that when there are
fewer monitors than needed, they need to be allocated and freed.
MPAM's CSU monitors are used to back the 'llc_occupancy' monitor file. The
CSU counter is allowed to return 'not ready' for a small number of
microseconds after programming. To allow one CSU hardware monitor to be
used for multiple control or monitor groups, the CPU accessing the
monitor needs to be able to block when configuring and reading the
counter.
Worse, the domain may be broken up into slices, and the MMIO accesses
for each slice may need to be performed from different CPUs.
These two details mean MPAM's monitor code needs to be able to sleep, and
to IPI another CPU in the domain to read from a resource that has been sliced.
mon_event_read() already invokes mon_event_count() via IPI, which means
this isn't possible. On systems using nohz-full, some CPUs need to be
interrupted to run kernel work as they otherwise stay in user-space
running realtime workloads. Interrupting these CPUs should be avoided,
and scheduling work on them may never complete.
Change mon_event_read() to pick a housekeeping CPU (one that is not using
nohz_full), schedule mon_event_count() there, and wait. If all the CPUs
in a domain are using nohz_full, then an IPI is used as the fallback.
This function is only used in response to a user-space filesystem request
(not the timing-sensitive overflow code).
This allows MPAM to hide the slice behaviour from resctrl, and to keep
the monitor allocation in monitor.c. When the IPI fallback is used on
machines where MPAM needs to make an access on multiple CPUs, the counter
read will always fail.
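For reference, the resulting dispatch, condensed from the hunk below:

        cpu = cpumask_any_housekeeping(&d->cpu_mask);
        if (tick_nohz_full_cpu(cpu))
                /* the whole domain is nohz_full: fall back to an IPI */
                smp_call_function_any(&d->cpu_mask, mon_event_count, rr, 1);
        else
                /* kworker process context on @cpu: waits, and may sleep */
                smp_call_on_cpu(cpu, smp_mon_event_count, rr, false);

The smp_mon_event_count() wrapper exists because smp_call_on_cpu() expects
an int (*)(void *) callback, while mon_event_count() returns void.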
Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
Reviewed-by: Peter Newman <peternewman@google.com>
Tested-by: Peter Newman <peternewman@google.com>
Signed-off-by: James Morse <james.morse@arm.com>
---
Changes since v2:
* Use cpumask_any_housekeeping() and fallback to an IPI if needed.
Changes since v3:
* Actually include the IPI fallback code.
Changes since v4:
* Tinkered with existing capitalisation.
---
arch/x86/kernel/cpu/resctrl/ctrlmondata.c | 28 +++++++++++++++++++++--
arch/x86/kernel/cpu/resctrl/monitor.c | 2 +-
2 files changed, 27 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
index b44c487727d4..bd263b9a0abd 100644
--- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
+++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
@@ -19,6 +19,7 @@
#include <linux/kernfs.h>
#include <linux/seq_file.h>
#include <linux/slab.h>
+#include <linux/tick.h>
#include "internal.h"
/*
@@ -520,12 +521,24 @@ int rdtgroup_schemata_show(struct kernfs_open_file *of,
return ret;
}
+static int smp_mon_event_count(void *arg)
+{
+ mon_event_count(arg);
+
+ return 0;
+}
+
void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
struct rdt_domain *d, struct rdtgroup *rdtgrp,
int evtid, int first)
{
+ int cpu;
+
+ /* When picking a CPU from cpu_mask, ensure it can't race with cpuhp */
+ lockdep_assert_held(&rdtgroup_mutex);
+
/*
- * setup the parameters to send to the IPI to read the data.
+ * Setup the parameters to pass to mon_event_count() to read the data.
*/
rr->rgrp = rdtgrp;
rr->evtid = evtid;
@@ -534,7 +547,18 @@ void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
rr->val = 0;
rr->first = first;
- smp_call_function_any(&d->cpu_mask, mon_event_count, rr, 1);
+ cpu = cpumask_any_housekeeping(&d->cpu_mask);
+
+ /*
+ * cpumask_any_housekeeping() prefers housekeeping CPUs, but
+ * are all the CPUs nohz_full? If yes, pick a CPU to IPI.
+ * MPAM's resctrl_arch_rmid_read() is unable to read the
+ * counters on some platforms if its called in irq context.
+ */
+ if (tick_nohz_full_cpu(cpu))
+ smp_call_function_any(&d->cpu_mask, mon_event_count, rr, 1);
+ else
+ smp_call_on_cpu(cpu, smp_mon_event_count, rr, false);
}
int rdtgroup_mondata_show(struct seq_file *m, void *arg)
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index 62350bbd23e0..32569354c4f1 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -597,7 +597,7 @@ static void mbm_bw_count(u32 closid, u32 rmid, struct rmid_read *rr)
}
/*
- * This is called via IPI to read the CQM/MBM counters
+ * This is scheduled by mon_event_read() to read the CQM/MBM counters
* on a domain.
*/
void mon_event_count(void *info)
--
2.39.2
^ permalink raw reply related [flat|nested] 77+ messages in thread
* [PATCH v5 14/24] x86/resctrl: Allow resctrl_arch_rmid_read() to sleep
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
` (12 preceding siblings ...)
2023-07-28 16:42 ` [PATCH v5 13/24] x86/resctrl: Queue mon_event_read() instead of sending an IPI James Morse
@ 2023-07-28 16:42 ` James Morse
2023-08-09 22:36 ` Reinette Chatre
2023-07-28 16:42 ` [PATCH v5 15/24] x86/resctrl: Allow arch to allocate memory needed in resctrl_arch_rmid_read() James Morse
` (11 subsequent siblings)
25 siblings, 1 reply; 77+ messages in thread
From: James Morse @ 2023-07-28 16:42 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
MPAM's cache occupancy counters can take a little while to settle once
the monitor has been configured. The maximum settling time is described
to the driver via a firmware table. The value could be large enough
that it makes sense to sleep. To avoid exposing this to resctrl, it
should be hidden behind MPAM's resctrl_arch_rmid_read().
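To illustrate, 'hidden behind resctrl_arch_rmid_read()' means something
like the sketch below (csu_read(), CSU_NOT_READY, mon, delay_us and timeout
are all illustrative names, not from the MPAM driver):

        do {
                val = csu_read(mon);
                if (!(val & CSU_NOT_READY))
                        break;
                /* may sleep: only legal if we weren't called via IPI */
                usleep_range(delay_us, 2 * delay_us);
        } while (time_before(jiffies, timeout));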
resctrl_arch_rmid_read() may be called via IPI meaning it is unable
to sleep. In this case resctrl_arch_rmid_read() should return an error
if it needs to sleep. This will only affect MPAM platforms where
the cache occupancy counter isn't available immediately, nohz_full is
in use, and there are no housekeeping CPUs in the necessary
domain.
There are three callers of resctrl_arch_rmid_read():
__mon_event_count() and __check_limbo() are both called from a
non-migrateable context. mon_event_read() invokes __mon_event_count()
using smp_call_on_cpu(), which adds work to the target CPU's workqueue.
rdtgroup_mutex is held, meaning this cannot race with the resctrl
cpuhp callback. __check_limbo() is invoked via schedule_delayed_work_on(),
which also adds work to a per-cpu workqueue.
The remaining call is add_rmid_to_limbo() which is called in response
to a user-space syscall that frees an RMID. This opportunistically
reads the LLC occupancy counter on the current domain to see if the
RMID is over the dirty threshold. This has to disable preemption to
avoid reading the wrong domain's value. Disabling preemption here
prevents resctrl_arch_rmid_read() from sleeping.
add_rmid_to_limbo() walks each domain, but only reads the counter
on one domain. If the system has more than one domain, the RMID will
always be added to the limbo list. If the RMID's usage was not over the
threshold, it will be removed from the list when __check_limbo() runs.
Make this the default behaviour. Free RMIDs are always added to the
limbo list for each domain.
The user-visible effect of this is that a clean RMID is not available
for re-allocation immediately after 'rmdir()' completes; this behaviour
was never portable as it never happened on a machine with multiple
domains.
Removing this path allows resctrl_arch_rmid_read() to sleep if it's called
with interrupts unmasked. Document this is the expected behaviour, and
add a might_sleep() annotation to catch changes that won't work on arm64.
Signed-off-by: James Morse <james.morse@arm.com>
---
The previous version allowed resctrl_arch_rmid_read() to be called on the
wrong CPUs, but now that this needs to take nohz_full and housekeeping into
account, it's too complex.
Changes since v3:
* Removed error handling for smp_call_function_any(); this can't race
with the cpuhp callbacks as both hold rdtgroup_mutex.
* Switched to the alternative of removing the counter read; this simplifies
things dramatically.
Changes since v4:
* Messed with capitalisation.
* Removed some dead code now that entry->busy will never be zero in
add_rmid_to_limbo().
* Rephrased the comment above resctrl_arch_rmid_read_context_check().
---
arch/x86/kernel/cpu/resctrl/monitor.c | 24 +++++-------------------
include/linux/resctrl.h | 18 +++++++++++++++++-
2 files changed, 22 insertions(+), 20 deletions(-)
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index 32569354c4f1..08e3307863c3 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -283,6 +283,8 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
struct arch_mbm_state *am;
int ret = 0;
+ resctrl_arch_rmid_read_context_check();
+
if (!cpumask_test_cpu(smp_processor_id(), &d->cpu_mask))
return -EINVAL;
@@ -470,8 +472,6 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
{
struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
struct rdt_domain *d;
- int cpu, err;
- u64 val = 0;
u32 idx;
lockdep_assert_held(&rdtgroup_mutex);
@@ -479,17 +479,7 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
idx = resctrl_arch_rmid_idx_encode(entry->closid, entry->rmid);
entry->busy = 0;
- cpu = get_cpu();
list_for_each_entry(d, &r->domains, list) {
- if (cpumask_test_cpu(cpu, &d->cpu_mask)) {
- err = resctrl_arch_rmid_read(r, d, entry->closid,
- entry->rmid,
- QOS_L3_OCCUP_EVENT_ID,
- &val);
- if (err || val <= resctrl_rmid_realloc_threshold)
- continue;
- }
-
/*
* For the first limbo RMID in the domain,
* setup up the limbo worker.
@@ -499,14 +489,10 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
set_bit(idx, d->rmid_busy_llc);
entry->busy++;
}
- put_cpu();
- if (entry->busy) {
- rmid_limbo_count++;
- if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
- closid_num_dirty_rmid[entry->closid]++;
- } else
- list_add_tail(&entry->list, &rmid_free_lru);
+ rmid_limbo_count++;
+ if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
+ closid_num_dirty_rmid[entry->closid]++;
}
void free_rmid(u32 closid, u32 rmid)
diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
index 660752406174..f7311102e94c 100644
--- a/include/linux/resctrl.h
+++ b/include/linux/resctrl.h
@@ -236,7 +236,12 @@ void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d);
* @eventid: eventid to read, e.g. L3 occupancy.
* @val: result of the counter read in bytes.
*
- * Call from process context on a CPU that belongs to domain @d.
+ * Some architectures need to sleep when first programming some of the counters.
+ * (specifically: arm64's MPAM cache occupancy counters can return 'not ready'
+ * for a short period of time). Call from a non-migrateable process context on
+ * a CPU that belongs to domain @d. e.g. use smp_call_on_cpu() or
+ * schedule_work_on(). This function can be called with interrupts masked,
+ * e.g. using smp_call_function_any(), but may consistently return an error.
*
* Return:
* 0 on success, or -EIO, -EINVAL etc on error.
@@ -245,6 +250,17 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
u32 closid, u32 rmid, enum resctrl_event_id eventid,
u64 *val);
+/**
+ * resctrl_arch_rmid_read_context_check() - warn about invalid contexts
+ *
+ * When built with CONFIG_DEBUG_ATOMIC_SLEEP generate a warning when
+ * resctrl_arch_rmid_read() is called with preemption disabled.
+ */
+static inline void resctrl_arch_rmid_read_context_check(void)
+{
+ if (!irqs_disabled())
+ might_sleep();
+}
/**
* resctrl_arch_reset_rmid() - Reset any private state associated with rmid
--
2.39.2
^ permalink raw reply related [flat|nested] 77+ messages in thread
* Re: [PATCH v5 14/24] x86/resctrl: Allow resctrl_arch_rmid_read() to sleep
2023-07-28 16:42 ` [PATCH v5 14/24] x86/resctrl: Allow resctrl_arch_rmid_read() to sleep James Morse
@ 2023-08-09 22:36 ` Reinette Chatre
2023-08-24 16:56 ` James Morse
0 siblings, 1 reply; 77+ messages in thread
From: Reinette Chatre @ 2023-08-09 22:36 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi James,
On 7/28/2023 9:42 AM, James Morse wrote:
> MPAM's cache occupancy counters can take a little while to settle once
> the monitor has been configured. The maximum settling time is described
> to the driver via a firmware table. The value could be large enough
> that it makes sense to sleep. To avoid exposing this to resctrl, it
> should be hidden behind MPAM's resctrl_arch_rmid_read().
>
> resctrl_arch_rmid_read() may be called via IPI meaning it is unable
> to sleep. In this case resctrl_arch_rmid_read() should return an error
> if it needs to sleep. This will only affect MPAM platforms where
> the cache occupancy counter isn't available immediately, nohz_full is
> in use, and there are no housekeeping CPUs in the necessary
> domain.
>
> There are three callers of resctrl_arch_rmid_read():
> __mon_event_count() and __check_limbo() are both called from a
> non-migrateable context. mon_event_read() invokes __mon_event_count()
> using smp_call_on_cpu(), which adds work to the target CPU's workqueue.
> rdtgroup_mutex is held, meaning this cannot race with the resctrl
> cpuhp callback. __check_limbo() is invoked via schedule_delayed_work_on(),
> which also adds work to a per-cpu workqueue.
>
> The remaining call is add_rmid_to_limbo() which is called in response
> to a user-space syscall that frees an RMID. This opportunistically
> reads the LLC occupancy counter on the current domain to see if the
> RMID is over the dirty threshold. This has to disable preemption to
> avoid reading the wrong domain's value. Disabling preemption here
> prevents resctrl_arch_rmid_read() from sleeping.
>
> add_rmid_to_limbo() walks each domain, but only reads the counter
> on one domain. If the system has more than one domain, the RMID will
> always be added to the limbo list. If the RMID's usage was not over the
> threshold, it will be removed from the list when __check_limbo() runs.
> Make this the default behaviour. Free RMIDs are always added to the
> limbo list for each domain.
>
> The user-visible effect of this is that a clean RMID is not available
> for re-allocation immediately after 'rmdir()' completes; this behaviour
> was never portable as it never happened on a machine with multiple
> domains.
>
> Removing this path allows resctrl_arch_rmid_read() to sleep if it's called
> with interrupts unmasked. Document this is the expected behaviour, and
> add a might_sleep() annotation to catch changes that won't work on arm64.
>
> Signed-off-by: James Morse <james.morse@arm.com>
> ---
> The previous version allowed resctrl_arch_rmid_read() to be called on the
> wrong CPUs, but now that this needs to take nohz_full and housekeeping into
> account, it's too complex.
>
> Changes since v3:
> * Removed error handling for smp_call_function_any(); this can't race
> with the cpuhp callbacks as both hold rdtgroup_mutex.
> * Switched to the alternative of removing the counter read; this simplifies
> things dramatically.
>
> Changes since v4:
> * Messed with capitalisation.
> * Removed some dead code now that entry->busy will never be zero in
> add_rmid_to_limbo().
> * Rephrased the comment above resctrl_arch_rmid_read_context_check().
> ---
> arch/x86/kernel/cpu/resctrl/monitor.c | 24 +++++-------------------
> include/linux/resctrl.h | 18 +++++++++++++++++-
> 2 files changed, 22 insertions(+), 20 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
> index 32569354c4f1..08e3307863c3 100644
> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
> @@ -283,6 +283,8 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
> struct arch_mbm_state *am;
> int ret = 0;
>
> + resctrl_arch_rmid_read_context_check();
> +
> if (!cpumask_test_cpu(smp_processor_id(), &d->cpu_mask))
> return -EINVAL;
>
> @@ -470,8 +472,6 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
> {
> struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
> struct rdt_domain *d;
> - int cpu, err;
> - u64 val = 0;
> u32 idx;
>
> lockdep_assert_held(&rdtgroup_mutex);
> @@ -479,17 +479,7 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
> idx = resctrl_arch_rmid_idx_encode(entry->closid, entry->rmid);
>
> entry->busy = 0;
> - cpu = get_cpu();
> list_for_each_entry(d, &r->domains, list) {
> - if (cpumask_test_cpu(cpu, &d->cpu_mask)) {
> - err = resctrl_arch_rmid_read(r, d, entry->closid,
> - entry->rmid,
> - QOS_L3_OCCUP_EVENT_ID,
> - &val);
> - if (err || val <= resctrl_rmid_realloc_threshold)
> - continue;
> - }
> -
> /*
> * For the first limbo RMID in the domain,
> * setup up the limbo worker.
> @@ -499,14 +489,10 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
> set_bit(idx, d->rmid_busy_llc);
> entry->busy++;
> }
> - put_cpu();
>
> - if (entry->busy) {
> - rmid_limbo_count++;
> - if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
> - closid_num_dirty_rmid[entry->closid]++;
> - } else
> - list_add_tail(&entry->list, &rmid_free_lru);
> + rmid_limbo_count++;
> + if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
> + closid_num_dirty_rmid[entry->closid]++;
> }
>
> void free_rmid(u32 closid, u32 rmid)
> diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
> index 660752406174..f7311102e94c 100644
> --- a/include/linux/resctrl.h
> +++ b/include/linux/resctrl.h
> @@ -236,7 +236,12 @@ void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d);
> * @eventid: eventid to read, e.g. L3 occupancy.
> * @val: result of the counter read in bytes.
> *
> - * Call from process context on a CPU that belongs to domain @d.
> + * Some architectures need to sleep when first programming some of the counters.
> + * (specifically: arm64's MPAM cache occupancy counters can return 'not ready'
> + * for a short period of time). Call from a non-migrateable process context on
> + * a CPU that belongs to domain @d. e.g. use smp_call_on_cpu() or
> + * schedule_work_on(). This function can be called with interrupts masked,
> + * e.g. using smp_call_function_any(), but may consistently return an error.
Considering that smp_call_function_any() explicitly disables preemption, I
would like to learn more about why you chose to word this as "interrupts
masked" vs "preemption disabled"?
> *
> * Return:
> * 0 on success, or -EIO, -EINVAL etc on error.
> @@ -245,6 +250,17 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
> u32 closid, u32 rmid, enum resctrl_event_id eventid,
> u64 *val);
>
> +/**
> + * resctrl_arch_rmid_read_context_check() - warn about invalid contexts
> + *
> + * When built with CONFIG_DEBUG_ATOMIC_SLEEP generate a warning when
> + * resctrl_arch_rmid_read() is called with preemption disabled.
> + */
> +static inline void resctrl_arch_rmid_read_context_check(void)
> +{
> + if (!irqs_disabled())
> + might_sleep();
> +}
Apologies, but even after rereading the patch as well as your response to
the previous patch version several times, I am not able to understand why the
code looks like the above. If, as the comment above says, a
warning should be generated with preemption disabled, then should it not
just be "might_sleep()" without the "!irqs_disabled()" check?
I understand how for MPAM you want its code to be called in two different
contexts so I assume that the MPAM code would have two different paths,
one that can sleep and the other that cannot, both valid. It thus sounds
as though you want the x86 code to have context checks so that any issues
that could impact arm can be caught on x86? In that case, should the
x86 code also rather have two paths (one unused and the other has the
context check)?
>
> /**
> * resctrl_arch_reset_rmid() - Reset any private state associated with rmid
Reinette
* Re: [PATCH v5 14/24] x86/resctrl: Allow resctrl_arch_rmid_read() to sleep
2023-08-09 22:36 ` Reinette Chatre
@ 2023-08-24 16:56 ` James Morse
2023-08-24 23:02 ` Reinette Chatre
0 siblings, 1 reply; 77+ messages in thread
From: James Morse @ 2023-08-24 16:56 UTC (permalink / raw)
To: Reinette Chatre, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi Reinette,
On 09/08/2023 23:36, Reinette Chatre wrote:
> On 7/28/2023 9:42 AM, James Morse wrote:
>> MPAM's cache occupancy counters can take a little while to settle once
>> the monitor has been configured. The maximum settling time is described
>> to the driver via a firmware table. The value could be large enough
>> that it makes sense to sleep. To avoid exposing this to resctrl, it
>> should be hidden behind MPAM's resctrl_arch_rmid_read().
>>
>> resctrl_arch_rmid_read() may be called via IPI meaning it is unable
>> to sleep. In this case resctrl_arch_rmid_read() should return an error
>> if it needs to sleep. This will only affect MPAM platforms where
>> the cache occupancy counter isn't available immediately, nohz_full is
>> in use, and there are no housekeeping CPUs in the necessary
>> domain.
>>
>> There are three callers of resctrl_arch_rmid_read():
>> __mon_event_count() and __check_limbo() are both called from a
>> non-migrateable context. mon_event_read() invokes __mon_event_count()
>> using smp_call_on_cpu(), which adds work to the target CPU's workqueue.
>> rdtgroup_mutex is held, meaning this cannot race with the resctrl
>> cpuhp callback. __check_limbo() is invoked via schedule_delayed_work_on(),
>> which also adds work to a per-cpu workqueue.
>>
>> The remaining call is add_rmid_to_limbo() which is called in response
>> to a user-space syscall that frees an RMID. This opportunistically
>> reads the LLC occupancy counter on the current domain to see if the
>> RMID is over the dirty threshold. This has to disable preemption to
>> avoid reading the wrong domain's value. Disabling preemption here
>> prevents resctrl_arch_rmid_read() from sleeping.
>>
>> add_rmid_to_limbo() walks each domain, but only reads the counter
>> on one domain. If the system has more than one domain, the RMID will
>> always be added to the limbo list. If the RMID's usage was not over the
>> threshold, it will be removed from the list when __check_limbo() runs.
>> Make this the default behaviour. Free RMIDs are always added to the
>> limbo list for each domain.
>>
>> The user visible effect of this is that a clean RMID is not available
>> for re-allocation immediately after 'rmdir()' completes; this behaviour
>> was never portable as it never happened on a machine with multiple
>> domains.
>>
>> Removing this path allows resctrl_arch_rmid_read() to sleep if it's called
>> with interrupts unmasked. Document that this is the expected behaviour, and
>> add a might_sleep() annotation to catch changes that won't work on arm64.
>> diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
>> index 660752406174..f7311102e94c 100644
>> --- a/include/linux/resctrl.h
>> +++ b/include/linux/resctrl.h
>> @@ -236,7 +236,12 @@ void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d);
>> * @eventid: eventid to read, e.g. L3 occupancy.
>> * @val: result of the counter read in bytes.
>> *
>> - * Call from process context on a CPU that belongs to domain @d.
>> + * Some architectures need to sleep when first programming some of the counters.
>> + * (specifically: arm64's MPAM cache occupancy counters can return 'not ready'
>> + * for a short period of time). Call from a non-migrateable process context on
>> + * a CPU that belongs to domain @d. e.g. use smp_call_on_cpu() or
>> + * schedule_work_on(). This function can be called with interrupts masked,
>> + * e.g. using smp_call_function_any(), but may consistently return an error.
>
> Considering that smp_call_function_any() explicitly disables preemption, I
> would like to learn more about why you chose to word this as "interrupts
> masked" vs "preemption disabled"?
smp_call_function_any() disables preemption while it works out which CPU to run
on, which may be this CPU, so the caller can't be migrated once it has picked the
CPU to run on. But actually doing the work is done by generic_exec_single(). This
masks interrupts if calling locally, or invokes __smp_call_single_queue() to raise
the IPI. Obviously the other end of an IPI is running with interrupts masked.
(If you wanted to schedule work on a remote CPU, that would be smp_call_on_cpu())
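For reference, a minimal sketch of the two dispatch paths being discussed. The two
calls are lifted from mon_event_read() later in this series; the tick_nohz_full_cpu()
condition is an assumption about the elided if(), not a quote from the patch:

        cpu = cpumask_any_housekeeping(&d->cpu_mask);

        /*
         * cpumask_any_housekeeping() prefers housekeeping CPUs, but on a
         * nohz_full system every CPU in the domain may be isolated. In that
         * case the counter is read over IPI with interrupts masked, so
         * resctrl_arch_rmid_read() must not sleep.
         */
        if (tick_nohz_full_cpu(cpu))
                smp_call_function_any(&d->cpu_mask, mon_event_count, rr, 1);
        else
                /* Runs from a kworker on @cpu; the read may sleep. */
                smp_call_on_cpu(cpu, smp_mon_event_count, rr, false);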
>> *
>> * Return:
>> * 0 on success, or -EIO, -EINVAL etc on error.
>> @@ -245,6 +250,17 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
>> u32 closid, u32 rmid, enum resctrl_event_id eventid,
>> u64 *val);
>>
>> +/**
>> + * resctrl_arch_rmid_read_context_check() - warn about invalid contexts
>> + *
>> + * When built with CONFIG_DEBUG_ATOMIC_SLEEP generate a warning when
>> + * resctrl_arch_rmid_read() is called with preemption disabled.
>> + */
>> +static inline void resctrl_arch_rmid_read_context_check(void)
>> +{
>> + if (!irqs_disabled())
>> + might_sleep();
>> +}
> Apologies, but even after rereading the patch as well as your response to
> the previous patch version several times, I am not able to understand why the
> code looks like the above. If, as the comment above says, a
> warning should be generated with preemption disabled, then should it not
> just be "might_sleep()" without the "!irqs_disabled()" check?
This would be simpler. But for NOHZ_FULL you wanted to keep the IPI, so the contract with
resctrl_arch_rmid_read() is that if interrupts are unmasked, it can sleep.
If it needs to sleep, the arch code has to check.
A bare might_sleep() would fire when called via IPI when NOHZ_FULL is enabled.
This check is about ensuring all code paths get checked for this condition, as it doesn't
matter for x86.
This results in MPAM's implementation of resctrl_arch_rmid_read() checking if interrupts
are masked before sending an IPI when it has to read the counters from a set of CPUs. In
the NOHZ_FULL case it can't do this, so it will always return an error.
Platforms needing this should be few and far between; I'm hoping people running NOHZ_FULL
on them will be even rarer... they'd need to carefully select their housekeeping CPUs to
make this work.
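As a sketch, an arm64 implementation honouring that contract might look like this
(mpam_counter_needs_settling() and mpam_read_counter() are hypothetical names, not
the actual MPAM driver):

        int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
                                   u32 closid, u32 rmid,
                                   enum resctrl_event_id eventid, u64 *val)
        {
                /*
                 * Waiting for a 'not ready' counter to settle means sleeping.
                 * When called with interrupts masked (e.g. over IPI on a
                 * nohz_full CPU) that isn't possible, so fail instead.
                 */
                if (irqs_disabled() && mpam_counter_needs_settling(r, eventid))
                        return -EIO;

                return mpam_read_counter(r, d, closid, rmid, eventid, val);
        }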
> I understand how for MPAM you want its code to be called in two different
> contexts so I assume that the MPAM code would have two different paths,
> one that can sleep and the other that cannot, both valid. It thus sounds
> as though you want the x86 code to have context checks so that any issues
> that could impact arm can be caught on x86? In that case, should the
> x86 code also rather have two paths (one unused and the other has the
> context check)?
I did toy with having resctrl_arch_rmid_read_nosleep() and resctrl_arch_rmid_read(). But
this resulted in more code for both architectures; I felt it was simpler to just document
this requirement with this check. It's what resctrl is already doing.
resctrl_arch_rmid_read_nosleep() could be called from irq context.
resctrl_arch_rmid_read() can sleep.
On x86 resctrl_arch_rmid_read() would call resctrl_arch_rmid_read_nosleep() ... and on
arm64 the exact same thing would happen, as the irqs_disabled() check is buried deep
in the mpam driver; the resctrl glue code doesn't need to check for this.
The split approach would be simpler to document - but much more confusing as both
architectures call one helper from the other.
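For comparison, a sketch of that rejected split (the _nosleep() name is from the
paragraph above; the x86 body is illustrative):

        /* Safe from IRQ context: returns an error rather than sleeping. */
        int resctrl_arch_rmid_read_nosleep(struct rdt_resource *r,
                                           struct rdt_domain *d, u32 closid,
                                           u32 rmid, enum resctrl_event_id eventid,
                                           u64 *val);

        /* May sleep. On x86 nothing ever waits, so it is the same call. */
        static inline int resctrl_arch_rmid_read(struct rdt_resource *r,
                                                 struct rdt_domain *d, u32 closid,
                                                 u32 rmid,
                                                 enum resctrl_event_id eventid,
                                                 u64 *val)
        {
                return resctrl_arch_rmid_read_nosleep(r, d, closid, rmid,
                                                      eventid, val);
        }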
Thanks,
James
* Re: [PATCH v5 14/24] x86/resctrl: Allow resctrl_arch_rmid_read() to sleep
2023-08-24 16:56 ` James Morse
@ 2023-08-24 23:02 ` Reinette Chatre
2023-09-08 15:58 ` James Morse
0 siblings, 1 reply; 77+ messages in thread
From: Reinette Chatre @ 2023-08-24 23:02 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi James,
On 8/24/2023 9:56 AM, James Morse wrote:
> On 09/08/2023 23:36, Reinette Chatre wrote:
>> On 7/28/2023 9:42 AM, James Morse wrote:
>>> MPAM's cache occupancy counters can take a little while to settle once
>>> the monitor has been configured. The maximum settling time is described
>>> to the driver via a firmware table. The value could be large enough
>>> that it makes sense to sleep. To avoid exposing this to resctrl, it
>>> should be hidden behind MPAM's resctrl_arch_rmid_read().
>>>
>>> resctrl_arch_rmid_read() may be called via IPI meaning it is unable
>>> to sleep. In this case resctrl_arch_rmid_read() should return an error
>>> if it needs to sleep. This will only affect MPAM platforms where
>>> the cache occupancy counter isn't available immediately, nohz_full is
>>> in use, and there are no housekeeping CPUs in the necessary
>>> domain.
>>>
>>> There are three callers of resctrl_arch_rmid_read():
>>> __mon_event_count() and __check_limbo() are both called from a
>>> non-migrateable context. mon_event_read() invokes __mon_event_count()
>>> using smp_call_on_cpu(), which adds work to the target CPU's workqueue.
>>> rdtgroup_mutex is held, meaning this cannot race with the resctrl
>>> cpuhp callback. __check_limbo() is invoked via schedule_delayed_work_on(),
>>> which also adds work to a per-cpu workqueue.
>>>
>>> The remaining call is add_rmid_to_limbo() which is called in response
>>> to a user-space syscall that frees an RMID. This opportunistically
>>> reads the LLC occupancy counter on the current domain to see if the
>>> RMID is over the dirty threshold. This has to disable preemption to
>>> avoid reading the wrong domain's value. Disabling preemption here
>>> prevents resctrl_arch_rmid_read() from sleeping.
>>>
>>> add_rmid_to_limbo() walks each domain, but only reads the counter
>>> on one domain. If the system has more than one domain, the RMID will
>>> always be added to the limbo list. If the RMID's usage was not over the
>>> threshold, it will be removed from the list when __check_limbo() runs.
>>> Make this the default behaviour. Free RMIDs are always added to the
>>> limbo list for each domain.
>>>
>>> The user visible effect of this is that a clean RMID is not available
>>> for re-allocation immediately after 'rmdir()' completes; this behaviour
>>> was never portable as it never happened on a machine with multiple
>>> domains.
>>>
>>> Removing this path allows resctrl_arch_rmid_read() to sleep if it's called
>>> with interrupts unmasked. Document that this is the expected behaviour, and
>>> add a might_sleep() annotation to catch changes that won't work on arm64.
>
>
>>> diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
>>> index 660752406174..f7311102e94c 100644
>>> --- a/include/linux/resctrl.h
>>> +++ b/include/linux/resctrl.h
>>> @@ -236,7 +236,12 @@ void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d);
>>> * @eventid: eventid to read, e.g. L3 occupancy.
>>> * @val: result of the counter read in bytes.
>>> *
>>> - * Call from process context on a CPU that belongs to domain @d.
>>> + * Some architectures need to sleep when first programming some of the counters.
>>> + * (specifically: arm64's MPAM cache occupancy counters can return 'not ready'
>>> + * for a short period of time). Call from a non-migrateable process context on
>>> + * a CPU that belongs to domain @d. e.g. use smp_call_on_cpu() or
>>> + * schedule_work_on(). This function can be called with interrupts masked,
>>> + * e.g. using smp_call_function_any(), but may consistently return an error.
>>
>> Considering that smp_call_function_any() explicitly disables preemption, I
>> would like to learn more about why you chose to word this as "interrupts
>> masked" vs "preemption disabled"?
>
> smp_call_function_any() disables preemption while it works out which CPU to run
> on, which may be this CPU, so the caller can't be migrated once it has picked the
> CPU to run on. But actually doing the work is done by generic_exec_single(). This
> masks interrupts if calling locally, or invokes __smp_call_single_queue() to raise
> the IPI. Obviously the other end of an IPI is running with interrupts masked.
I see, thank you for the detailed explanation.
>
> (If you wanted to schedule work on a remote CPU, that would be smp_call_on_cpu())
>
>
>>> *
>>> * Return:
>>> * 0 on success, or -EIO, -EINVAL etc on error.
>>> @@ -245,6 +250,17 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
>>> u32 closid, u32 rmid, enum resctrl_event_id eventid,
>>> u64 *val);
>>>
>>> +/**
>>> + * resctrl_arch_rmid_read_context_check() - warn about invalid contexts
>>> + *
>>> + * When built with CONFIG_DEBUG_ATOMIC_SLEEP generate a warning when
>>> + * resctrl_arch_rmid_read() is called with preemption disabled.
>>> + */
>>> +static inline void resctrl_arch_rmid_read_context_check(void)
>>> +{
>>> + if (!irqs_disabled())
>>> + might_sleep();
>>> +}
>
>> Apologies, but even after rereading the patch as well as your response to
>> the previous patch version several times, I am not able to understand why the
>> code looks like the above. If, as the comment above says, a
>> warning should be generated with preemption disabled, then should it not
>> just be "might_sleep()" without the "!irqs_disabled()" check?
>
> This would be simpler. But for NOHZ_FULL you wanted to keep the IPI, so the contract with
> resctrl_arch_rmid_read() is that if interrupts are unmasked, it can sleep.
Thank you. This appears to be the key. Could you please add this
information to resctrl_arch_rmid_read_context_check()'s description?
> If it needs to sleep, the arch code has to check.
> A bare might_sleep() would fire when called via IPI when NOHZ_FULL is enabled.
>
> This check is about ensuring all code paths get checked for this condition, as it doesn't
> matter for x86.
>
>
> This results in MPAM's implementation of resctrl_arch_rmid_read() checking if interrupts
> are masked before sending an IPI when it has to read the counters from a set of CPUs. In
> the NOHZ_FULL case it can't do this, so it will always return an error.
> Platforms needing this should be few and far between; I'm hoping people running NOHZ_FULL
> on them will be even rarer... they'd need to carefully select their housekeeping CPUs to
> make this work.
>
>
>> I understand how for MPAM you want its code to be called in two different
>> contexts so I assume that the MPAM code would have two different paths,
>> one that can sleep and the other that cannot, both valid. It thus sounds
>> as though you want the x86 code to have context checks so that any issues
>> that could impact arm can be caught on x86? In that case, should the
>> x86 code also rather have two paths (one unused and the other has the
>> context check)?
>
> I did toy with having resctrl_arch_rmid_read_nosleep() and resctrl_arch_rmid_read(). But
> this resulted in more code for both architectures; I felt it was simpler to just document
> this requirement with this check. It's what resctrl is already doing.
>
> resctrl_arch_rmid_read_nosleep() could be called from irq context.
> resctrl_arch_rmid_read() can sleep.
>
> On x86 resctrl_arch_rmid_read() would call resctrl_arch_rmid_read_nosleep() ... and on
> arm64 the exact same thing would happen, as the irqs_disabled() check is buried deep
> in the mpam driver; the resctrl glue code doesn't need to check for this.
>
> The split approach would be simpler to document - but much more confusing as both
> architectures call one helper from the other.
I see. Thank you for considering the idea.
Reinette
* Re: [PATCH v5 14/24] x86/resctrl: Allow resctrl_arch_rmid_read() to sleep
2023-08-24 23:02 ` Reinette Chatre
@ 2023-09-08 15:58 ` James Morse
2023-09-08 20:15 ` Reinette Chatre
0 siblings, 1 reply; 77+ messages in thread
From: James Morse @ 2023-09-08 15:58 UTC (permalink / raw)
To: Reinette Chatre, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi Reinette,
On 8/25/23 00:02, Reinette Chatre wrote:
> On 8/24/2023 9:56 AM, James Morse wrote:
>> On 09/08/2023 23:36, Reinette Chatre wrote:
>>> On 7/28/2023 9:42 AM, James Morse wrote:
>>>> MPAM's cache occupancy counters can take a little while to settle once
>>>> the monitor has been configured. The maximum settling time is described
>>>> to the driver via a firmware table. The value could be large enough
>>>> that it makes sense to sleep. To avoid exposing this to resctrl, it
>>>> should be hidden behind MPAM's resctrl_arch_rmid_read().
>>>>
>>>> resctrl_arch_rmid_read() may be called via IPI meaning it is unable
>>>> to sleep. In this case resctrl_arch_rmid_read() should return an error
>>>> if it needs to sleep. This will only affect MPAM platforms where
>>>> the cache occupancy counter isn't available immediately, nohz_full is
>>>> in use, and there are no housekeeping CPUs in the necessary
>>>> domain.
>>>>
>>>> There are three callers of resctrl_arch_rmid_read():
>>>> __mon_event_count() and __check_limbo() are both called from a
>>>> non-migrateable context. mon_event_read() invokes __mon_event_count()
>>>> using smp_call_on_cpu(), which adds work to the target CPU's workqueue.
>>>> rdtgroup_mutex is held, meaning this cannot race with the resctrl
>>>> cpuhp callback. __check_limbo() is invoked via schedule_delayed_work_on(),
>>>> which also adds work to a per-cpu workqueue.
>>>>
>>>> The remaining call is add_rmid_to_limbo() which is called in response
>>>> to a user-space syscall that frees an RMID. This opportunistically
>>>> reads the LLC occupancy counter on the current domain to see if the
>>>> RMID is over the dirty threshold. This has to disable preemption to
>>>> avoid reading the wrong domain's value. Disabling preemption here
>>>> prevents resctrl_arch_rmid_read() from sleeping.
>>>>
>>>> add_rmid_to_limbo() walks each domain, but only reads the counter
>>>> on one domain. If the system has more than one domain, the RMID will
>>>> always be added to the limbo list. If the RMID's usage was not over the
>>>> threshold, it will be removed from the list when __check_limbo() runs.
>>>> Make this the default behaviour. Free RMIDs are always added to the
>>>> limbo list for each domain.
>>>>
>>>> The user visible effect of this is that a clean RMID is not available
>>>> for re-allocation immediately after 'rmdir()' completes; this behaviour
>>>> was never portable as it never happened on a machine with multiple
>>>> domains.
>>>>
>>>> Removing this path allows resctrl_arch_rmid_read() to sleep if it's called
>>>> with interrupts unmasked. Document that this is the expected behaviour, and
>>>> add a might_sleep() annotation to catch changes that won't work on arm64.
>>
>>
>>>> diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
>>>> index 660752406174..f7311102e94c 100644
>>>> --- a/include/linux/resctrl.h
>>>> +++ b/include/linux/resctrl.h
>>>> @@ -245,6 +250,17 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
>>>> u32 closid, u32 rmid, enum resctrl_event_id eventid,
>>>> u64 *val);
>>>>
>>>> +/**
>>>> + * resctrl_arch_rmid_read_context_check() - warn about invalid contexts
>>>> + *
>>>> + * When built with CONFIG_DEBUG_ATOMIC_SLEEP generate a warning when
>>>> + * resctrl_arch_rmid_read() is called with preemption disabled.
>>>> + */
>>>> +static inline void resctrl_arch_rmid_read_context_check(void)
>>>> +{
>>>> + if (!irqs_disabled())
>>>> + might_sleep();
>>>> +}
>>
>>> Apologies, but even after rereading the patch as well as your response to
>>> the previous patch version several times, I am not able to understand why the
>>> code looks like the above. If, as the comment above says, a
>>> warning should be generated with preemption disabled, then should it not
>>> just be "might_sleep()" without the "!irqs_disabled()" check?
>>
>> This would be simpler. But for NOHZ_FULL you wanted to keep the IPI, so the contract with
>> resctrl_arch_rmid_read() is that if interrupts are unmasked, it can sleep.
>
> Thank you. This appears to be the key. Could you please add this
> information to resctrl_arch_rmid_read_context_check()'s description?
That comment now reads:
* resctrl_arch_rmid_read_context_check() - warn about invalid contexts
*
* When built with CONFIG_DEBUG_ATOMIC_SLEEP generate a warning when
* resctrl_arch_rmid_read() is called with preemption disabled.
*
* The contract with resctrl_arch_rmid_read() is that if interrupts
* are unmasked, it can sleep. This allows NOHZ_FULL systems to use an
* IPI (and fail if the call needed to sleep), while most of the time
* the work is scheduled, allowing the call to sleep.
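Combined with the inline helper from this patch, that gives:

        /**
         * resctrl_arch_rmid_read_context_check() - warn about invalid contexts
         *
         * When built with CONFIG_DEBUG_ATOMIC_SLEEP generate a warning when
         * resctrl_arch_rmid_read() is called with preemption disabled.
         *
         * The contract with resctrl_arch_rmid_read() is that if interrupts
         * are unmasked, it can sleep. This allows NOHZ_FULL systems to use an
         * IPI (and fail if the call needed to sleep), while most of the time
         * the work is scheduled, allowing the call to sleep.
         */
        static inline void resctrl_arch_rmid_read_context_check(void)
        {
                if (!irqs_disabled())
                        might_sleep();
        }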
Thanks,
James
* Re: [PATCH v5 14/24] x86/resctrl: Allow resctrl_arch_rmid_read() to sleep
2023-09-08 15:58 ` James Morse
@ 2023-09-08 20:15 ` Reinette Chatre
0 siblings, 0 replies; 77+ messages in thread
From: Reinette Chatre @ 2023-09-08 20:15 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi James,
On 9/8/2023 8:58 AM, James Morse wrote:
>
> That comment now reads:
> * resctrl_arch_rmid_read_context_check() - warn about invalid contexts
> *
> * When built with CONFIG_DEBUG_ATOMIC_SLEEP generate a warning when
> * resctrl_arch_rmid_read() is called with preemption disabled.
> *
> * The contract with resctrl_arch_rmid_read() is that if interrupts
> * are unmasked, it can sleep. This allows NOHZ_FULL systems to use an
> * IPI, (and fail if the call needed to sleep), while most of the time
> * the work is scheduled, allowing the call to sleep.
>
Thank you very much.
Reinette
* [PATCH v5 15/24] x86/resctrl: Allow arch to allocate memory needed in resctrl_arch_rmid_read()
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
` (13 preceding siblings ...)
2023-07-28 16:42 ` [PATCH v5 14/24] x86/resctrl: Allow resctrl_arch_rmid_read() to sleep James Morse
@ 2023-07-28 16:42 ` James Morse
2023-08-09 22:37 ` Reinette Chatre
2023-07-28 16:42 ` [PATCH v5 16/24] x86/resctrl: Make resctrl_mounted checks explicit James Morse
` (10 subsequent siblings)
25 siblings, 1 reply; 77+ messages in thread
From: James Morse @ 2023-07-28 16:42 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
Depending on the number of monitors available, Arm's MPAM may need to
allocate a monitor prior to reading the counter value. Allocating a
contended resource may involve sleeping.
add_rmid_to_limbo() calls resctrl_arch_rmid_read() for multiple domains;
the allocation should be valid for all domains.
__check_limbo() and mon_event_count() each make multiple calls to
resctrl_arch_rmid_read(); to avoid extra work on contended systems,
the allocation should be valid for multiple invocations of
resctrl_arch_rmid_read().
Add arch hooks for this allocation, which need calling before
resctrl_arch_rmid_read(). The allocated monitor is passed to
resctrl_arch_rmid_read(), then freed again afterwards. The helper
can be called on any CPU, and can sleep.
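The calling pattern the hooks introduce looks like this (condensed from the
__check_limbo() hunk below, with the error handling elided):

        void *arch_mon_ctx;
        u64 val = 0;
        int err;

        /* May sleep: waits for a hardware monitor on MPAM, a no-op on x86. */
        arch_mon_ctx = resctrl_arch_mon_ctx_alloc(r, QOS_L3_OCCUP_EVENT_ID);
        if (IS_ERR(arch_mon_ctx))
                return;

        /* The same context stays valid across repeated reads. */
        err = resctrl_arch_rmid_read(r, d, closid, rmid,
                                     QOS_L3_OCCUP_EVENT_ID, &val, arch_mon_ctx);

        resctrl_arch_mon_ctx_free(r, QOS_L3_OCCUP_EVENT_ID, arch_mon_ctx);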
Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
Signed-off-by: James Morse <james.morse@arm.com>
---
Changes since v3:
* Expanded comment.
* Removed stray header include.
* Reworded commit message.
* Made ctx a void * instead of an int.
Changes since v4:
* Used IS_ERR() in more places.
---
arch/x86/include/asm/resctrl.h | 11 ++++++++++
arch/x86/kernel/cpu/resctrl/ctrlmondata.c | 5 +++++
arch/x86/kernel/cpu/resctrl/internal.h | 1 +
arch/x86/kernel/cpu/resctrl/monitor.c | 25 ++++++++++++++++++++---
include/linux/resctrl.h | 5 ++++-
5 files changed, 43 insertions(+), 4 deletions(-)
diff --git a/arch/x86/include/asm/resctrl.h b/arch/x86/include/asm/resctrl.h
index 66d9e18cdc61..0986b5208d76 100644
--- a/arch/x86/include/asm/resctrl.h
+++ b/arch/x86/include/asm/resctrl.h
@@ -136,6 +136,17 @@ static inline u32 resctrl_arch_rmid_idx_encode(u32 ignored, u32 rmid)
return rmid;
}
+/* x86 can always read an rmid, nothing needs allocating */
+struct rdt_resource;
+static inline void *resctrl_arch_mon_ctx_alloc(struct rdt_resource *r, int evtid)
+{
+ might_sleep();
+ return NULL;
+};
+
+static inline void resctrl_arch_mon_ctx_free(struct rdt_resource *r, int evtid,
+ void *ctx) { };
+
void resctrl_cpu_detect(struct cpuinfo_x86 *c);
#else
diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
index bd263b9a0abd..55bad57a7bd5 100644
--- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
+++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
@@ -546,6 +546,9 @@ void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
rr->d = d;
rr->val = 0;
rr->first = first;
+ rr->arch_mon_ctx = resctrl_arch_mon_ctx_alloc(r, evtid);
+ if (IS_ERR(rr->arch_mon_ctx))
+ return;
cpu = cpumask_any_housekeeping(&d->cpu_mask);
@@ -559,6 +562,8 @@ void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
smp_call_function_any(&d->cpu_mask, mon_event_count, rr, 1);
else
smp_call_on_cpu(cpu, smp_mon_event_count, rr, false);
+
+ resctrl_arch_mon_ctx_free(r, evtid, rr->arch_mon_ctx);
}
int rdtgroup_mondata_show(struct seq_file *m, void *arg)
diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index 7012f42a82ee..45db51280ff4 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -136,6 +136,7 @@ struct rmid_read {
bool first;
int err;
u64 val;
+ void *arch_mon_ctx;
};
extern bool rdt_alloc_capable;
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index 08e3307863c3..5eed8d0cbf36 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -275,7 +275,7 @@ static u64 mbm_overflow_count(u64 prev_msr, u64 cur_msr, unsigned int width)
int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
u32 closid, u32 rmid, enum resctrl_event_id eventid,
- u64 *val)
+ u64 *val, void *ignored)
{
struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
struct rdt_hw_domain *hw_dom = resctrl_to_arch_dom(d);
@@ -342,9 +342,14 @@ void __check_limbo(struct rdt_domain *d, bool force_free)
u32 idx_limit = resctrl_arch_system_num_rmid_idx();
struct rmid_entry *entry;
u32 idx, cur_idx = 1;
+ void *arch_mon_ctx;
bool rmid_dirty;
u64 val = 0;
+ arch_mon_ctx = resctrl_arch_mon_ctx_alloc(r, QOS_L3_OCCUP_EVENT_ID);
+ if (IS_ERR(arch_mon_ctx))
+ return;
+
/*
* Skip RMID 0 and start from RMID 1 and check all the RMIDs that
* are marked as busy for occupancy < threshold. If the occupancy
@@ -358,7 +363,8 @@ void __check_limbo(struct rdt_domain *d, bool force_free)
entry = __rmid_entry(idx);
if (resctrl_arch_rmid_read(r, d, entry->closid, entry->rmid,
- QOS_L3_OCCUP_EVENT_ID, &val)) {
+ QOS_L3_OCCUP_EVENT_ID, &val,
+ arch_mon_ctx)) {
rmid_dirty = true;
} else {
rmid_dirty = (val >= resctrl_rmid_realloc_threshold);
@@ -371,6 +377,8 @@ void __check_limbo(struct rdt_domain *d, bool force_free)
}
cur_idx = idx + 1;
}
+
+ resctrl_arch_mon_ctx_free(r, QOS_L3_OCCUP_EVENT_ID, arch_mon_ctx);
}
bool has_busy_rmid(struct rdt_domain *d)
@@ -544,7 +552,7 @@ static int __mon_event_count(u32 closid, u32 rmid, struct rmid_read *rr)
}
rr->err = resctrl_arch_rmid_read(rr->r, rr->d, closid, rmid, rr->evtid,
- &tval);
+ &tval, rr->arch_mon_ctx);
if (rr->err)
return rr->err;
@@ -754,11 +762,21 @@ static void mbm_update(struct rdt_resource *r, struct rdt_domain *d,
if (is_mbm_total_enabled()) {
rr.evtid = QOS_L3_MBM_TOTAL_EVENT_ID;
rr.val = 0;
+ rr.arch_mon_ctx = resctrl_arch_mon_ctx_alloc(rr.r, rr.evtid);
+ if (IS_ERR(rr.arch_mon_ctx))
+ return;
+
__mon_event_count(closid, rmid, &rr);
+
+ resctrl_arch_mon_ctx_free(rr.r, rr.evtid, rr.arch_mon_ctx);
}
if (is_mbm_local_enabled()) {
rr.evtid = QOS_L3_MBM_LOCAL_EVENT_ID;
rr.val = 0;
+ rr.arch_mon_ctx = resctrl_arch_mon_ctx_alloc(rr.r, rr.evtid);
+ if (IS_ERR(rr.arch_mon_ctx))
+ return;
+
__mon_event_count(closid, rmid, &rr);
/*
@@ -768,6 +786,7 @@ static void mbm_update(struct rdt_resource *r, struct rdt_domain *d,
*/
if (is_mba_sc(NULL))
mbm_bw_count(closid, rmid, &rr);
+ resctrl_arch_mon_ctx_free(rr.r, rr.evtid, rr.arch_mon_ctx);
}
}
diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
index f7311102e94c..5e4b4df9610b 100644
--- a/include/linux/resctrl.h
+++ b/include/linux/resctrl.h
@@ -235,6 +235,9 @@ void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d);
* @rmid: rmid of the counter to read.
* @eventid: eventid to read, e.g. L3 occupancy.
* @val: result of the counter read in bytes.
+ * @arch_mon_ctx: An architecture specific value from
+ * resctrl_arch_mon_ctx_alloc(), for MPAM this identifies
+ * the hardware monitor allocated for this read request.
*
* Some architectures need to sleep when first programming some of the counters.
* (specifically: arm64's MPAM cache occupancy counters can return 'not ready'
@@ -248,7 +251,7 @@ void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d);
*/
int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
u32 closid, u32 rmid, enum resctrl_event_id eventid,
- u64 *val);
+ u64 *val, void *arch_mon_ctx);
/**
* resctrl_arch_rmid_read_context_check() - warn about invalid contexts
--
2.39.2
* Re: [PATCH v5 15/24] x86/resctrl: Allow arch to allocate memory needed in resctrl_arch_rmid_read()
2023-07-28 16:42 ` [PATCH v5 15/24] x86/resctrl: Allow arch to allocate memory needed in resctrl_arch_rmid_read() James Morse
@ 2023-08-09 22:37 ` Reinette Chatre
2023-08-24 16:56 ` James Morse
0 siblings, 1 reply; 77+ messages in thread
From: Reinette Chatre @ 2023-08-09 22:37 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi James,
On 7/28/2023 9:42 AM, James Morse wrote:
> Depending on the number of monitors available, Arm's MPAM may need to
> allocate a monitor prior to reading the counter value. Allocating a
> contended resource may involve sleeping.
>
> add_rmid_to_limbo() calls resctrl_arch_rmid_read() for multiple domains;
> the allocation should be valid for all domains.
>
> __check_limbo() and mon_event_count() each make multiple calls to
> resctrl_arch_rmid_read(); to avoid extra work on contended systems,
> the allocation should be valid for multiple invocations of
> resctrl_arch_rmid_read().
>
> Add arch hooks for this allocation, which need calling before
> resctrl_arch_rmid_read(). The allocated monitor is passed to
> resctrl_arch_rmid_read(), then freed again afterwards. The helper
> can be called on any CPU, and can sleep.
>
> Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
> Signed-off-by: James Morse <james.morse@arm.com>
> ---
> Changes since v3:
> * Expanded comment.
> * Removed stray header include.
> * Reworded commit message.
> * Made ctx a void * instead of an int.
>
> Changes since v4:
> * Used IS_ERR() in more places.
> ---
> arch/x86/include/asm/resctrl.h | 11 ++++++++++
> arch/x86/kernel/cpu/resctrl/ctrlmondata.c | 5 +++++
> arch/x86/kernel/cpu/resctrl/internal.h | 1 +
> arch/x86/kernel/cpu/resctrl/monitor.c | 25 ++++++++++++++++++++---
> include/linux/resctrl.h | 5 ++++-
> 5 files changed, 43 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/include/asm/resctrl.h b/arch/x86/include/asm/resctrl.h
> index 66d9e18cdc61..0986b5208d76 100644
> --- a/arch/x86/include/asm/resctrl.h
> +++ b/arch/x86/include/asm/resctrl.h
> @@ -136,6 +136,17 @@ static inline u32 resctrl_arch_rmid_idx_encode(u32 ignored, u32 rmid)
> return rmid;
> }
>
> +/* x86 can always read an rmid, nothing needs allocating */
> +struct rdt_resource;
> +static inline void *resctrl_arch_mon_ctx_alloc(struct rdt_resource *r, int evtid)
> +{
> + might_sleep();
> + return NULL;
> +};
> +
> +static inline void resctrl_arch_mon_ctx_free(struct rdt_resource *r, int evtid,
> + void *ctx) { };
> +
> void resctrl_cpu_detect(struct cpuinfo_x86 *c);
>
> #else
> diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
> index bd263b9a0abd..55bad57a7bd5 100644
> --- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
> +++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
> @@ -546,6 +546,9 @@ void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
> rr->d = d;
> rr->val = 0;
> rr->first = first;
> + rr->arch_mon_ctx = resctrl_arch_mon_ctx_alloc(r, evtid);
> + if (IS_ERR(rr->arch_mon_ctx))
> + return;
>
> cpu = cpumask_any_housekeeping(&d->cpu_mask);
>
> @@ -559,6 +562,8 @@ void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
> smp_call_function_any(&d->cpu_mask, mon_event_count, rr, 1);
> else
> smp_call_on_cpu(cpu, smp_mon_event_count, rr, false);
> +
> + resctrl_arch_mon_ctx_free(r, evtid, rr->arch_mon_ctx);
> }
>
> int rdtgroup_mondata_show(struct seq_file *m, void *arg)
> diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
> index 7012f42a82ee..45db51280ff4 100644
> --- a/arch/x86/kernel/cpu/resctrl/internal.h
> +++ b/arch/x86/kernel/cpu/resctrl/internal.h
> @@ -136,6 +136,7 @@ struct rmid_read {
> bool first;
> int err;
> u64 val;
> + void *arch_mon_ctx;
> };
>
> extern bool rdt_alloc_capable;
> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
> index 08e3307863c3..5eed8d0cbf36 100644
> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
> @@ -275,7 +275,7 @@ static u64 mbm_overflow_count(u64 prev_msr, u64 cur_msr, unsigned int width)
>
> int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
> u32 closid, u32 rmid, enum resctrl_event_id eventid,
> - u64 *val)
> + u64 *val, void *ignored)
> {
> struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
> struct rdt_hw_domain *hw_dom = resctrl_to_arch_dom(d);
> @@ -342,9 +342,14 @@ void __check_limbo(struct rdt_domain *d, bool force_free)
> u32 idx_limit = resctrl_arch_system_num_rmid_idx();
> struct rmid_entry *entry;
> u32 idx, cur_idx = 1;
> + void *arch_mon_ctx;
> bool rmid_dirty;
> u64 val = 0;
>
> + arch_mon_ctx = resctrl_arch_mon_ctx_alloc(r, QOS_L3_OCCUP_EVENT_ID);
> + if (IS_ERR(arch_mon_ctx))
> + return;
> +
> /*
> * Skip RMID 0 and start from RMID 1 and check all the RMIDs that
> * are marked as busy for occupancy < threshold. If the occupancy
> @@ -358,7 +363,8 @@ void __check_limbo(struct rdt_domain *d, bool force_free)
>
> entry = __rmid_entry(idx);
> if (resctrl_arch_rmid_read(r, d, entry->closid, entry->rmid,
> - QOS_L3_OCCUP_EVENT_ID, &val)) {
> + QOS_L3_OCCUP_EVENT_ID, &val,
> + arch_mon_ctx)) {
> rmid_dirty = true;
> } else {
> rmid_dirty = (val >= resctrl_rmid_realloc_threshold);
> @@ -371,6 +377,8 @@ void __check_limbo(struct rdt_domain *d, bool force_free)
> }
> cur_idx = idx + 1;
> }
> +
> + resctrl_arch_mon_ctx_free(r, QOS_L3_OCCUP_EVENT_ID, arch_mon_ctx);
> }
>
> bool has_busy_rmid(struct rdt_domain *d)
> @@ -544,7 +552,7 @@ static int __mon_event_count(u32 closid, u32 rmid, struct rmid_read *rr)
> }
>
> rr->err = resctrl_arch_rmid_read(rr->r, rr->d, closid, rmid, rr->evtid,
> - &tval);
> + &tval, rr->arch_mon_ctx);
> if (rr->err)
> return rr->err;
>
> @@ -754,11 +762,21 @@ static void mbm_update(struct rdt_resource *r, struct rdt_domain *d,
> if (is_mbm_total_enabled()) {
> rr.evtid = QOS_L3_MBM_TOTAL_EVENT_ID;
> rr.val = 0;
> + rr.arch_mon_ctx = resctrl_arch_mon_ctx_alloc(rr.r, rr.evtid);
> + if (IS_ERR(rr.arch_mon_ctx))
> + return;
> +
> __mon_event_count(closid, rmid, &rr);
> +
> + resctrl_arch_mon_ctx_free(rr.r, rr.evtid, rr.arch_mon_ctx);
> }
> if (is_mbm_local_enabled()) {
> rr.evtid = QOS_L3_MBM_LOCAL_EVENT_ID;
> rr.val = 0;
> + rr.arch_mon_ctx = resctrl_arch_mon_ctx_alloc(rr.r, rr.evtid);
> + if (IS_ERR(rr.arch_mon_ctx))
> + return;
> +
> __mon_event_count(closid, rmid, &rr);
>
> /*
> @@ -768,6 +786,7 @@ static void mbm_update(struct rdt_resource *r, struct rdt_domain *d,
> */
> if (is_mba_sc(NULL))
> mbm_bw_count(closid, rmid, &rr);
> + resctrl_arch_mon_ctx_free(rr.r, rr.evtid, rr.arch_mon_ctx);
> }
> }
>
> diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
> index f7311102e94c..5e4b4df9610b 100644
> --- a/include/linux/resctrl.h
> +++ b/include/linux/resctrl.h
> @@ -235,6 +235,9 @@ void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d);
> * @rmid: rmid of the counter to read.
> * @eventid: eventid to read, e.g. L3 occupancy.
> * @val: result of the counter read in bytes.
> + * @arch_mon_ctx: An architecture specific value from
> + * resctrl_arch_mon_ctx_alloc(), for MPAM this identifies
> + * the hardware monitor allocated for this read request.
> *
> * Some architectures need to sleep when first programming some of the counters.
> * (specifically: arm64's MPAM cache occupancy counters can return 'not ready'
> @@ -248,7 +251,7 @@ void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d);
> */
> int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
> u32 closid, u32 rmid, enum resctrl_event_id eventid,
> - u64 *val);
> + u64 *val, void *arch_mon_ctx);
>
> /**
> * resctrl_arch_rmid_read_context_check() - warn about invalid contexts
Looking at the error paths, all the errors are silent failures. On the
failure in mon_event_read() this could potentially be handled by setting
the "err" field in struct rmid_read ... at least then the caller can print
an error instead of displaying a zero count to the user. The other failures
are harder to handle though. Considering that these contexts are allocated and
freed so often, why not allocate them once (perhaps in struct rdt_hw_domain?)
on driver load with clear error handling?
Reinette
* Re: [PATCH v5 15/24] x86/resctrl: Allow arch to allocate memory needed in resctrl_arch_rmid_read()
2023-08-09 22:37 ` Reinette Chatre
@ 2023-08-24 16:56 ` James Morse
2023-08-24 23:04 ` Reinette Chatre
0 siblings, 1 reply; 77+ messages in thread
From: James Morse @ 2023-08-24 16:56 UTC (permalink / raw)
To: Reinette Chatre, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi Reinette,
On 09/08/2023 23:37, Reinette Chatre wrote:
> On 7/28/2023 9:42 AM, James Morse wrote:
>> Depending on the number of monitors available, Arm's MPAM may need to
>> allocate a monitor prior to reading the counter value. Allocating a
>> contended resource may involve sleeping.
>>
>> add_rmid_to_limbo() calls resctrl_arch_rmid_read() for multiple domains;
>> the allocation should be valid for all domains.
>>
>> __check_limbo() and mon_event_count() each make multiple calls to
>> resctrl_arch_rmid_read(); to avoid extra work on contended systems,
>> the allocation should be valid for multiple invocations of
>> resctrl_arch_rmid_read().
>>
>> Add arch hooks for this allocation, which need calling before
>> resctrl_arch_rmid_read(). The allocated monitor is passed to
>> resctrl_arch_rmid_read(), then freed again afterwards. The helper
>> can be called on any CPU, and can sleep.
> Looking at the error paths, all the errors are silent failures.
Yeah, I don't really expect this to ever fail. The memory arm64 needs to allocate is
smaller than a pointer - if that fails, I think there are bigger problems. The hardware
resource is something the call will wait for.
As you note, it's hard to propagate an unlikely error back from here.
> On the
> failure in mon_event_read() this could potentially be handled by setting
> the "err" field in struct rmid_read ... at least then the caller can print
> an error instead of displaying a zero count to the user.
Sure, that covers the one a human being might see.
> The other failures are harder to handle though.
I don't think the silent failure is such a bad thing. For the limbo handler, no RMID moves
between the lists until the handler is able to make progress.
For the overflow handler, it's possible an overflow will get missed (I still have an
overflow interrupt I can use here). But I don't think this will be the biggest problem on
a machine that is struggling to allocate 4 bytes.
> Considering that these contexts are allocated and
> freed so often, why not allocate them once (perhaps in struct rdt_hw_domain?)
> on driver load with clear error handling?
Because the resource they represent is scarce. You may have 100 control or monitor groups,
but only 10 hardware monitors. The hardware monitor has to be allocated and programmed
before it can be read.
This works well for the llc_occupancy counter, but not for bandwidth counters, which with
the current 'free running' ABI have to all be allocated and programmed at the beginning of
time. If there are enough monitors to do that - the MPAM driver will, and these
allocate/free calls will just be looking up the pre-allocated/pre-programmed monitor.
Doing the allocation like this keeps that logic in the mpam driver, and allows concurrent
access to resctrl_arch_rmid_read(), which is something any future PMU support will need.
I don't have any numbers on how many monitors any platform is going to have, but I'm
confident there are some that won't have enough for each control-group or monitor-group to
have one.
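A sketch of what that could look like on the arm64 side (the wait queue and
mpam_try_get_monitor() are hypothetical names, not the actual driver):

        void *resctrl_arch_mon_ctx_alloc(struct rdt_resource *r, int evtid)
        {
                struct mpam_monitor *mon;

                might_sleep();

                /*
                 * Monitors are scarce: there may be far fewer than there are
                 * monitor groups. Block until one is free, or, for the
                 * pre-programmed bandwidth counters, just look it up.
                 */
                wait_event(mpam_mon_waitq,
                           (mon = mpam_try_get_monitor(r, evtid)) != NULL);

                return mon;
        }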
Thanks,
James
* Re: [PATCH v5 15/24] x86/resctrl: Allow arch to allocate memory needed in resctrl_arch_rmid_read()
2023-08-24 16:56 ` James Morse
@ 2023-08-24 23:04 ` Reinette Chatre
2023-09-15 17:37 ` James Morse
0 siblings, 1 reply; 77+ messages in thread
From: Reinette Chatre @ 2023-08-24 23:04 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi James,
On 8/24/2023 9:56 AM, James Morse wrote:
> Hi Reinette,
>
> On 09/08/2023 23:37, Reinette Chatre wrote:
>> On 7/28/2023 9:42 AM, James Morse wrote:
>>> Depending on the number of monitors available, Arm's MPAM may need to
>>> allocate a monitor prior to reading the counter value. Allocating a
>>> contended resource may involve sleeping.
>>>
>>> add_rmid_to_limbo() calls resctrl_arch_rmid_read() for multiple domains;
>>> the allocation should be valid for all domains.
>>>
>>> __check_limbo() and mon_event_count() each make multiple calls to
>>> resctrl_arch_rmid_read(); to avoid extra work on contended systems,
>>> the allocation should be valid for multiple invocations of
>>> resctrl_arch_rmid_read().
>>>
>>> Add arch hooks for this allocation, which need calling before
>>> resctrl_arch_rmid_read(). The allocated monitor is passed to
>>> resctrl_arch_rmid_read(), then freed again afterwards. The helper
>>> can be called on any CPU, and can sleep.
>
>> Looking at the error paths, all the errors are silent failures.
>
> Yeah, I don't really expect this to ever fail. The memory arm64 needs to allocate is
> smaller than a pointer - if that fails, I think there are bigger problems. The hardware
> resource is something the call will wait for.
>
> As you note, it's hard to propagate an unlikely error back from here.
>
>
>> On the
>> failure in mon_event_read() this could potentially be handled by setting
>> the "err" field in struct rmid_read ... at least then the caller can print
>> an error instead of displaying a zero count to the user.
>
> Sure, that covers the one a human being might see.
Right.
>> The other failures are harder to handle though.
>
> I don't think the silent failure is such a bad thing. For the limbo handler, no RMID moves
> between the lists until the handler is able to make progress.
ok, so it needs to ensure that the handler is still rescheduled
when such a failure is encountered.
> For the overflow handler, it's possible an overflow will get missed (I still have an
> overflow interrupt I can use here). But I don't think this will be the biggest problem on
> a machine that is struggling to allocate 4 bytes.
As I now (I think) better understand, for MPAM it is 4 bytes of memory as well as
reservation of a hardware resource. Could something go wrong attempting to find an
available hardware resource that, as you state later, is indeed scarce? I wonder if
it would not be helpful to at least have resctrl log an error from the
places where it is not possible to propagate the error.
>> Considering that these contexts are allocated and
>> freed so often, why not allocate them once (perhaps in struct rdt_hw_domain?)
>> on driver load with clear error handling?
>
> Because the resource they represent is scarce. You may have 100 control or monitor groups,
> but only 10 hardware monitors. The hardware monitor has to be allocated and programmed
> before it can be read.
I think I misunderstood what "context" is when I wrote the above. I
was thinking about memory allocation that can be done early and
neglected to connect the "context" to be an actual hardware resource.
> This works well for the llc_occupancy counter, but not for bandwidth counters, which with
> the current 'free running' ABI have to all be allocated and programmed at the beginning of
> time. If there are enough monitors to do that - the MPAM driver will, and these
> allocate/free calls will just be looking up the pre-allocated/pre-programmed monitor.
> Doing the allocation like this keeps that logic in the mpam driver, and allows concurrent
> access to resctrl_arch_rmid_read(), which is something any future PMU support will need.
>
> I don't have any numbers on how many monitors any platform is going to have, but I'm
> confident there are some that won't have enough for each control-group or monitor-group to
> have one.
Right. My question was not relevant to what this change does. Sorry for the noise.
Reinette
* Re: [PATCH v5 15/24] x86/resctrl: Allow arch to allocate memory needed in resctrl_arch_rmid_read()
2023-08-24 23:04 ` Reinette Chatre
@ 2023-09-15 17:37 ` James Morse
0 siblings, 0 replies; 77+ messages in thread
From: James Morse @ 2023-09-15 17:37 UTC (permalink / raw)
To: Reinette Chatre, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi Reinette,
On 25/08/2023 00:04, Reinette Chatre wrote:
> On 8/24/2023 9:56 AM, James Morse wrote:
>> On 09/08/2023 23:37, Reinette Chatre wrote:
>>> On 7/28/2023 9:42 AM, James Morse wrote:
>>>> Depending on the number of monitors available, Arm's MPAM may need to
>>>> allocate a monitor prior to reading the counter value. Allocating a
>>>> contended resource may involve sleeping.
>>>>
>>>> add_rmid_to_limbo() calls resctrl_arch_rmid_read() for multiple domains;
>>>> the allocation should be valid for all domains.
>>>>
>>>> __check_limbo() and mon_event_count() each make multiple calls to
>>>> resctrl_arch_rmid_read(); to avoid extra work on contended systems,
>>>> the allocation should be valid for multiple invocations of
>>>> resctrl_arch_rmid_read().
>>>>
>>>> Add arch hooks for this allocation, which need calling before
>>>> resctrl_arch_rmid_read(). The allocated monitor is passed to
>>>> resctrl_arch_rmid_read(), then freed again afterwards. The helper
>>>> can be called on any CPU, and can sleep.
>>
>>> Looking at the error paths, all the errors are silent failures.
>>
>> Yeah, I don't really expect this to ever fail. The memory arm64 needs to allocate is
>> smaller than a pointer - if that fails, I think there are bigger problems. The hardware
>> resource is something the call will wait for.
>>
>> As you note, it's hard to propagate an unlikely error back from here.
>>
>>
>>> On the
>>> failure in mon_event_read() this could potentially be handled by setting
>>> the "err" field in struct rmid_read ... at least then the caller can print
>>> an error instead of displaying a zero count to the user.
>>
>> Sure, that covers the one a human being might see.
>
> Right.
>
>>> The other failures are harder to handle though.
>>
>> I don't think the silent failure is such a bad thing. For the limbo handler, no RMID moves
>> between the lists until the handler is able to make progress.
>
> ok, so it needs to ensure that the handler is still rescheduled
> when such a failure is encountered.
Yup, the silent error occurs in __check_limbo(), and cqm_handle_limbo() will still
reschedule the worker. Similarly, for mbm_update(), mbm_handle_overflow() will still
reschedule the work.
>> For the overflow handler, it's possible an overflow will get missed (I still have an
>> overflow interrupt I can use here). But I don't think this will be the biggest problem on
>> a machine that is struggling to allocate 4 bytes.
>
> As I now (I think) better understand, for MPAM it is 4 bytes of memory as well as
> reservation of a hardware resource. Could something go wrong attempting to find an
> available hardware resource that, as you state later, is indeed scarce? I wonder if
> it would not be helpful to at least have resctrl log an error from the
> places where it is not possible to propagate the error.
If it can't allocate a monitor, it should block until one becomes available. Errors should
never occur during normal use.
I'll add pr_warn_ratelimited() for errors returned on this path.
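e.g. on the paths that can't propagate the error (placement illustrative, based
on the __check_limbo() hunk in this patch):

        arch_mon_ctx = resctrl_arch_mon_ctx_alloc(r, QOS_L3_OCCUP_EVENT_ID);
        if (IS_ERR(arch_mon_ctx)) {
                pr_warn_ratelimited("Failed to allocate monitor context: %ld",
                                    PTR_ERR(arch_mon_ctx));
                return;
        }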
>>> Considering that these contexts are allocated and
>>> freed so often, why not allocate them once (perhaps in struct rdt_hw_domain?)
>>> on driver load with clear error handling?
>>
>> Because the resource they represent is scarce. You may have 100 control or monitor groups,
>> but only 10 hardware monitors. The hardware monitor has to be allocated and programmed
>> before it can be read.
>
> I think I misunderstood what "context" is when I wrote the above. I
> was thinking about memory allocation that can be done early and
> neglected to connect the "context" to be an actual hardware resource.
Let me know if there is a better name. Obviously I had to avoid 'resource'!
Thanks,
James
* [PATCH v5 16/24] x86/resctrl: Make resctrl_mounted checks explicit
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
` (14 preceding siblings ...)
2023-07-28 16:42 ` [PATCH v5 15/24] x86/resctrl: Allow arch to allocate memory needed in resctrl_arch_rmid_read() James Morse
@ 2023-07-28 16:42 ` James Morse
2023-07-28 16:42 ` [PATCH v5 17/24] x86/resctrl: Move alloc/mon static keys into helpers James Morse
` (9 subsequent siblings)
25 siblings, 0 replies; 77+ messages in thread
From: James Morse @ 2023-07-28 16:42 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
The rdt_enable_key is switched when resctrl is mounted, and used to
prevent a second mount of the filesystem. It also enables the
architecture's context switch code.
This requires another architecture to have the same set of static-keys,
as resctrl depends on them too. The existing users of these static-keys
are implicitly also checking if the filesystem is mounted.
Make the resctrl_mounted checks explicit: resctrl can keep track of
whether it has been mounted once. This doesn't need to be combined with
whether the arch code is context switching the CLOSID.
rdt_mon_enable_key is never used just to test that resctrl is mounted,
but does also have this implication. Add a resctrl_mounted check to all uses
of rdt_mon_enable_key. This will allow rdt_mon_enable_key to be swapped
with a helper in a subsequent patch.
This will allow the static-key changing to be moved behind resctrl_arch_
calls.
Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
Signed-off-by: James Morse <james.morse@arm.com>
---
Changes since v3:
* Removed a newline.
* Rephrased commit message
Changes since v4:
* Rephrased comment.
---
arch/x86/kernel/cpu/resctrl/internal.h | 1 +
arch/x86/kernel/cpu/resctrl/monitor.c | 12 ++++++++++--
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 23 +++++++++++++++++------
3 files changed, 28 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index 45db51280ff4..28751579abe6 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -143,6 +143,7 @@ extern bool rdt_alloc_capable;
extern bool rdt_mon_capable;
extern unsigned int rdt_mon_features;
extern struct list_head resctrl_schema_all;
+extern bool resctrl_mounted;
enum rdt_group_type {
RDTCTRL_GROUP = 0,
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index 5eed8d0cbf36..5350d44b16b6 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -838,7 +838,11 @@ void mbm_handle_overflow(struct work_struct *work)
mutex_lock(&rdtgroup_mutex);
- if (!static_branch_likely(&rdt_mon_enable_key))
+ /*
+ * If the filesystem has been unmounted this work no longer needs to
+ * run.
+ */
+ if (!resctrl_mounted || !static_branch_likely(&rdt_mon_enable_key))
goto out_unlock;
r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
@@ -871,7 +875,11 @@ void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms)
unsigned long delay = msecs_to_jiffies(delay_ms);
int cpu;
- if (!static_branch_likely(&rdt_mon_enable_key))
+ /*
+ * When a domain comes online there is no guarantee the filesystem is
+ * mounted. If not, there is no need to catch counter overflow.
+ */
+ if (!resctrl_mounted || !static_branch_likely(&rdt_mon_enable_key))
return;
cpu = cpumask_any_housekeeping(&dom->cpu_mask);
dom->mbm_work_cpu = cpu;
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 775f6bede6f8..68fe2dde8887 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -42,6 +42,9 @@ LIST_HEAD(rdt_all_groups);
/* list of entries for the schemata file */
LIST_HEAD(resctrl_schema_all);
+/* The filesystem can only be mounted once. */
+bool resctrl_mounted;
+
/* Kernel fs node for "info" directory under root */
static struct kernfs_node *kn_info;
@@ -819,7 +822,7 @@ int proc_resctrl_show(struct seq_file *s, struct pid_namespace *ns,
mutex_lock(&rdtgroup_mutex);
/* Return empty if resctrl has not been mounted. */
- if (!static_branch_unlikely(&rdt_enable_key)) {
+ if (!resctrl_mounted) {
seq_puts(s, "res:\nmon:\n");
goto unlock;
}
@@ -2495,7 +2498,7 @@ static int rdt_get_tree(struct fs_context *fc)
/*
* resctrl file system can only be mounted once.
*/
- if (static_branch_unlikely(&rdt_enable_key)) {
+ if (resctrl_mounted) {
ret = -EBUSY;
goto out;
}
@@ -2543,8 +2546,10 @@ static int rdt_get_tree(struct fs_context *fc)
if (rdt_mon_capable)
static_branch_enable_cpuslocked(&rdt_mon_enable_key);
- if (rdt_alloc_capable || rdt_mon_capable)
+ if (rdt_alloc_capable || rdt_mon_capable) {
static_branch_enable_cpuslocked(&rdt_enable_key);
+ resctrl_mounted = true;
+ }
if (is_mbm_enabled()) {
r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
@@ -2815,6 +2820,7 @@ static void rdt_kill_sb(struct super_block *sb)
static_branch_disable_cpuslocked(&rdt_alloc_enable_key);
static_branch_disable_cpuslocked(&rdt_mon_enable_key);
static_branch_disable_cpuslocked(&rdt_enable_key);
+ resctrl_mounted = false;
kernfs_kill_sb(sb);
mutex_unlock(&rdtgroup_mutex);
cpus_read_unlock();
@@ -3774,7 +3780,7 @@ void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d)
* If resctrl is mounted, remove all the
* per domain monitor data directories.
*/
- if (static_branch_unlikely(&rdt_mon_enable_key))
+ if (resctrl_mounted && static_branch_unlikely(&rdt_mon_enable_key))
rmdir_mondata_subdir_allrdtgrp(r, d->id);
if (is_mbm_enabled())
@@ -3851,8 +3857,13 @@ int resctrl_online_domain(struct rdt_resource *r, struct rdt_domain *d)
if (is_llc_occupancy_enabled())
INIT_DELAYED_WORK(&d->cqm_limbo, cqm_handle_limbo);
- /* If resctrl is mounted, add per domain monitor data directories. */
- if (static_branch_unlikely(&rdt_mon_enable_key))
+ /*
+ * If the filesystem is not mounted then only the default resource group
+ * exists. Creation of its directories is deferred until mount time
+ * by rdt_get_tree() calling mkdir_mondata_all().
+ * If resctrl is mounted, add per domain monitor data directories.
+ */
+ if (resctrl_mounted && static_branch_unlikely(&rdt_mon_enable_key))
mkdir_mondata_subdir_allrdtgrp(r, d);
return 0;
--
2.39.2
* [PATCH v5 17/24] x86/resctrl: Move alloc/mon static keys into helpers
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
` (15 preceding siblings ...)
2023-07-28 16:42 ` [PATCH v5 16/24] x86/resctrl: Make resctrl_mounted checks explicit James Morse
@ 2023-07-28 16:42 ` James Morse
2023-07-28 16:42 ` [PATCH v5 18/24] x86/resctrl: Make rdt_enable_key the arch's decision to switch James Morse
` (8 subsequent siblings)
25 siblings, 0 replies; 77+ messages in thread
From: James Morse @ 2023-07-28 16:42 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
resctrl enables three static keys depending on the features it has enabled.
Another architecture's context-switch code may look different; any
static keys that control it should be buried behind helpers.
Move the alloc/mon logic into arch-specific helpers as a preparatory step
for making the rdt_enable_key's status something the arch code decides.
This means other architectures don't have to mirror the static keys.
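For illustration only, an architecture whose context-switch code does not
use static keys could back the same helpers with an ordinary flag. The
names below are hypothetical and not taken from any real port:
	/* Hypothetical non-x86 implementation of the new helpers. */
	static bool some_arch_alloc_enabled;
	void resctrl_arch_enable_alloc(void)
	{
		WRITE_ONCE(some_arch_alloc_enabled, true);
	}
	void resctrl_arch_disable_alloc(void)
	{
		WRITE_ONCE(some_arch_alloc_enabled, false);
	}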
Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
Signed-off-by: James Morse <james.morse@arm.com>
---
arch/x86/include/asm/resctrl.h | 20 ++++++++++++++++++++
arch/x86/kernel/cpu/resctrl/internal.h | 5 -----
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 8 ++++----
3 files changed, 24 insertions(+), 9 deletions(-)
diff --git a/arch/x86/include/asm/resctrl.h b/arch/x86/include/asm/resctrl.h
index 0986b5208d76..23010fce5f8f 100644
--- a/arch/x86/include/asm/resctrl.h
+++ b/arch/x86/include/asm/resctrl.h
@@ -42,6 +42,26 @@ DECLARE_STATIC_KEY_FALSE(rdt_enable_key);
DECLARE_STATIC_KEY_FALSE(rdt_alloc_enable_key);
DECLARE_STATIC_KEY_FALSE(rdt_mon_enable_key);
+static inline void resctrl_arch_enable_alloc(void)
+{
+ static_branch_enable_cpuslocked(&rdt_alloc_enable_key);
+}
+
+static inline void resctrl_arch_disable_alloc(void)
+{
+ static_branch_disable_cpuslocked(&rdt_alloc_enable_key);
+}
+
+static inline void resctrl_arch_enable_mon(void)
+{
+ static_branch_enable_cpuslocked(&rdt_mon_enable_key);
+}
+
+static inline void resctrl_arch_disable_mon(void)
+{
+ static_branch_disable_cpuslocked(&rdt_mon_enable_key);
+}
+
/*
* __resctrl_sched_in() - Writes the task's CLOSid/RMID to IA32_PQR_MSR
*
diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index 28751579abe6..ac39fecba4ca 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -93,9 +93,6 @@ static inline struct rdt_fs_context *rdt_fc2context(struct fs_context *fc)
return container_of(kfc, struct rdt_fs_context, kfc);
}
-DECLARE_STATIC_KEY_FALSE(rdt_enable_key);
-DECLARE_STATIC_KEY_FALSE(rdt_mon_enable_key);
-
/**
* struct mon_evt - Entry in the event list of a resource
* @evtid: event id
@@ -453,8 +450,6 @@ extern struct mutex rdtgroup_mutex;
extern struct rdt_hw_resource rdt_resources_all[];
extern struct rdtgroup rdtgroup_default;
-DECLARE_STATIC_KEY_FALSE(rdt_alloc_enable_key);
-
extern struct dentry *debugfs_resctrl;
enum resctrl_res_level {
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 68fe2dde8887..91a740e10865 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -2542,9 +2542,9 @@ static int rdt_get_tree(struct fs_context *fc)
goto out_psl;
if (rdt_alloc_capable)
- static_branch_enable_cpuslocked(&rdt_alloc_enable_key);
+ resctrl_arch_enable_alloc();
if (rdt_mon_capable)
- static_branch_enable_cpuslocked(&rdt_mon_enable_key);
+ resctrl_arch_enable_mon();
if (rdt_alloc_capable || rdt_mon_capable) {
static_branch_enable_cpuslocked(&rdt_enable_key);
@@ -2817,8 +2817,8 @@ static void rdt_kill_sb(struct super_block *sb)
rdt_pseudo_lock_release();
rdtgroup_default.mode = RDT_MODE_SHAREABLE;
schemata_list_destroy();
- static_branch_disable_cpuslocked(&rdt_alloc_enable_key);
- static_branch_disable_cpuslocked(&rdt_mon_enable_key);
+ resctrl_arch_disable_alloc();
+ resctrl_arch_disable_mon();
static_branch_disable_cpuslocked(&rdt_enable_key);
resctrl_mounted = false;
kernfs_kill_sb(sb);
--
2.39.2
* [PATCH v5 18/24] x86/resctrl: Make rdt_enable_key the arch's decision to switch
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
` (16 preceding siblings ...)
2023-07-28 16:42 ` [PATCH v5 17/24] x86/resctrl: Move alloc/mon static keys into helpers James Morse
@ 2023-07-28 16:42 ` James Morse
2023-07-28 16:42 ` [PATCH v5 19/24] x86/resctrl: Add helpers for system wide mon/alloc capable James Morse
` (7 subsequent siblings)
25 siblings, 0 replies; 77+ messages in thread
From: James Morse @ 2023-07-28 16:42 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
rdt_enable_key is switched when resctrl is mounted. It was also previously
used to prevent a second mount of the filesystem.
Any other architecture that wants to support resctrl has to provide
identical static keys.
Now that there are helpers for enabling and disabling the alloc/mon keys,
resctrl doesn't need to switch this extra key; the arch code can do it.
Use the static-key increment and decrement helpers, and change
resctrl to ensure the calls are balanced.
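A sketch of the resulting reference counting, assuming both features are
enabled at mount time:
	/*
	 * rdt_enable_key now behaves as a counter: it reads true while either
	 * the alloc or mon helper holds a reference, which is why the
	 * enable/disable calls must be balanced.
	 */
	resctrl_arch_enable_alloc();	/* rdt_enable_key count: 1 */
	resctrl_arch_enable_mon();	/* count: 2 */
	resctrl_arch_disable_alloc();	/* count: 1, key still enabled */
	resctrl_arch_disable_mon();	/* count: 0, key disabled */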
Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
Signed-off-by: James Morse <james.morse@arm.com>
---
arch/x86/include/asm/resctrl.h | 4 ++++
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 11 +++++------
2 files changed, 9 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/resctrl.h b/arch/x86/include/asm/resctrl.h
index 23010fce5f8f..3876d4bb4bed 100644
--- a/arch/x86/include/asm/resctrl.h
+++ b/arch/x86/include/asm/resctrl.h
@@ -45,21 +45,25 @@ DECLARE_STATIC_KEY_FALSE(rdt_mon_enable_key);
static inline void resctrl_arch_enable_alloc(void)
{
static_branch_enable_cpuslocked(&rdt_alloc_enable_key);
+ static_branch_inc_cpuslocked(&rdt_enable_key);
}
static inline void resctrl_arch_disable_alloc(void)
{
static_branch_disable_cpuslocked(&rdt_alloc_enable_key);
+ static_branch_dec_cpuslocked(&rdt_enable_key);
}
static inline void resctrl_arch_enable_mon(void)
{
static_branch_enable_cpuslocked(&rdt_mon_enable_key);
+ static_branch_inc_cpuslocked(&rdt_enable_key);
}
static inline void resctrl_arch_disable_mon(void)
{
static_branch_disable_cpuslocked(&rdt_mon_enable_key);
+ static_branch_dec_cpuslocked(&rdt_enable_key);
}
/*
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 91a740e10865..ce1ed485e4f7 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -2546,10 +2546,8 @@ static int rdt_get_tree(struct fs_context *fc)
if (rdt_mon_capable)
resctrl_arch_enable_mon();
- if (rdt_alloc_capable || rdt_mon_capable) {
- static_branch_enable_cpuslocked(&rdt_enable_key);
+ if (rdt_alloc_capable || rdt_mon_capable)
resctrl_mounted = true;
- }
if (is_mbm_enabled()) {
r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
@@ -2817,9 +2815,10 @@ static void rdt_kill_sb(struct super_block *sb)
rdt_pseudo_lock_release();
rdtgroup_default.mode = RDT_MODE_SHAREABLE;
schemata_list_destroy();
- resctrl_arch_disable_alloc();
- resctrl_arch_disable_mon();
- static_branch_disable_cpuslocked(&rdt_enable_key);
+ if (rdt_alloc_capable)
+ resctrl_arch_disable_alloc();
+ if (rdt_mon_capable)
+ resctrl_arch_disable_mon();
resctrl_mounted = false;
kernfs_kill_sb(sb);
mutex_unlock(&rdtgroup_mutex);
--
2.39.2
* [PATCH v5 19/24] x86/resctrl: Add helpers for system wide mon/alloc capable
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
` (17 preceding siblings ...)
2023-07-28 16:42 ` [PATCH v5 18/24] x86/resctrl: Make rdt_enable_key the arch's decision to switch James Morse
@ 2023-07-28 16:42 ` James Morse
2023-08-17 18:34 ` Fenghua Yu
2023-07-28 16:42 ` [PATCH v5 20/24] x86/resctrl: Add cpu online callback for resctrl work James Morse
` (6 subsequent siblings)
25 siblings, 1 reply; 77+ messages in thread
From: James Morse @ 2023-07-28 16:42 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
resctrl reads rdt_alloc_capable or rdt_mon_capable to determine
whether any of the resources support the corresponding features.
resctrl also uses the static-keys that affect the architecture's
context-switch code to determine the same thing.
This forces another architecture to have the same static-keys.
As the static-key is enabled based on the capable flag, and none of
the filesystem uses of these are in the scheduler path, move the
capable flags behind helpers, and use these in the filesystem
code instead of the static-key.
After this change, only the architecture code manages and uses
the static-keys to ensure __resctrl_sched_in() does not need
runtime checks.
This avoids multiple architectures having to define the same
static-keys.
Cases where the static-key implicitly tested if the resctrl
filesystem was mounted all have an explicit check added by a
previous patch.
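As an illustrative contrast (condensed, not verbatim from the tree), the
scheduler hot path keeps the static key while the filesystem code now asks
the architecture:
	/* Hot path (condensed): the static key avoids a runtime check. */
	static inline void resctrl_sched_in(struct task_struct *tsk)
	{
		if (static_branch_likely(&rdt_enable_key))
			__resctrl_sched_in(tsk);
	}
	/* Filesystem path: a plain helper any architecture can provide. */
	if (resctrl_arch_alloc_capable() && parent_kn == rdtgroup_default.kn)
		return rdtgroup_mkdir_ctrl_mon(parent_kn, name, mode);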
Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
Signed-off-by: James Morse <james.morse@arm.com>
---
Changes since v1:
* Added missing conversion in mkdir_rdt_prepare_rmid_free()
Changes since v3:
* Expanded the commit message.
---
arch/x86/include/asm/resctrl.h | 13 +++++++++
arch/x86/kernel/cpu/resctrl/internal.h | 2 --
arch/x86/kernel/cpu/resctrl/monitor.c | 4 +--
arch/x86/kernel/cpu/resctrl/pseudo_lock.c | 6 ++--
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 34 +++++++++++------------
5 files changed, 35 insertions(+), 24 deletions(-)
diff --git a/arch/x86/include/asm/resctrl.h b/arch/x86/include/asm/resctrl.h
index 3876d4bb4bed..63a4a2332d61 100644
--- a/arch/x86/include/asm/resctrl.h
+++ b/arch/x86/include/asm/resctrl.h
@@ -38,10 +38,18 @@ struct resctrl_pqr_state {
DECLARE_PER_CPU(struct resctrl_pqr_state, pqr_state);
+extern bool rdt_alloc_capable;
+extern bool rdt_mon_capable;
+
DECLARE_STATIC_KEY_FALSE(rdt_enable_key);
DECLARE_STATIC_KEY_FALSE(rdt_alloc_enable_key);
DECLARE_STATIC_KEY_FALSE(rdt_mon_enable_key);
+static inline bool resctrl_arch_alloc_capable(void)
+{
+ return rdt_alloc_capable;
+}
+
static inline void resctrl_arch_enable_alloc(void)
{
static_branch_enable_cpuslocked(&rdt_alloc_enable_key);
@@ -54,6 +62,11 @@ static inline void resctrl_arch_disable_alloc(void)
static_branch_dec_cpuslocked(&rdt_enable_key);
}
+static inline bool resctrl_arch_mon_capable(void)
+{
+ return rdt_mon_capable;
+}
+
static inline void resctrl_arch_enable_mon(void)
{
static_branch_enable_cpuslocked(&rdt_mon_enable_key);
diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index ac39fecba4ca..f99e0a1f39c8 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -136,8 +136,6 @@ struct rmid_read {
void *arch_mon_ctx;
};
-extern bool rdt_alloc_capable;
-extern bool rdt_mon_capable;
extern unsigned int rdt_mon_features;
extern struct list_head resctrl_schema_all;
extern bool resctrl_mounted;
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index 5350d44b16b6..c0b1ad8d8f6d 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -842,7 +842,7 @@ void mbm_handle_overflow(struct work_struct *work)
* If the filesystem has been unmounted this work no longer needs to
* run.
*/
- if (!resctrl_mounted || !static_branch_likely(&rdt_mon_enable_key))
+ if (!resctrl_mounted || !resctrl_arch_mon_capable())
goto out_unlock;
r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
@@ -879,7 +879,7 @@ void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms)
* When a domain comes online there is no guarantee the filesystem is
* mounted. If not, there is no need to catch counter overflow.
*/
- if (!resctrl_mounted || !static_branch_likely(&rdt_mon_enable_key))
+ if (!resctrl_mounted || !resctrl_arch_mon_capable())
return;
cpu = cpumask_any_housekeeping(&dom->cpu_mask);
dom->mbm_work_cpu = cpu;
diff --git a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
index 5ebd6e54c7f2..460421051abf 100644
--- a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
+++ b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
@@ -567,7 +567,7 @@ static int rdtgroup_locksetup_user_restrict(struct rdtgroup *rdtgrp)
if (ret)
goto err_cpus;
- if (rdt_mon_capable) {
+ if (resctrl_arch_mon_capable()) {
ret = rdtgroup_kn_mode_restrict(rdtgrp, "mon_groups");
if (ret)
goto err_cpus_list;
@@ -614,7 +614,7 @@ static int rdtgroup_locksetup_user_restore(struct rdtgroup *rdtgrp)
if (ret)
goto err_cpus;
- if (rdt_mon_capable) {
+ if (resctrl_arch_mon_capable()) {
ret = rdtgroup_kn_mode_restore(rdtgrp, "mon_groups", 0777);
if (ret)
goto err_cpus_list;
@@ -762,7 +762,7 @@ int rdtgroup_locksetup_exit(struct rdtgroup *rdtgrp)
{
int ret;
- if (rdt_mon_capable) {
+ if (resctrl_arch_mon_capable()) {
ret = alloc_rmid(rdtgrp->closid);
if (ret < 0) {
rdt_last_cmd_puts("Out of RMIDs\n");
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index ce1ed485e4f7..fef78a3dc632 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -630,13 +630,13 @@ static int __rdtgroup_move_task(struct task_struct *tsk,
static bool is_closid_match(struct task_struct *t, struct rdtgroup *r)
{
- return (rdt_alloc_capable && (r->type == RDTCTRL_GROUP) &&
+ return (resctrl_arch_alloc_capable() && (r->type == RDTCTRL_GROUP) &&
resctrl_arch_match_closid(t, r->closid));
}
static bool is_rmid_match(struct task_struct *t, struct rdtgroup *r)
{
- return (rdt_mon_capable && (r->type == RDTMON_GROUP) &&
+ return (resctrl_arch_mon_capable() && (r->type == RDTMON_GROUP) &&
resctrl_arch_match_rmid(t, r->mon.parent->closid,
r->mon.rmid));
}
@@ -2519,7 +2519,7 @@ static int rdt_get_tree(struct fs_context *fc)
if (ret < 0)
goto out_schemata_free;
- if (rdt_mon_capable) {
+ if (resctrl_arch_mon_capable()) {
ret = mongroup_create_dir(rdtgroup_default.kn,
&rdtgroup_default, "mon_groups",
&kn_mongrp);
@@ -2541,12 +2541,12 @@ static int rdt_get_tree(struct fs_context *fc)
if (ret < 0)
goto out_psl;
- if (rdt_alloc_capable)
+ if (resctrl_arch_alloc_capable())
resctrl_arch_enable_alloc();
- if (rdt_mon_capable)
+ if (resctrl_arch_mon_capable())
resctrl_arch_enable_mon();
- if (rdt_alloc_capable || rdt_mon_capable)
+ if (resctrl_arch_alloc_capable() || resctrl_arch_mon_capable())
resctrl_mounted = true;
if (is_mbm_enabled()) {
@@ -2560,10 +2560,10 @@ static int rdt_get_tree(struct fs_context *fc)
out_psl:
rdt_pseudo_lock_release();
out_mondata:
- if (rdt_mon_capable)
+ if (resctrl_arch_mon_capable())
kernfs_remove(kn_mondata);
out_mongrp:
- if (rdt_mon_capable)
+ if (resctrl_arch_mon_capable())
kernfs_remove(kn_mongrp);
out_info:
kernfs_remove(kn_info);
@@ -2815,9 +2815,9 @@ static void rdt_kill_sb(struct super_block *sb)
rdt_pseudo_lock_release();
rdtgroup_default.mode = RDT_MODE_SHAREABLE;
schemata_list_destroy();
- if (rdt_alloc_capable)
+ if (resctrl_arch_alloc_capable())
resctrl_arch_disable_alloc();
- if (rdt_mon_capable)
+ if (resctrl_arch_mon_capable())
resctrl_arch_disable_mon();
resctrl_mounted = false;
kernfs_kill_sb(sb);
@@ -3197,7 +3197,7 @@ static int mkdir_rdt_prepare_rmid_alloc(struct rdtgroup *rdtgrp)
{
int ret;
- if (!rdt_mon_capable)
+ if (!resctrl_arch_mon_capable())
return 0;
ret = alloc_rmid(rdtgrp->closid);
@@ -3219,7 +3219,7 @@ static int mkdir_rdt_prepare_rmid_alloc(struct rdtgroup *rdtgrp)
static void mkdir_rdt_prepare_rmid_free(struct rdtgroup *rgrp)
{
- if (rdt_mon_capable)
+ if (resctrl_arch_mon_capable())
free_rmid(rgrp->closid, rgrp->mon.rmid);
}
@@ -3385,7 +3385,7 @@ static int rdtgroup_mkdir_ctrl_mon(struct kernfs_node *parent_kn,
list_add(&rdtgrp->rdtgroup_list, &rdt_all_groups);
- if (rdt_mon_capable) {
+ if (resctrl_arch_mon_capable()) {
/*
* Create an empty mon_groups directory to hold the subset
* of tasks and cpus to monitor.
@@ -3440,14 +3440,14 @@ static int rdtgroup_mkdir(struct kernfs_node *parent_kn, const char *name,
* allocation is supported, add a control and monitoring
* subdirectory
*/
- if (rdt_alloc_capable && parent_kn == rdtgroup_default.kn)
+ if (resctrl_arch_alloc_capable() && parent_kn == rdtgroup_default.kn)
return rdtgroup_mkdir_ctrl_mon(parent_kn, name, mode);
/*
* If RDT monitoring is supported and the parent directory is a valid
* "mon_groups" directory, add a monitoring subdirectory.
*/
- if (rdt_mon_capable && is_mon_groups(parent_kn, name))
+ if (resctrl_arch_mon_capable() && is_mon_groups(parent_kn, name))
return rdtgroup_mkdir_mon(parent_kn, name, mode);
return -EPERM;
@@ -3779,7 +3779,7 @@ void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d)
* If resctrl is mounted, remove all the
* per domain monitor data directories.
*/
- if (resctrl_mounted && static_branch_unlikely(&rdt_mon_enable_key))
+ if (resctrl_mounted && resctrl_arch_mon_capable())
rmdir_mondata_subdir_allrdtgrp(r, d->id);
if (is_mbm_enabled())
@@ -3862,7 +3862,7 @@ int resctrl_online_domain(struct rdt_resource *r, struct rdt_domain *d)
* by rdt_get_tree() calling mkdir_mondata_all().
* If resctrl is mounted, add per domain monitor data directories.
*/
- if (resctrl_mounted && static_branch_unlikely(&rdt_mon_enable_key))
+ if (resctrl_mounted && resctrl_arch_mon_capable())
mkdir_mondata_subdir_allrdtgrp(r, d);
return 0;
--
2.39.2
* Re: [PATCH v5 19/24] x86/resctrl: Add helpers for system wide mon/alloc capable
2023-07-28 16:42 ` [PATCH v5 19/24] x86/resctrl: Add helpers for system wide mon/alloc capable James Morse
@ 2023-08-17 18:34 ` Fenghua Yu
2023-08-24 16:57 ` James Morse
0 siblings, 1 reply; 77+ messages in thread
From: Fenghua Yu @ 2023-08-17 18:34 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Reinette Chatre, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi, James,
On 7/28/23 09:42, James Morse wrote:
> resctrl reads rdt_alloc_capable or rdt_mon_capable to determine
> whether any of the resources support the corresponding features.
> resctrl also uses the static-keys that affect the architecture's
> context-switch code to determine the same thing.
>
> This forces another architecture to have the same static-keys.
>
> As the static-key is enabled based on the capable flag, and none of
> the filesystem uses of these are in the scheduler path, move the
> capable flags behind helpers, and use these in the filesystem
> code instead of the static-key.
>
> After this change, only the architecture code manages and uses
> the static-keys to ensure __resctrl_sched_in() does not need
> runtime checks.
>
> This avoids multiple architectures having to define the same
> static-keys.
>
> Cases where the static-key implicitly tested if the resctrl
> filesystem was mounted all have an explicit check added by a
> previous patch.
>
> Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
> Reviewed-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
> Signed-off-by: James Morse <james.morse@arm.com>
>
> ---
> Changes since v1:
> * Added missing conversion in mkdir_rdt_prepare_rmid_free()
>
> Changes since v3:
> * Expanded the commit message.
> ---
> arch/x86/include/asm/resctrl.h | 13 +++++++++
> arch/x86/kernel/cpu/resctrl/internal.h | 2 --
> arch/x86/kernel/cpu/resctrl/monitor.c | 4 +--
> arch/x86/kernel/cpu/resctrl/pseudo_lock.c | 6 ++--
> arch/x86/kernel/cpu/resctrl/rdtgroup.c | 34 +++++++++++------------
> 5 files changed, 35 insertions(+), 24 deletions(-)
>
> diff --git a/arch/x86/include/asm/resctrl.h b/arch/x86/include/asm/resctrl.h
> index 3876d4bb4bed..63a4a2332d61 100644
> --- a/arch/x86/include/asm/resctrl.h
> +++ b/arch/x86/include/asm/resctrl.h
> @@ -38,10 +38,18 @@ struct resctrl_pqr_state {
>
> DECLARE_PER_CPU(struct resctrl_pqr_state, pqr_state);
>
> +extern bool rdt_alloc_capable;
> +extern bool rdt_mon_capable;
> +
> DECLARE_STATIC_KEY_FALSE(rdt_enable_key);
> DECLARE_STATIC_KEY_FALSE(rdt_alloc_enable_key);
> DECLARE_STATIC_KEY_FALSE(rdt_mon_enable_key);
>
> +static inline bool resctrl_arch_alloc_capable(void)
> +{
> + return rdt_alloc_capable;
> +}
> +
> static inline void resctrl_arch_enable_alloc(void)
> {
> static_branch_enable_cpuslocked(&rdt_alloc_enable_key);
> @@ -54,6 +62,11 @@ static inline void resctrl_arch_disable_alloc(void)
> static_branch_dec_cpuslocked(&rdt_enable_key);
> }
>
> +static inline bool resctrl_arch_mon_capable(void)
> +{
> + return rdt_mon_capable;
> +}
> +
> static inline void resctrl_arch_enable_mon(void)
> {
> static_branch_enable_cpuslocked(&rdt_mon_enable_key);
> diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
> index ac39fecba4ca..f99e0a1f39c8 100644
> --- a/arch/x86/kernel/cpu/resctrl/internal.h
> +++ b/arch/x86/kernel/cpu/resctrl/internal.h
> @@ -136,8 +136,6 @@ struct rmid_read {
> void *arch_mon_ctx;
> };
>
> -extern bool rdt_alloc_capable;
> -extern bool rdt_mon_capable;
> extern unsigned int rdt_mon_features;
> extern struct list_head resctrl_schema_all;
> extern bool resctrl_mounted;
> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
> index 5350d44b16b6..c0b1ad8d8f6d 100644
> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
> @@ -842,7 +842,7 @@ void mbm_handle_overflow(struct work_struct *work)
> * If the filesystem has been unmounted this work no longer needs to
> * run.
> */
> - if (!resctrl_mounted || !static_branch_likely(&rdt_mon_enable_key))
> + if (!resctrl_mounted || !resctrl_arch_mon_capable())
> goto out_unlock;
>
> r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
> @@ -879,7 +879,7 @@ void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms)
> * When a domain comes online there is no guarantee the filesystem is
> * mounted. If not, there is no need to catch counter overflow.
> */
> - if (!resctrl_mounted || !static_branch_likely(&rdt_mon_enable_key))
> + if (!resctrl_mounted || !resctrl_arch_mon_capable())
> return;
> cpu = cpumask_any_housekeeping(&dom->cpu_mask);
> dom->mbm_work_cpu = cpu;
> diff --git a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
> index 5ebd6e54c7f2..460421051abf 100644
> --- a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
> +++ b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
> @@ -567,7 +567,7 @@ static int rdtgroup_locksetup_user_restrict(struct rdtgroup *rdtgrp)
> if (ret)
> goto err_cpus;
>
> - if (rdt_mon_capable) {
> + if (resctrl_arch_mon_capable()) {
> ret = rdtgroup_kn_mode_restrict(rdtgrp, "mon_groups");
> if (ret)
> goto err_cpus_list;
> @@ -614,7 +614,7 @@ static int rdtgroup_locksetup_user_restore(struct rdtgroup *rdtgrp)
> if (ret)
> goto err_cpus;
>
> - if (rdt_mon_capable) {
> + if (resctrl_arch_mon_capable()) {
> ret = rdtgroup_kn_mode_restore(rdtgrp, "mon_groups", 0777);
> if (ret)
> goto err_cpus_list;
> @@ -762,7 +762,7 @@ int rdtgroup_locksetup_exit(struct rdtgroup *rdtgrp)
> {
> int ret;
>
> - if (rdt_mon_capable) {
> + if (resctrl_arch_mon_capable()) {
> ret = alloc_rmid(rdtgrp->closid);
> if (ret < 0) {
> rdt_last_cmd_puts("Out of RMIDs\n");
> diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> index ce1ed485e4f7..fef78a3dc632 100644
> --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> @@ -630,13 +630,13 @@ static int __rdtgroup_move_task(struct task_struct *tsk,
>
> static bool is_closid_match(struct task_struct *t, struct rdtgroup *r)
> {
> - return (rdt_alloc_capable && (r->type == RDTCTRL_GROUP) &&
> + return (resctrl_arch_alloc_capable() && (r->type == RDTCTRL_GROUP) &&
> resctrl_arch_match_closid(t, r->closid));
> }
>
> static bool is_rmid_match(struct task_struct *t, struct rdtgroup *r)
> {
> - return (rdt_mon_capable && (r->type == RDTMON_GROUP) &&
> + return (resctrl_arch_mon_capable() && (r->type == RDTMON_GROUP) &&
> resctrl_arch_match_rmid(t, r->mon.parent->closid,
> r->mon.rmid));
> }
> @@ -2519,7 +2519,7 @@ static int rdt_get_tree(struct fs_context *fc)
> if (ret < 0)
> goto out_schemata_free;
>
> - if (rdt_mon_capable) {
> + if (resctrl_arch_mon_capable()) {
> ret = mongroup_create_dir(rdtgroup_default.kn,
> &rdtgroup_default, "mon_groups",
> &kn_mongrp);
> @@ -2541,12 +2541,12 @@ static int rdt_get_tree(struct fs_context *fc)
> if (ret < 0)
> goto out_psl;
>
> - if (rdt_alloc_capable)
> + if (resctrl_arch_alloc_capable())
> resctrl_arch_enable_alloc();
> - if (rdt_mon_capable)
> + if (resctrl_arch_mon_capable())
> resctrl_arch_enable_mon();
>
> - if (rdt_alloc_capable || rdt_mon_capable)
> + if (resctrl_arch_alloc_capable() || resctrl_arch_mon_capable())
> resctrl_mounted = true;
>
> if (is_mbm_enabled()) {
> @@ -2560,10 +2560,10 @@ static int rdt_get_tree(struct fs_context *fc)
> out_psl:
> rdt_pseudo_lock_release();
> out_mondata:
> - if (rdt_mon_capable)
> + if (resctrl_arch_mon_capable())
> kernfs_remove(kn_mondata);
> out_mongrp:
> - if (rdt_mon_capable)
> + if (resctrl_arch_mon_capable())
> kernfs_remove(kn_mongrp);
> out_info:
> kernfs_remove(kn_info);
> @@ -2815,9 +2815,9 @@ static void rdt_kill_sb(struct super_block *sb)
> rdt_pseudo_lock_release();
> rdtgroup_default.mode = RDT_MODE_SHAREABLE;
> schemata_list_destroy();
> - if (rdt_alloc_capable)
> + if (resctrl_arch_alloc_capable())
> resctrl_arch_disable_alloc();
> - if (rdt_mon_capable)
> + if (resctrl_arch_mon_capable())
> resctrl_arch_disable_mon();
> resctrl_mounted = false;
> kernfs_kill_sb(sb);
> @@ -3197,7 +3197,7 @@ static int mkdir_rdt_prepare_rmid_alloc(struct rdtgroup *rdtgrp)
> {
> int ret;
>
> - if (!rdt_mon_capable)
> + if (!resctrl_arch_mon_capable())
> return 0;
>
> ret = alloc_rmid(rdtgrp->closid);
> @@ -3219,7 +3219,7 @@ static int mkdir_rdt_prepare_rmid_alloc(struct rdtgroup *rdtgrp)
>
> static void mkdir_rdt_prepare_rmid_free(struct rdtgroup *rgrp)
> {
> - if (rdt_mon_capable)
> + if (resctrl_arch_mon_capable())
> free_rmid(rgrp->closid, rgrp->mon.rmid);
> }
>
> @@ -3385,7 +3385,7 @@ static int rdtgroup_mkdir_ctrl_mon(struct kernfs_node *parent_kn,
>
> list_add(&rdtgrp->rdtgroup_list, &rdt_all_groups);
>
> - if (rdt_mon_capable) {
> + if (resctrl_arch_mon_capable()) {
> /*
> * Create an empty mon_groups directory to hold the subset
> * of tasks and cpus to monitor.
> @@ -3440,14 +3440,14 @@ static int rdtgroup_mkdir(struct kernfs_node *parent_kn, const char *name,
> * allocation is supported, add a control and monitoring
> * subdirectory
> */
> - if (rdt_alloc_capable && parent_kn == rdtgroup_default.kn)
> + if (resctrl_arch_alloc_capable() && parent_kn == rdtgroup_default.kn)
> return rdtgroup_mkdir_ctrl_mon(parent_kn, name, mode);
>
> /*
> * If RDT monitoring is supported and the parent directory is a valid
> * "mon_groups" directory, add a monitoring subdirectory.
> */
> - if (rdt_mon_capable && is_mon_groups(parent_kn, name))
> + if (resctrl_arch_mon_capable() && is_mon_groups(parent_kn, name))
> return rdtgroup_mkdir_mon(parent_kn, name, mode);
>
> return -EPERM;
> @@ -3779,7 +3779,7 @@ void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d)
> * If resctrl is mounted, remove all the
> * per domain monitor data directories.
> */
> - if (resctrl_mounted && static_branch_unlikely(&rdt_mon_enable_key))
> + if (resctrl_mounted && resctrl_arch_mon_capable())
> rmdir_mondata_subdir_allrdtgrp(r, d->id);
>
> if (is_mbm_enabled())
> @@ -3862,7 +3862,7 @@ int resctrl_online_domain(struct rdt_resource *r, struct rdt_domain *d)
> * by rdt_get_tree() calling mkdir_mondata_all().
> * If resctrl is mounted, add per domain monitor data directories.
> */
> - if (resctrl_mounted && static_branch_unlikely(&rdt_mon_enable_key))
> + if (resctrl_mounted && resctrl_arch_mon_capable())
> mkdir_mondata_subdir_allrdtgrp(r, d);
>
> return 0;
Why isn't rdt_alloc_capable in get_rdt_alloc_resources() replaced by the
helper?
static __init bool get_rdt_alloc_resources(void)
{
...
if (rdt_alloc_capable)
...
Thanks.
-Fenghua
* Re: [PATCH v5 19/24] x86/resctrl: Add helpers for system wide mon/alloc capable
2023-08-17 18:34 ` Fenghua Yu
@ 2023-08-24 16:57 ` James Morse
0 siblings, 0 replies; 77+ messages in thread
From: James Morse @ 2023-08-24 16:57 UTC (permalink / raw)
To: Fenghua Yu, x86, linux-kernel
Cc: Reinette Chatre, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi Fenghua,
On 17/08/2023 19:34, Fenghua Yu wrote:
> On 7/28/23 09:42, James Morse wrote:
>> resctrl reads rdt_alloc_capable or rdt_mon_capable to determine
>> whether any of the resources support the corresponding features.
>> resctrl also uses the static-keys that affect the architecture's
>> context-switch code to determine the same thing.
>>
>> This forces another architecture to have the same static-keys.
>>
>> As the static-key is enabled based on the capable flag, and none of
>> the filesystem uses of these are in the scheduler path, move the
>> capable flags behind helpers, and use these in the filesystem
>> code instead of the static-key.
>>
>> After this change, only the architecture code manages and uses
>> the static-keys to ensure __resctrl_sched_in() does not need
>> runtime checks.
>>
>> This avoids multiple architectures having to define the same
>> static-keys.
>>
>> Cases where the static-key implicitly tested if the resctrl
>> filesystem was mounted all have an explicit check added by a
>> previous patch.
> Why isn't rdt_alloc_capable in get_rdt_alloc_resources() replaced by the helper?
>
> static __init bool get_rdt_alloc_resources(void)
> {
> ...
> if (rdt_alloc_capable)
> ...
Because it's in core.c, and is only called by get_rdt_resources() as part of the arch code's
resctrl_late_init(). This can stay as it is once the filesystem code is moved out to
/fs/resctrl; there was no need to touch it.
Thanks,
James
* [PATCH v5 20/24] x86/resctrl: Add cpu online callback for resctrl work
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
` (18 preceding siblings ...)
2023-07-28 16:42 ` [PATCH v5 19/24] x86/resctrl: Add helpers for system wide mon/alloc capable James Morse
@ 2023-07-28 16:42 ` James Morse
2023-08-09 22:38 ` Reinette Chatre
2023-07-28 16:42 ` [PATCH v5 21/24] x86/resctrl: Allow overflow/limbo handlers to be scheduled on any-but cpu James Morse
` (5 subsequent siblings)
25 siblings, 1 reply; 77+ messages in thread
From: James Morse @ 2023-07-28 16:42 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
The resctrl architecture-specific code may need to create a domain when
a CPU comes online; it also needs to reset the CPU's PQR_ASSOC register.
The resctrl filesystem code needs to update the rdtgroup_default CPU
mask when CPUs are brought online.
Currently this is all done in one function, resctrl_online_cpu().
This will need to be split into architecture and filesystem parts
before resctrl can be moved to /fs/.
Pull the rdtgroup_default update work out as a filesystem specific
cpu_online helper. resctrl_online_cpu() is the obvious name for this,
which means the version in core.c needs renaming.
resctrl_online_cpu() is called by the arch code once it has done the
work to add the new CPU to any domains.
In future patches, resctrl_online_cpu() will take the rdtgroup_mutex
itself.
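For illustration, that future shape might look like the following
(hypothetical, based on the note above about taking the mutex):
	int resctrl_online_cpu(unsigned int cpu)
	{
		mutex_lock(&rdtgroup_mutex);
		/* The CPU is set in default rdtgroup after online. */
		cpumask_set_cpu(cpu, &rdtgroup_default.cpu_mask);
		mutex_unlock(&rdtgroup_mutex);

		return 0;
	}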
Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
Signed-off-by: James Morse <james.morse@arm.com>
---
Changes since v3:
* Renamed err to ret
Changes since v4:
* Changes in capitalisation.
---
arch/x86/kernel/cpu/resctrl/core.c | 11 ++++++-----
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 10 ++++++++++
include/linux/resctrl.h | 1 +
3 files changed, 17 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index 8dfede01b0c9..a694563d3929 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -603,19 +603,20 @@ static void clear_closid_rmid(int cpu)
wrmsr(MSR_IA32_PQR_ASSOC, 0, RESCTRL_RESERVED_CLOSID);
}
-static int resctrl_online_cpu(unsigned int cpu)
+static int resctrl_arch_online_cpu(unsigned int cpu)
{
struct rdt_resource *r;
+ int ret;
mutex_lock(&rdtgroup_mutex);
for_each_capable_rdt_resource(r)
domain_add_cpu(cpu, r);
- /* The cpu is set in default rdtgroup after online. */
- cpumask_set_cpu(cpu, &rdtgroup_default.cpu_mask);
clear_closid_rmid(cpu);
+
+ ret = resctrl_online_cpu(cpu);
mutex_unlock(&rdtgroup_mutex);
- return 0;
+ return ret;
}
static void clear_childcpus(struct rdtgroup *r, unsigned int cpu)
@@ -965,7 +966,7 @@ static int __init resctrl_late_init(void)
state = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
"x86/resctrl/cat:online:",
- resctrl_online_cpu, resctrl_offline_cpu);
+ resctrl_arch_online_cpu, resctrl_offline_cpu);
if (state < 0)
return state;
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index fef78a3dc632..7bd3a3dc0f44 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -3868,6 +3868,16 @@ int resctrl_online_domain(struct rdt_resource *r, struct rdt_domain *d)
return 0;
}
+int resctrl_online_cpu(unsigned int cpu)
+{
+ lockdep_assert_held(&rdtgroup_mutex);
+
+ /* The CPU is set in default rdtgroup after online. */
+ cpumask_set_cpu(cpu, &rdtgroup_default.cpu_mask);
+
+ return 0;
+}
+
/*
* rdtgroup_init - rdtgroup initialization
*
diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
index 5e4b4df9610b..35d3c97df212 100644
--- a/include/linux/resctrl.h
+++ b/include/linux/resctrl.h
@@ -223,6 +223,7 @@ u32 resctrl_arch_get_config(struct rdt_resource *r, struct rdt_domain *d,
u32 closid, enum resctrl_conf_type type);
int resctrl_online_domain(struct rdt_resource *r, struct rdt_domain *d);
void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d);
+int resctrl_online_cpu(unsigned int cpu);
/**
* resctrl_arch_rmid_read() - Read the eventid counter corresponding to rmid
--
2.39.2
* Re: [PATCH v5 20/24] x86/resctrl: Add cpu online callback for resctrl work
2023-07-28 16:42 ` [PATCH v5 20/24] x86/resctrl: Add cpu online callback for resctrl work James Morse
@ 2023-08-09 22:38 ` Reinette Chatre
0 siblings, 0 replies; 77+ messages in thread
From: Reinette Chatre @ 2023-08-09 22:38 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi James,
(please also check subject lines for CPU instead of cpu)
On 7/28/2023 9:42 AM, James Morse wrote:
> The resctrl architecture specific code may need to create a domain when
> a CPU comes online, it also needs to reset the CPUs PQR_ASSOC register.
> The resctrl filesystem code needs to update the rdtgroup_default CPU
> mask when CPUs are brought online.
>
> Currently this is all done in one function, resctrl_online_cpu().
> This will need to be split into architecture and filesystem parts
> before resctrl can be moved to /fs/.
>
> Pull the rdtgroup_default update work out as a filesystem specific
> cpu_online helper. resctrl_online_cpu() is the obvious name for this,
> which means the version in core.c needs renaming.
>
> resctrl_online_cpu() is called by the arch code once it has done the
> work to add the new CPU to any domains.
>
> In future patches, resctrl_online_cpu() will take the rdtgroup_mutex
> itself.
>
> Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
> Signed-off-by: James Morse <james.morse@arm.com>
> ---
> Changes since v3:
> * Renamed err to ret
>
> Changes since v4:
> * Changes in capitalisation.
> ---
> arch/x86/kernel/cpu/resctrl/core.c | 11 ++++++-----
> arch/x86/kernel/cpu/resctrl/rdtgroup.c | 10 ++++++++++
> include/linux/resctrl.h | 1 +
> 3 files changed, 17 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
> index 8dfede01b0c9..a694563d3929 100644
> --- a/arch/x86/kernel/cpu/resctrl/core.c
> +++ b/arch/x86/kernel/cpu/resctrl/core.c
> @@ -603,19 +603,20 @@ static void clear_closid_rmid(int cpu)
> wrmsr(MSR_IA32_PQR_ASSOC, 0, RESCTRL_RESERVED_CLOSID);
> }
>
> -static int resctrl_online_cpu(unsigned int cpu)
> +static int resctrl_arch_online_cpu(unsigned int cpu)
> {
> struct rdt_resource *r;
> + int ret;
>
> mutex_lock(&rdtgroup_mutex);
> for_each_capable_rdt_resource(r)
> domain_add_cpu(cpu, r);
> - /* The cpu is set in default rdtgroup after online. */
> - cpumask_set_cpu(cpu, &rdtgroup_default.cpu_mask);
> clear_closid_rmid(cpu);
> +
> + ret = resctrl_online_cpu(cpu);
> mutex_unlock(&rdtgroup_mutex);
>
> - return 0;
> + return ret;
> }
It is unexpected that resctrl_online_cpu() returns an error ... and
then the caller exits with failure without error handling or unwinding
the previous work. Is this error return needed? The function
always returns zero, so it looks like it could just be void.
>
> static void clear_childcpus(struct rdtgroup *r, unsigned int cpu)
> @@ -965,7 +966,7 @@ static int __init resctrl_late_init(void)
>
> state = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
> "x86/resctrl/cat:online:",
> - resctrl_online_cpu, resctrl_offline_cpu);
> + resctrl_arch_online_cpu, resctrl_offline_cpu);
> if (state < 0)
> return state;
>
> diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> index fef78a3dc632..7bd3a3dc0f44 100644
> --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> @@ -3868,6 +3868,16 @@ int resctrl_online_domain(struct rdt_resource *r, struct rdt_domain *d)
> return 0;
> }
>
> +int resctrl_online_cpu(unsigned int cpu)
> +{
> + lockdep_assert_held(&rdtgroup_mutex);
> +
> + /* The CPU is set in default rdtgroup after online. */
> + cpumask_set_cpu(cpu, &rdtgroup_default.cpu_mask);
> +
> + return 0;
> +}
> +
> /*
> * rdtgroup_init - rdtgroup initialization
> *
> diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
> index 5e4b4df9610b..35d3c97df212 100644
> --- a/include/linux/resctrl.h
> +++ b/include/linux/resctrl.h
> @@ -223,6 +223,7 @@ u32 resctrl_arch_get_config(struct rdt_resource *r, struct rdt_domain *d,
> u32 closid, enum resctrl_conf_type type);
> int resctrl_online_domain(struct rdt_resource *r, struct rdt_domain *d);
> void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d);
> +int resctrl_online_cpu(unsigned int cpu);
>
> /**
> * resctrl_arch_rmid_read() - Read the eventid counter corresponding to rmid
Reinette
* [PATCH v5 21/24] x86/resctrl: Allow overflow/limbo handlers to be scheduled on any-but cpu
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
` (19 preceding siblings ...)
2023-07-28 16:42 ` [PATCH v5 20/24] x86/resctrl: Add cpu online callback for resctrl work James Morse
@ 2023-07-28 16:42 ` James Morse
2023-08-09 22:38 ` Reinette Chatre
2023-07-28 16:42 ` [PATCH v5 22/24] x86/resctrl: Add cpu offline callback for resctrl work James Morse
` (4 subsequent siblings)
25 siblings, 1 reply; 77+ messages in thread
From: James Morse @ 2023-07-28 16:42 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
When a CPU is taken offline resctrl may need to move the overflow or
limbo handlers to run on a different CPU.
Once the offline callbacks have been split, cqm_setup_limbo_handler()
will be called while the CPU that is going offline is still present
in the cpu_mask.
Pass the CPU to exclude to cqm_setup_limbo_handler() and
mbm_setup_overflow_handler(). These functions can use a variant of
cpumask_any_but() when selecting the CPU. -1 is used to indicate no CPUs
need excluding.
A subsequent patch moves these calls to be before CPUs have been removed,
so this exclude_cpus behaviour is temporary.
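In use, the new parameter looks like this (condensed from the callers in
the diff below):
	/* No CPU needs excluding: pick any housekeeping CPU in the domain. */
	cqm_setup_limbo_handler(d, CQM_LIMBOCHECK_INTERVAL, RESCTRL_PICK_ANY_CPU);
	/* After the subsequent patch, the CPU going offline can be excluded: */
	cqm_setup_limbo_handler(d, 0, cpu);	/* 'cpu' is being taken offline */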
Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
Signed-off-by: James Morse <james.morse@arm.com>
---
Changes since v2:
* Rephrased a comment to avoid a two letter bad-word. (we)
* Avoid assigning mbm_work_cpu if the domain is going to be free()d
* Added cpumask_any_housekeeping_but(), I dislike the name
Changes since v3:
* Marked an explanatory comment as temporary as the subsequent patch is
no longer adjacent.
Changes since v4:
* Check against RESCTRL_PICK_ANY_CPU instead of -1.
* Leave cqm_work_cpu as nr_cpu_ids when no CPU is available.
* Made cpumask_any_housekeeping_but() more readable.
---
arch/x86/kernel/cpu/resctrl/core.c | 8 +++--
arch/x86/kernel/cpu/resctrl/internal.h | 36 ++++++++++++++++++++--
arch/x86/kernel/cpu/resctrl/monitor.c | 42 +++++++++++++++++++++-----
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 6 ++--
include/linux/resctrl.h | 2 ++
5 files changed, 81 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index a694563d3929..d39572a0a3cd 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -582,12 +582,16 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
if (r == &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl) {
if (is_mbm_enabled() && cpu == d->mbm_work_cpu) {
cancel_delayed_work(&d->mbm_over);
- mbm_setup_overflow_handler(d, 0);
+ /*
+ * temporary: exclude_cpu=-1 as this CPU has already
+ * been removed by cpumask_clear_cpu()
+ */
+ mbm_setup_overflow_handler(d, 0, RESCTRL_PICK_ANY_CPU);
}
if (is_llc_occupancy_enabled() && cpu == d->cqm_work_cpu &&
has_busy_rmid(d)) {
cancel_delayed_work(&d->cqm_limbo);
- cqm_setup_limbo_handler(d, 0);
+ cqm_setup_limbo_handler(d, 0, RESCTRL_PICK_ANY_CPU);
}
}
}
diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index f99e0a1f39c8..655418c23c0e 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -79,6 +79,36 @@ static inline unsigned int cpumask_any_housekeeping(const struct cpumask *mask)
return cpu;
}
+/**
+ * cpumask_any_housekeeping_but() - Chose any cpu in @mask, preferring those
+ * that aren't marked nohz_full, excluding
+ * the provided CPU
+ * @mask: The mask to pick a CPU from.
+ * @exclude_cpu:The CPU to avoid picking.
+ *
+ * Returns a CPU from @mask, but not @exclude_cpus. If there are housekeeping
+ * CPUs that don't use nohz_full, these are preferred.
+ * Returns >= nr_cpu_ids if no CPUs are available.
+ */
+static inline unsigned int
+cpumask_any_housekeeping_but(const struct cpumask *mask, int exclude_cpu)
+{
+ unsigned int cpu, hk_cpu;
+
+ cpu = cpumask_any_but(mask, exclude_cpu);
+ if (!tick_nohz_full_cpu(cpu))
+ return cpu;
+
+ hk_cpu = cpumask_nth_andnot(0, mask, tick_nohz_full_mask);
+ if (hk_cpu == exclude_cpu)
+ hk_cpu = cpumask_nth_andnot(1, mask, tick_nohz_full_mask);
+
+ if (hk_cpu < nr_cpu_ids)
+ cpu = hk_cpu;
+
+ return cpu;
+}
+
struct rdt_fs_context {
struct kernfs_fs_context kfc;
bool enable_cdpl2;
@@ -564,11 +594,13 @@ void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
struct rdt_domain *d, struct rdtgroup *rdtgrp,
int evtid, int first);
void mbm_setup_overflow_handler(struct rdt_domain *dom,
- unsigned long delay_ms);
+ unsigned long delay_ms,
+ int exclude_cpu);
void mbm_handle_overflow(struct work_struct *work);
void __init intel_rdt_mbm_apply_quirk(void);
bool is_mba_sc(struct rdt_resource *r);
-void cqm_setup_limbo_handler(struct rdt_domain *dom, unsigned long delay_ms);
+void cqm_setup_limbo_handler(struct rdt_domain *dom, unsigned long delay_ms,
+ int exclude_cpu);
void cqm_handle_limbo(struct work_struct *work);
bool has_busy_rmid(struct rdt_domain *d);
void __check_limbo(struct rdt_domain *d, bool force_free);
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index c0b1ad8d8f6d..471cdc4e4eae 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -493,7 +493,8 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
* setup up the limbo worker.
*/
if (!has_busy_rmid(d))
- cqm_setup_limbo_handler(d, CQM_LIMBOCHECK_INTERVAL);
+ cqm_setup_limbo_handler(d, CQM_LIMBOCHECK_INTERVAL,
+ RESCTRL_PICK_ANY_CPU);
set_bit(idx, d->rmid_busy_llc);
entry->busy++;
}
@@ -816,15 +817,28 @@ void cqm_handle_limbo(struct work_struct *work)
mutex_unlock(&rdtgroup_mutex);
}
-void cqm_setup_limbo_handler(struct rdt_domain *dom, unsigned long delay_ms)
+/**
+ * cqm_setup_limbo_handler() - Schedule the limbo handler to run for this
+ * domain.
+ * @delay_ms: How far in the future the handler should run.
+ * @exclude_cpu: Which CPU the handler should not run on,
+ * RESCTRL_PICK_ANY_CPU to pick any CPU.
+ */
+void cqm_setup_limbo_handler(struct rdt_domain *dom, unsigned long delay_ms,
+ int exclude_cpu)
{
unsigned long delay = msecs_to_jiffies(delay_ms);
int cpu;
- cpu = cpumask_any_housekeeping(&dom->cpu_mask);
+ if (exclude_cpu == RESCTRL_PICK_ANY_CPU)
+ cpu = cpumask_any_housekeeping(&dom->cpu_mask);
+ else
+ cpu = cpumask_any_housekeeping_but(&dom->cpu_mask,
+ exclude_cpu);
dom->cqm_work_cpu = cpu;
- schedule_delayed_work_on(cpu, &dom->cqm_limbo, delay);
+ if (cpu < nr_cpu_ids)
+ schedule_delayed_work_on(cpu, &dom->cqm_limbo, delay);
}
void mbm_handle_overflow(struct work_struct *work)
@@ -870,7 +884,15 @@ void mbm_handle_overflow(struct work_struct *work)
mutex_unlock(&rdtgroup_mutex);
}
-void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms)
+/**
+ * mbm_setup_overflow_handler() - Schedule the overflow handler to run for this
+ * domain.
+ * @delay_ms: How far in the future the handler should run.
+ * @exclude_cpu: Which CPU the handler should not run on,
+ * RESCTRL_PICK_ANY_CPU to pick any CPU.
+ */
+void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms,
+ int exclude_cpu)
{
unsigned long delay = msecs_to_jiffies(delay_ms);
int cpu;
@@ -881,9 +903,15 @@ void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms)
*/
if (!resctrl_mounted || !resctrl_arch_mon_capable())
return;
- cpu = cpumask_any_housekeeping(&dom->cpu_mask);
+ if (exclude_cpu == RESCTRL_PICK_ANY_CPU)
+ cpu = cpumask_any_housekeeping(&dom->cpu_mask);
+ else
+ cpu = cpumask_any_housekeeping_but(&dom->cpu_mask,
+ exclude_cpu);
dom->mbm_work_cpu = cpu;
- schedule_delayed_work_on(cpu, &dom->mbm_over, delay);
+
+ if (cpu < nr_cpu_ids)
+ schedule_delayed_work_on(cpu, &dom->mbm_over, delay);
}
static int dom_data_init(struct rdt_resource *r)
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 7bd3a3dc0f44..dac7ed7ac71a 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -2552,7 +2552,8 @@ static int rdt_get_tree(struct fs_context *fc)
if (is_mbm_enabled()) {
r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
list_for_each_entry(dom, &r->domains, list)
- mbm_setup_overflow_handler(dom, MBM_OVERFLOW_INTERVAL);
+ mbm_setup_overflow_handler(dom, MBM_OVERFLOW_INTERVAL,
+ RESCTRL_PICK_ANY_CPU);
}
goto out;
@@ -3850,7 +3851,8 @@ int resctrl_online_domain(struct rdt_resource *r, struct rdt_domain *d)
if (is_mbm_enabled()) {
INIT_DELAYED_WORK(&d->mbm_over, mbm_handle_overflow);
- mbm_setup_overflow_handler(d, MBM_OVERFLOW_INTERVAL);
+ mbm_setup_overflow_handler(d, MBM_OVERFLOW_INTERVAL,
+ RESCTRL_PICK_ANY_CPU);
}
if (is_llc_occupancy_enabled())
diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
index 35d3c97df212..56b4645940a7 100644
--- a/include/linux/resctrl.h
+++ b/include/linux/resctrl.h
@@ -10,6 +10,8 @@
#define RESCTRL_RESERVED_CLOSID 0
#define RESCTRL_RESERVED_RMID 0
+#define RESCTRL_PICK_ANY_CPU -1
+
#ifdef CONFIG_PROC_CPU_RESCTRL
int proc_resctrl_show(struct seq_file *m,
--
2.39.2
* Re: [PATCH v5 21/24] x86/resctrl: Allow overflow/limbo handlers to be scheduled on any-but cpu
2023-07-28 16:42 ` [PATCH v5 21/24] x86/resctrl: Allow overflow/limbo handlers to be scheduled on any-but cpu James Morse
@ 2023-08-09 22:38 ` Reinette Chatre
2023-08-24 16:57 ` James Morse
0 siblings, 1 reply; 77+ messages in thread
From: Reinette Chatre @ 2023-08-09 22:38 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi James,
On 7/28/2023 9:42 AM, James Morse wrote:
> When a CPU is taken offline resctrl may need to move the overflow or
> limbo handlers to run on a different CPU.
>
> Once the offline callbacks have been split, cqm_setup_limbo_handler()
> will be called while the CPU that is going offline is still present
> in the cpu_mask.
>
> Pass the CPU to exclude to cqm_setup_limbo_handler() and
> mbm_setup_overflow_handler(). These functions can use a variant of
> cpumask_any_but() when selecting the CPU. -1 is used to indicate no CPUs
> need excluding.
>
> A subsequent patch moves these calls to be before CPUs have been removed,
> so this exclude_cpus behaviour is temporary.
>
> Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
> Signed-off-by: James Morse <james.morse@arm.com>
> ---
> Changes since v2:
> * Rephrased a comment to avoid a two letter bad-word. (we)
> * Avoid assigning mbm_work_cpu if the domain is going to be free()d
> * Added cpumask_any_housekeeping_but(), I dislike the name
>
> Changes since v3:
> * Marked an explanatory comment as temporary as the subsequent patch is
> no longer adjacent.
>
> Changes since v4:
> * Check against RESCTRL_PICK_ANY_CPU instead of -1.
> * Leave cqm_work_cpu as nr_cpu_ids when no CPU is available.
> * Made cpumask_any_housekeeping_but() more readable.
> ---
> arch/x86/kernel/cpu/resctrl/core.c | 8 +++--
> arch/x86/kernel/cpu/resctrl/internal.h | 36 ++++++++++++++++++++--
> arch/x86/kernel/cpu/resctrl/monitor.c | 42 +++++++++++++++++++++-----
> arch/x86/kernel/cpu/resctrl/rdtgroup.c | 6 ++--
> include/linux/resctrl.h | 2 ++
> 5 files changed, 81 insertions(+), 13 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
> index a694563d3929..d39572a0a3cd 100644
> --- a/arch/x86/kernel/cpu/resctrl/core.c
> +++ b/arch/x86/kernel/cpu/resctrl/core.c
> @@ -582,12 +582,16 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
> if (r == &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl) {
> if (is_mbm_enabled() && cpu == d->mbm_work_cpu) {
> cancel_delayed_work(&d->mbm_over);
> - mbm_setup_overflow_handler(d, 0);
> + /*
> + * temporary: exclude_cpu=-1 as this CPU has already
> + * been removed by cpumask_clear_cpu()d
> + */
> + mbm_setup_overflow_handler(d, 0, RESCTRL_PICK_ANY_CPU);
> }
> if (is_llc_occupancy_enabled() && cpu == d->cqm_work_cpu &&
> has_busy_rmid(d)) {
> cancel_delayed_work(&d->cqm_limbo);
> - cqm_setup_limbo_handler(d, 0);
> + cqm_setup_limbo_handler(d, 0, RESCTRL_PICK_ANY_CPU);
> }
> }
> }
> diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
> index f99e0a1f39c8..655418c23c0e 100644
> --- a/arch/x86/kernel/cpu/resctrl/internal.h
> +++ b/arch/x86/kernel/cpu/resctrl/internal.h
> @@ -79,6 +79,36 @@ static inline unsigned int cpumask_any_housekeeping(const struct cpumask *mask)
> return cpu;
> }
>
> +/**
> + * cpumask_any_housekeeping_but() - Chose any cpu in @mask, preferring those
cpu -> CPU
> + * that aren't marked nohz_full, excluding
> + * the provided CPU
> + * @mask: The mask to pick a CPU from.
> + * @exclude_cpu:The CPU to avoid picking.
> + *
> + * Returns a CPU from @mask, but not @exclude_cpus. If there are housekeeping
exclude_cpus -> exclude_cpu
> + * CPUs that don't use nohz_full, these are preferred.
> + * Returns >= nr_cpu_ids if no CPUs are available.
> + */
> +static inline unsigned int
> +cpumask_any_housekeeping_but(const struct cpumask *mask, int exclude_cpu)
> +{
> + unsigned int cpu, hk_cpu;
> +
> + cpu = cpumask_any_but(mask, exclude_cpu);
> + if (!tick_nohz_full_cpu(cpu))
> + return cpu;
> +
> + hk_cpu = cpumask_nth_andnot(0, mask, tick_nohz_full_mask);
> + if (hk_cpu == exclude_cpu)
> + hk_cpu = cpumask_nth_andnot(1, mask, tick_nohz_full_mask);
> +
> + if (hk_cpu < nr_cpu_ids)
> + cpu = hk_cpu;
> +
> + return cpu;
> +}
> +
> struct rdt_fs_context {
> struct kernfs_fs_context kfc;
> bool enable_cdpl2;
> @@ -564,11 +594,13 @@ void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
> struct rdt_domain *d, struct rdtgroup *rdtgrp,
> int evtid, int first);
> void mbm_setup_overflow_handler(struct rdt_domain *dom,
> - unsigned long delay_ms);
> + unsigned long delay_ms,
> + int exclude_cpu);
> void mbm_handle_overflow(struct work_struct *work);
> void __init intel_rdt_mbm_apply_quirk(void);
> bool is_mba_sc(struct rdt_resource *r);
> -void cqm_setup_limbo_handler(struct rdt_domain *dom, unsigned long delay_ms);
> +void cqm_setup_limbo_handler(struct rdt_domain *dom, unsigned long delay_ms,
> + int exclude_cpu);
> void cqm_handle_limbo(struct work_struct *work);
> bool has_busy_rmid(struct rdt_domain *d);
> void __check_limbo(struct rdt_domain *d, bool force_free);
> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
> index c0b1ad8d8f6d..471cdc4e4eae 100644
> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
> @@ -493,7 +493,8 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
> * setup up the limbo worker.
> */
> if (!has_busy_rmid(d))
> - cqm_setup_limbo_handler(d, CQM_LIMBOCHECK_INTERVAL);
> + cqm_setup_limbo_handler(d, CQM_LIMBOCHECK_INTERVAL,
> + RESCTRL_PICK_ANY_CPU);
> set_bit(idx, d->rmid_busy_llc);
> entry->busy++;
> }
> @@ -816,15 +817,28 @@ void cqm_handle_limbo(struct work_struct *work)
> mutex_unlock(&rdtgroup_mutex);
> }
>
> -void cqm_setup_limbo_handler(struct rdt_domain *dom, unsigned long delay_ms)
> +/**
> + * cqm_setup_limbo_handler() - Schedule the limbo handler to run for this
> + * domain.
> + * @delay_ms: How far in the future the handler should run.
> + * @exclude_cpu: Which CPU the handler should not run on,
> + * RESCTRL_PICK_ANY_CPU to pick any CPU.
> + */
> +void cqm_setup_limbo_handler(struct rdt_domain *dom, unsigned long delay_ms,
> + int exclude_cpu)
> {
> unsigned long delay = msecs_to_jiffies(delay_ms);
> int cpu;
>
> - cpu = cpumask_any_housekeeping(&dom->cpu_mask);
> + if (exclude_cpu == RESCTRL_PICK_ANY_CPU)
> + cpu = cpumask_any_housekeeping(&dom->cpu_mask);
> + else
> + cpu = cpumask_any_housekeeping_but(&dom->cpu_mask,
> + exclude_cpu);
Having callers need to do this checking seems unnecessary and makes the
code complicated. Can cpumask_any_housekeeping_but() instead be made
slightly smarter to handle the case where exclude_cpu == RESCTRL_PICK_ANY_CPU ?
Looks like there is a bit of duplication between
cpumask_any_housekeeping() and cpumask_any_housekeeping_but().
> dom->cqm_work_cpu = cpu;
>
> - schedule_delayed_work_on(cpu, &dom->cqm_limbo, delay);
> + if (cpu < nr_cpu_ids)
> + schedule_delayed_work_on(cpu, &dom->cqm_limbo, delay);
> }
>
> void mbm_handle_overflow(struct work_struct *work)
> @@ -870,7 +884,15 @@ void mbm_handle_overflow(struct work_struct *work)
> mutex_unlock(&rdtgroup_mutex);
> }
>
> -void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms)
> +/**
> + * mbm_setup_overflow_handler() - Schedule the overflow handler to run for this
> + * domain.
> + * @delay_ms: How far in the future the handler should run.
> + * @exclude_cpu: Which CPU the handler should not run on,
> + * RESCTRL_PICK_ANY_CPU to pick any CPU.
> + */
> +void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms,
> + int exclude_cpu)
> {
> unsigned long delay = msecs_to_jiffies(delay_ms);
> int cpu;
> @@ -881,9 +903,15 @@ void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms)
> */
> if (!resctrl_mounted || !resctrl_arch_mon_capable())
> return;
> - cpu = cpumask_any_housekeeping(&dom->cpu_mask);
> + if (exclude_cpu == RESCTRL_PICK_ANY_CPU)
> + cpu = cpumask_any_housekeeping(&dom->cpu_mask);
> + else
> + cpu = cpumask_any_housekeeping_but(&dom->cpu_mask,
> + exclude_cpu);
> dom->mbm_work_cpu = cpu;
> - schedule_delayed_work_on(cpu, &dom->mbm_over, delay);
> +
> + if (cpu < nr_cpu_ids)
> + schedule_delayed_work_on(cpu, &dom->mbm_over, delay);
> }
>
Reinette
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: [PATCH v5 21/24] x86/resctrl: Allow overflow/limbo handlers to be scheduled on any-but cpu
2023-08-09 22:38 ` Reinette Chatre
@ 2023-08-24 16:57 ` James Morse
0 siblings, 0 replies; 77+ messages in thread
From: James Morse @ 2023-08-24 16:57 UTC (permalink / raw)
To: Reinette Chatre, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi Reinette,
On 09/08/2023 23:38, Reinette Chatre wrote:
> On 7/28/2023 9:42 AM, James Morse wrote:
>> When a CPU is taken offline, resctrl may need to move the overflow or
>> limbo handlers to run on a different CPU.
>>
>> Once the offline callbacks have been split, cqm_setup_limbo_handler()
>> will be called while the CPU that is going offline is still present
>> in the cpu_mask.
>>
>> Pass the CPU to exclude to cqm_setup_limbo_handler() and
>> mbm_setup_overflow_handler(). These functions can use a variant of
>> cpumask_any_but() when selecting the CPU. -1 is used to indicate no CPUs
>> need excluding.
>>
>> A subsequent patch moves these calls to be before CPUs have been removed,
>> so this exclude_cpus behaviour is temporary.
>> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
>> index c0b1ad8d8f6d..471cdc4e4eae 100644
>> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
>> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
>> @@ -816,15 +817,28 @@ void cqm_handle_limbo(struct work_struct *work)
>> mutex_unlock(&rdtgroup_mutex);
>> }
>>
>> -void cqm_setup_limbo_handler(struct rdt_domain *dom, unsigned long delay_ms)
>> +/**
>> + * cqm_setup_limbo_handler() - Schedule the limbo handler to run for this
>> + * domain.
>> + * @delay_ms: How far in the future the handler should run.
>> + * @exclude_cpu: Which CPU the handler should not run on,
>> + * RESCTRL_PICK_ANY_CPU to pick any CPU.
>> + */
>> +void cqm_setup_limbo_handler(struct rdt_domain *dom, unsigned long delay_ms,
>> + int exclude_cpu)
>> {
>> unsigned long delay = msecs_to_jiffies(delay_ms);
>> int cpu;
>>
>> - cpu = cpumask_any_housekeeping(&dom->cpu_mask);
>> + if (exclude_cpu == RESCTRL_PICK_ANY_CPU)
>> + cpu = cpumask_any_housekeeping(&dom->cpu_mask);
>> + else
>> + cpu = cpumask_any_housekeeping_but(&dom->cpu_mask,
>> + exclude_cpu);
>
> Having callers need to do this checking seems unnecessary and makes the
> code complicated. Can cpumask_any_housekeeping_but() instead be made
> slightly smarter to handle the case where exclude_cpu == RESCTRL_PICK_ANY_CPU ?
>
> Looks like there is a bit of duplication between
> cpumask_any_housekeeping() and cpumask_any_housekeeping_but().
Yup, this was because I was originally going to add them to cpumask.h, but figured it
would be easier to leave them here - in a shape that could be moved to cpumask.h if anyone
else needs them.
Using one helper for both would simplify things for resctrl, I'll do that.
Thanks,
James
^ permalink raw reply [flat|nested] 77+ messages in thread
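For context, a minimal sketch of the single merged helper James agrees to above, assuming RESCTRL_PICK_ANY_CPU is defined as -1 so that cpumask_any_but() naturally excludes nothing:

static inline unsigned int
cpumask_any_housekeeping(const struct cpumask *mask, int exclude_cpu)
{
	unsigned int cpu, hk_cpu;

	/* exclude_cpu == -1 matches no CPU, so nothing is excluded */
	cpu = cpumask_any_but(mask, exclude_cpu);
	if (!tick_nohz_full_cpu(cpu))
		return cpu;

	/* Prefer a housekeeping CPU, skipping the excluded one if needed */
	hk_cpu = cpumask_nth_andnot(0, mask, tick_nohz_full_mask);
	if (hk_cpu == exclude_cpu)
		hk_cpu = cpumask_nth_andnot(1, mask, tick_nohz_full_mask);

	if (hk_cpu < nr_cpu_ids)
		cpu = hk_cpu;

	return cpu;
}

Callers such as cqm_setup_limbo_handler() could then pass exclude_cpu straight through without branching on RESCTRL_PICK_ANY_CPU themselves.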
* [PATCH v5 22/24] x86/resctrl: Add cpu offline callback for resctrl work
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
` (20 preceding siblings ...)
2023-07-28 16:42 ` [PATCH v5 21/24] x86/resctrl: Allow overflow/limbo handlers to be scheduled on any-but cpu James Morse
@ 2023-07-28 16:42 ` James Morse
2023-07-28 16:42 ` [PATCH v5 23/24] x86/resctrl: Move domain helper migration into resctrl_offline_cpu() James Morse
` (3 subsequent siblings)
25 siblings, 0 replies; 77+ messages in thread
From: James Morse @ 2023-07-28 16:42 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
The resctrl architecture-specific code may need to free a domain when
a CPU goes offline; it also needs to reset the CPU's PQR_ASSOC register.
Amongst other things, the resctrl filesystem code needs to clear this
CPU from the cpu_mask of any control and monitor groups.
Currently this is all done in core.c and called from
resctrl_offline_cpu(), making the split between architecture and
filesystem code unclear.
Move the filesystem work to remove the CPU from the control and monitor
groups into a filesystem helper called resctrl_offline_cpu(), and rename
the one in core.c to resctrl_arch_offline_cpu().
The rdtgroup_mutex is unlocked and locked again in the call in
preparation for changing the locking rules for the architecture
code.
Signed-off-by: James Morse <james.morse@arm.com>
---
arch/x86/kernel/cpu/resctrl/core.c | 25 +++++--------------------
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 24 ++++++++++++++++++++++++
include/linux/resctrl.h | 1 +
3 files changed, 30 insertions(+), 20 deletions(-)
diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index d39572a0a3cd..6eb9408a942a 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -623,31 +623,15 @@ static int resctrl_arch_online_cpu(unsigned int cpu)
return ret;
}
-static void clear_childcpus(struct rdtgroup *r, unsigned int cpu)
+static int resctrl_arch_offline_cpu(unsigned int cpu)
{
- struct rdtgroup *cr;
-
- list_for_each_entry(cr, &r->mon.crdtgrp_list, mon.crdtgrp_list) {
- if (cpumask_test_and_clear_cpu(cpu, &cr->cpu_mask)) {
- break;
- }
- }
-}
-
-static int resctrl_offline_cpu(unsigned int cpu)
-{
- struct rdtgroup *rdtgrp;
struct rdt_resource *r;
mutex_lock(&rdtgroup_mutex);
+ resctrl_offline_cpu(cpu);
+
for_each_capable_rdt_resource(r)
domain_remove_cpu(cpu, r);
- list_for_each_entry(rdtgrp, &rdt_all_groups, rdtgroup_list) {
- if (cpumask_test_and_clear_cpu(cpu, &rdtgrp->cpu_mask)) {
- clear_childcpus(rdtgrp, cpu);
- break;
- }
- }
clear_closid_rmid(cpu);
mutex_unlock(&rdtgroup_mutex);
@@ -970,7 +954,8 @@ static int __init resctrl_late_init(void)
state = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
"x86/resctrl/cat:online:",
- resctrl_arch_online_cpu, resctrl_offline_cpu);
+ resctrl_arch_online_cpu,
+ resctrl_arch_offline_cpu);
if (state < 0)
return state;
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index dac7ed7ac71a..12a628b5d476 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -3880,6 +3880,30 @@ int resctrl_online_cpu(unsigned int cpu)
return 0;
}
+static void clear_childcpus(struct rdtgroup *r, unsigned int cpu)
+{
+ struct rdtgroup *cr;
+
+ list_for_each_entry(cr, &r->mon.crdtgrp_list, mon.crdtgrp_list) {
+ if (cpumask_test_and_clear_cpu(cpu, &cr->cpu_mask))
+ break;
+ }
+}
+
+void resctrl_offline_cpu(unsigned int cpu)
+{
+ struct rdtgroup *rdtgrp;
+
+ lockdep_assert_held(&rdtgroup_mutex);
+
+ list_for_each_entry(rdtgrp, &rdt_all_groups, rdtgroup_list) {
+ if (cpumask_test_and_clear_cpu(cpu, &rdtgrp->cpu_mask)) {
+ clear_childcpus(rdtgrp, cpu);
+ break;
+ }
+ }
+}
+
/*
* rdtgroup_init - rdtgroup initialization
*
diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
index 56b4645940a7..f3ef3ceb9c5e 100644
--- a/include/linux/resctrl.h
+++ b/include/linux/resctrl.h
@@ -226,6 +226,7 @@ u32 resctrl_arch_get_config(struct rdt_resource *r, struct rdt_domain *d,
int resctrl_online_domain(struct rdt_resource *r, struct rdt_domain *d);
void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d);
int resctrl_online_cpu(unsigned int cpu);
+void resctrl_offline_cpu(unsigned int cpu);
/**
* resctrl_arch_rmid_read() - Read the eventid counter corresponding to rmid
--
2.39.2
^ permalink raw reply related [flat|nested] 77+ messages in thread
* [PATCH v5 23/24] x86/resctrl: Move domain helper migration into resctrl_offline_cpu()
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
` (21 preceding siblings ...)
2023-07-28 16:42 ` [PATCH v5 22/24] x86/resctrl: Add cpu offline callback for resctrl work James Morse
@ 2023-07-28 16:42 ` James Morse
2023-08-09 22:39 ` Reinette Chatre
2023-07-28 16:42 ` [PATCH v5 24/24] x86/resctrl: Separate arch and fs resctrl locks James Morse
` (2 subsequent siblings)
25 siblings, 1 reply; 77+ messages in thread
From: James Morse @ 2023-07-28 16:42 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
When a CPU is taken offline, the resctrl filesystem code needs to check
if it was the CPU nominated to perform the periodic overflow and limbo
work. If so, another CPU needs to be chosen to do this work.
This is currently done in core.c, mixed in with the code that removes
the CPU from the domain's mask, and potentially free()s the domain.
Move the migration of the overflow and limbo helpers into the filesystem
code, into resctrl_offline_cpu(). As resctrl_offline_cpu() runs before
the architecture code has removed the CPU from the domain mask, the
callers need to be told which CPU is being removed, to avoid picking
it as the new CPU. This uses the exclude_cpu feature previously
added.
Signed-off-by: James Morse <james.morse@arm.com>
---
arch/x86/kernel/cpu/resctrl/core.c | 16 ----------------
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 15 +++++++++++++++
2 files changed, 15 insertions(+), 16 deletions(-)
diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index 6eb9408a942a..edc0dd123317 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -578,22 +578,6 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
return;
}
-
- if (r == &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl) {
- if (is_mbm_enabled() && cpu == d->mbm_work_cpu) {
- cancel_delayed_work(&d->mbm_over);
- /*
- * temporary: exclude_cpu=-1 as this CPU has already
- * been removed by cpumask_clear_cpu()d
- */
- mbm_setup_overflow_handler(d, 0, RESCTRL_PICK_ANY_CPU);
- }
- if (is_llc_occupancy_enabled() && cpu == d->cqm_work_cpu &&
- has_busy_rmid(d)) {
- cancel_delayed_work(&d->cqm_limbo);
- cqm_setup_limbo_handler(d, 0, RESCTRL_PICK_ANY_CPU);
- }
- }
}
static void clear_closid_rmid(int cpu)
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 12a628b5d476..a256a96df487 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -3892,7 +3892,9 @@ static void clear_childcpus(struct rdtgroup *r, unsigned int cpu)
void resctrl_offline_cpu(unsigned int cpu)
{
+ struct rdt_domain *d;
struct rdtgroup *rdtgrp;
+ struct rdt_resource *l3 = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
lockdep_assert_held(&rdtgroup_mutex);
@@ -3902,6 +3904,19 @@ void resctrl_offline_cpu(unsigned int cpu)
break;
}
}
+
+ d = get_domain_from_cpu(cpu, l3);
+ if (d) {
+ if (is_mbm_enabled() && cpu == d->mbm_work_cpu) {
+ cancel_delayed_work(&d->mbm_over);
+ mbm_setup_overflow_handler(d, 0, cpu);
+ }
+ if (is_llc_occupancy_enabled() && cpu == d->cqm_work_cpu &&
+ has_busy_rmid(d)) {
+ cancel_delayed_work(&d->cqm_limbo);
+ cqm_setup_limbo_handler(d, 0, cpu);
+ }
+ }
}
/*
--
2.39.2
^ permalink raw reply related [flat|nested] 77+ messages in thread
* Re: [PATCH v5 23/24] x86/resctrl: Move domain helper migration into resctrl_offline_cpu()
2023-07-28 16:42 ` [PATCH v5 23/24] x86/resctrl: Move domain helper migration into resctrl_offline_cpu() James Morse
@ 2023-08-09 22:39 ` Reinette Chatre
0 siblings, 0 replies; 77+ messages in thread
From: Reinette Chatre @ 2023-08-09 22:39 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi James,
On 7/28/2023 9:42 AM, James Morse wrote:
> When a CPU is taken offline, the resctrl filesystem code needs to check
> if it was the CPU nominated to perform the periodic overflow and limbo
> work. If so, another CPU needs to be chosen to do this work.
>
> This is currently done in core.c, mixed in with the code that removes
> the CPU from the domain's mask, and potentially free()s the domain.
>
> Move the migration of the overflow and limbo helpers into the filesystem
> code, into resctrl_offline_cpu(). As resctrl_offline_cpu() runs before
> the architecture code has removed the CPU from the domain mask, the
> callers need to be told which CPU is being removed, to avoid picking
> it as the new CPU. This uses the exclude_cpu feature previously
> added.
>
> Signed-off-by: James Morse <james.morse@arm.com>
> ---
> arch/x86/kernel/cpu/resctrl/core.c | 16 ----------------
> arch/x86/kernel/cpu/resctrl/rdtgroup.c | 15 +++++++++++++++
> 2 files changed, 15 insertions(+), 16 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
> index 6eb9408a942a..edc0dd123317 100644
> --- a/arch/x86/kernel/cpu/resctrl/core.c
> +++ b/arch/x86/kernel/cpu/resctrl/core.c
> @@ -578,22 +578,6 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
>
> return;
> }
> -
> - if (r == &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl) {
> - if (is_mbm_enabled() && cpu == d->mbm_work_cpu) {
> - cancel_delayed_work(&d->mbm_over);
> - /*
> - * temporary: exclude_cpu=-1 as this CPU has already
> - * been removed by cpumask_clear_cpu()d
> - */
> - mbm_setup_overflow_handler(d, 0, RESCTRL_PICK_ANY_CPU);
> - }
> - if (is_llc_occupancy_enabled() && cpu == d->cqm_work_cpu &&
> - has_busy_rmid(d)) {
> - cancel_delayed_work(&d->cqm_limbo);
> - cqm_setup_limbo_handler(d, 0, RESCTRL_PICK_ANY_CPU);
> - }
> - }
> }
>
> static void clear_closid_rmid(int cpu)
> diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> index 12a628b5d476..a256a96df487 100644
> --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> @@ -3892,7 +3892,9 @@ static void clear_childcpus(struct rdtgroup *r, unsigned int cpu)
>
> void resctrl_offline_cpu(unsigned int cpu)
> {
> + struct rdt_domain *d;
> struct rdtgroup *rdtgrp;
> + struct rdt_resource *l3 = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
Please always keep reverse fir tree order.
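For reference, that convention orders local variable declarations from longest line to shortest, so the declarations above would become:

	struct rdt_resource *l3 = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
	struct rdtgroup *rdtgrp;
	struct rdt_domain *d;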
>
> lockdep_assert_held(&rdtgroup_mutex);
>
> @@ -3902,6 +3904,19 @@ void resctrl_offline_cpu(unsigned int cpu)
> break;
> }
> }
> +
Can there be a l3->mon_capable check here to make things clear?
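One possible shape for such a check, sketched against the hunk below rather than taken from a later revision:

	/* No overflow or limbo work to migrate without L3 monitoring */
	if (!l3->mon_capable)
		return;

	d = get_domain_from_cpu(cpu, l3);

with the rest of the function unchanged; the if (d) test could then be flattened as well.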
> + d = get_domain_from_cpu(cpu, l3);
> + if (d) {
> + if (is_mbm_enabled() && cpu == d->mbm_work_cpu) {
> + cancel_delayed_work(&d->mbm_over);
> + mbm_setup_overflow_handler(d, 0, cpu);
> + }
> + if (is_llc_occupancy_enabled() && cpu == d->cqm_work_cpu &&
> + has_busy_rmid(d)) {
> + cancel_delayed_work(&d->cqm_limbo);
> + cqm_setup_limbo_handler(d, 0, cpu);
> + }
> + }
> }
>
> /*
Reinette
^ permalink raw reply [flat|nested] 77+ messages in thread
* [PATCH v5 24/24] x86/resctrl: Separate arch and fs resctrl locks
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
` (22 preceding siblings ...)
2023-07-28 16:42 ` [PATCH v5 23/24] x86/resctrl: Move domain helper migration into resctrl_offline_cpu() James Morse
@ 2023-07-28 16:42 ` James Morse
2023-08-09 22:41 ` Reinette Chatre
2023-08-18 22:05 ` Fenghua Yu
2023-08-03 7:34 ` [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking Shaopeng Tan (Fujitsu)
2023-08-22 8:42 ` Peter Newman
25 siblings, 2 replies; 77+ messages in thread
From: James Morse @ 2023-07-28 16:42 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, peternewman, dfustini
resctrl has one mutex that is taken by the architecture specific code,
and the filesystem parts. The two interact via cpuhp, where the
architecture code updates the domain list. Filesystem handlers that
walk the domains list should not run concurrently with the cpuhp
callback modifying the list.
Exposing a lock from the filesystem code means the interface is not
cleanly defined, and creates the possibility of cross-architecture
lock ordering headaches. The interaction only exists so that certain
filesystem paths are serialised against cpu hotplug. The cpu hotplug
code already has a mechanism to do this using cpus_read_lock().
MPAM's monitors have an overflow interrupt, so it needs to be possible
to walk the domains list in irq context. RCU is ideal for this,
but some paths need to be able to sleep to allocate memory.
Because resctrl_{on,off}line_cpu() take the rdtgroup_mutex as part
of a cpuhp callback, cpus_read_lock() must always be taken first.
rdtgroup_schemata_write() already does this.
Most of the filesystem code's domain list walkers are currently
protected by the rdtgroup_mutex taken in rdtgroup_kn_lock_live().
The exceptions are rdt_bit_usage_show() and the mon_config helpers
which take the lock directly.
Make the domain list protected by RCU. An architecture-specific
lock prevents concurrent writers. rdt_bit_usage_show() can
walk the domain list under rcu_read_lock(). The mon_config helpers
send multiple IPIs; take the cpus_read_lock() in these cases.
The other filesystem list walkers need to be able to sleep.
Add cpus_read_lock() to rdtgroup_kn_lock_live() so that the
cpuhp callbacks can't be invoked when file system operations are
occurring.
Add lockdep_assert_cpus_held() in the cases where the
rdtgroup_kn_lock_live() call isn't obvious.
Resctrl's domain online/offline calls now need to take the
rdtgroup_mutex themselves.
Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
Signed-off-by: James Morse <james.morse@arm.com>
---
Changes since v2:
* Reworded a comment,
* Added a lockdep assertion
* Moved clear_closid_rmid() outside the locked region of cpu
online/offline
Changes since v3:
* Added a header include
---
arch/x86/kernel/cpu/resctrl/core.c | 38 +++++++++-----
arch/x86/kernel/cpu/resctrl/ctrlmondata.c | 16 ++++--
arch/x86/kernel/cpu/resctrl/monitor.c | 4 ++
arch/x86/kernel/cpu/resctrl/pseudo_lock.c | 3 ++
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 64 ++++++++++++++++++++---
include/linux/resctrl.h | 2 +-
6 files changed, 101 insertions(+), 26 deletions(-)
diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index edc0dd123317..f106c68a9be8 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -25,8 +25,15 @@
#include <asm/resctrl.h>
#include "internal.h"
-/* Mutex to protect rdtgroup access. */
-DEFINE_MUTEX(rdtgroup_mutex);
+/*
+ * rdt_domain structures are kfree()d when their last CPU goes offline,
+ * and allocated when the first CPU in a new domain comes online.
+ * The rdt_resource's domain list is updated when this happens. Readers of
+ * the domain list must either take cpus_read_lock(), or rely on an RCU
+ * read-side critical section, to avoid observing concurrent modification.
+ * All writers take this mutex:
+ */
+static DEFINE_MUTEX(domain_list_lock);
/*
* The cached resctrl_pqr_state is strictly per CPU and can never be
@@ -508,6 +515,8 @@ static void domain_add_cpu(int cpu, struct rdt_resource *r)
struct rdt_domain *d;
int err;
+ lockdep_assert_held(&domain_list_lock);
+
d = rdt_find_domain(r, id, &add_pos);
if (IS_ERR(d)) {
pr_warn("Couldn't find cache id for CPU %d\n", cpu);
@@ -541,11 +550,12 @@ static void domain_add_cpu(int cpu, struct rdt_resource *r)
return;
}
- list_add_tail(&d->list, add_pos);
+ list_add_tail_rcu(&d->list, add_pos);
err = resctrl_online_domain(r, d);
if (err) {
- list_del(&d->list);
+ list_del_rcu(&d->list);
+ synchronize_rcu();
domain_free(hw_dom);
}
}
@@ -556,6 +566,8 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
struct rdt_hw_domain *hw_dom;
struct rdt_domain *d;
+ lockdep_assert_held(&domain_list_lock);
+
d = rdt_find_domain(r, id, NULL);
if (IS_ERR_OR_NULL(d)) {
pr_warn("Couldn't find cache id for CPU %d\n", cpu);
@@ -566,7 +578,8 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
cpumask_clear_cpu(cpu, &d->cpu_mask);
if (cpumask_empty(&d->cpu_mask)) {
resctrl_offline_domain(r, d);
- list_del(&d->list);
+ list_del_rcu(&d->list);
+ synchronize_rcu();
/*
* rdt_domain "d" is going to be freed below, so clear
@@ -594,30 +607,29 @@ static void clear_closid_rmid(int cpu)
static int resctrl_arch_online_cpu(unsigned int cpu)
{
struct rdt_resource *r;
- int ret;
- mutex_lock(&rdtgroup_mutex);
+ mutex_lock(&domain_list_lock);
for_each_capable_rdt_resource(r)
domain_add_cpu(cpu, r);
+ mutex_unlock(&domain_list_lock);
+
clear_closid_rmid(cpu);
- ret = resctrl_online_cpu(cpu);
- mutex_unlock(&rdtgroup_mutex);
-
- return ret;
+ return resctrl_online_cpu(cpu);
}
static int resctrl_arch_offline_cpu(unsigned int cpu)
{
struct rdt_resource *r;
- mutex_lock(&rdtgroup_mutex);
resctrl_offline_cpu(cpu);
+ mutex_lock(&domain_list_lock);
for_each_capable_rdt_resource(r)
domain_remove_cpu(cpu, r);
+ mutex_unlock(&domain_list_lock);
+
clear_closid_rmid(cpu);
- mutex_unlock(&rdtgroup_mutex);
return 0;
}
diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
index 55bad57a7bd5..b4f611359d1e 100644
--- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
+++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
@@ -209,6 +209,9 @@ static int parse_line(char *line, struct resctrl_schema *s,
struct rdt_domain *d;
unsigned long dom_id;
+ /* Walking r->domains, ensure it can't race with cpuhp */
+ lockdep_assert_cpus_held();
+
if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP &&
(r->rid == RDT_RESOURCE_MBA || r->rid == RDT_RESOURCE_SMBA)) {
rdt_last_cmd_puts("Cannot pseudo-lock MBA resource\n");
@@ -313,6 +316,9 @@ int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid)
struct rdt_domain *d;
u32 idx;
+ /* Walking r->domains, ensure it can't race with cpuhp */
+ lockdep_assert_cpus_held();
+
if (!zalloc_cpumask_var(&cpu_mask, GFP_KERNEL))
return -ENOMEM;
@@ -378,11 +384,9 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,
return -EINVAL;
buf[nbytes - 1] = '\0';
- cpus_read_lock();
rdtgrp = rdtgroup_kn_lock_live(of->kn);
if (!rdtgrp) {
rdtgroup_kn_unlock(of->kn);
- cpus_read_unlock();
return -ENOENT;
}
rdt_last_cmd_clear();
@@ -444,7 +448,6 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,
out:
rdt_staged_configs_clear();
rdtgroup_kn_unlock(of->kn);
- cpus_read_unlock();
return ret ?: nbytes;
}
@@ -464,6 +467,9 @@ static void show_doms(struct seq_file *s, struct resctrl_schema *schema, int clo
bool sep = false;
u32 ctrl_val;
+ /* Walking r->domains, ensure it can't race with cpuhp */
+ lockdep_assert_cpus_held();
+
seq_printf(s, "%*s:", max_name_width, schema->name);
list_for_each_entry(dom, &r->domains, list) {
if (sep)
@@ -534,8 +540,8 @@ void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
{
int cpu;
- /* When picking a CPU from cpu_mask, ensure it can't race with cpuhp */
- lockdep_assert_held(&rdtgroup_mutex);
+ /* When picking a cpu from cpu_mask, ensure it can't race with cpuhp */
+ lockdep_assert_cpus_held();
/*
* Setup the parameters to pass to mon_event_count() to read the data.
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index 471cdc4e4eae..11b5da93044d 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -15,6 +15,7 @@
* Software Developer Manual June 2016, volume 3, section 17.17.
*/
+#include <linux/cpu.h>
#include <linux/module.h>
#include <linux/percpu.h>
#include <linux/sizes.h>
@@ -484,6 +485,9 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
lockdep_assert_held(&rdtgroup_mutex);
+ /* Walking r->domains, ensure it can't race with cpuhp */
+ lockdep_assert_cpus_held();
+
idx = resctrl_arch_rmid_idx_encode(entry->closid, entry->rmid);
entry->busy = 0;
diff --git a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
index 460421051abf..fc3ed917d173 100644
--- a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
+++ b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
@@ -830,6 +830,9 @@ bool rdtgroup_pseudo_locked_in_hierarchy(struct rdt_domain *d)
struct rdt_domain *d_i;
bool ret = false;
+ /* Walking r->domains, ensure it can't race with cpuhp */
+ lockdep_assert_cpus_held();
+
if (!zalloc_cpumask_var(&cpu_with_psl, GFP_KERNEL))
return true;
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index a256a96df487..47dcf2cb76ca 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -35,6 +35,10 @@
DEFINE_STATIC_KEY_FALSE(rdt_enable_key);
DEFINE_STATIC_KEY_FALSE(rdt_mon_enable_key);
DEFINE_STATIC_KEY_FALSE(rdt_alloc_enable_key);
+
+/* Mutex to protect rdtgroup access. */
+DEFINE_MUTEX(rdtgroup_mutex);
+
static struct kernfs_root *rdt_root;
struct rdtgroup rdtgroup_default;
LIST_HEAD(rdt_all_groups);
@@ -954,7 +958,8 @@ static int rdt_bit_usage_show(struct kernfs_open_file *of,
mutex_lock(&rdtgroup_mutex);
hw_shareable = r->cache.shareable_bits;
- list_for_each_entry(dom, &r->domains, list) {
+ rcu_read_lock();
+ list_for_each_entry_rcu(dom, &r->domains, list) {
if (sep)
seq_putc(seq, ';');
sw_shareable = 0;
@@ -1010,8 +1015,10 @@ static int rdt_bit_usage_show(struct kernfs_open_file *of,
}
sep = true;
}
+ rcu_read_unlock();
seq_putc(seq, '\n');
mutex_unlock(&rdtgroup_mutex);
+
return 0;
}
@@ -1254,6 +1261,9 @@ static bool rdtgroup_mode_test_exclusive(struct rdtgroup *rdtgrp)
struct rdt_domain *d;
u32 ctrl;
+ /* Walking r->domains, ensure it can't race with cpuhp */
+ lockdep_assert_cpus_held();
+
list_for_each_entry(s, &resctrl_schema_all, list) {
r = s->res;
if (r->rid == RDT_RESOURCE_MBA || r->rid == RDT_RESOURCE_SMBA)
@@ -1520,6 +1530,7 @@ static int mbm_config_show(struct seq_file *s, struct rdt_resource *r, u32 evtid
struct rdt_domain *dom;
bool sep = false;
+ cpus_read_lock();
mutex_lock(&rdtgroup_mutex);
list_for_each_entry(dom, &r->domains, list) {
@@ -1536,6 +1547,7 @@ static int mbm_config_show(struct seq_file *s, struct rdt_resource *r, u32 evtid
seq_puts(s, "\n");
mutex_unlock(&rdtgroup_mutex);
+ cpus_read_unlock();
return 0;
}
@@ -1627,6 +1639,9 @@ static int mon_config_write(struct rdt_resource *r, char *tok, u32 evtid)
struct rdt_domain *d;
int ret = 0;
+ /* Walking r->domains, ensure it can't race with cpuhp */
+ lockdep_assert_cpus_held();
+
next:
if (!tok || tok[0] == '\0')
return 0;
@@ -1668,6 +1683,7 @@ static ssize_t mbm_total_bytes_config_write(struct kernfs_open_file *of,
if (nbytes == 0 || buf[nbytes - 1] != '\n')
return -EINVAL;
+ cpus_read_lock();
mutex_lock(&rdtgroup_mutex);
rdt_last_cmd_clear();
@@ -1677,6 +1693,7 @@ static ssize_t mbm_total_bytes_config_write(struct kernfs_open_file *of,
ret = mon_config_write(r, buf, QOS_L3_MBM_TOTAL_EVENT_ID);
mutex_unlock(&rdtgroup_mutex);
+ cpus_read_unlock();
return ret ?: nbytes;
}
@@ -1692,6 +1709,7 @@ static ssize_t mbm_local_bytes_config_write(struct kernfs_open_file *of,
if (nbytes == 0 || buf[nbytes - 1] != '\n')
return -EINVAL;
+ cpus_read_lock();
mutex_lock(&rdtgroup_mutex);
rdt_last_cmd_clear();
@@ -1701,6 +1719,7 @@ static ssize_t mbm_local_bytes_config_write(struct kernfs_open_file *of,
ret = mon_config_write(r, buf, QOS_L3_MBM_LOCAL_EVENT_ID);
mutex_unlock(&rdtgroup_mutex);
+ cpus_read_unlock();
return ret ?: nbytes;
}
@@ -2153,6 +2172,9 @@ static int set_cache_qos_cfg(int level, bool enable)
struct rdt_domain *d;
int cpu;
+ /* Walking r->domains, ensure it can't race with cpuhp */
+ lockdep_assert_cpus_held();
+
if (level == RDT_RESOURCE_L3)
update = l3_qos_cfg_update;
else if (level == RDT_RESOURCE_L2)
@@ -2360,6 +2382,7 @@ struct rdtgroup *rdtgroup_kn_lock_live(struct kernfs_node *kn)
rdtgroup_kn_get(rdtgrp, kn);
+ cpus_read_lock();
mutex_lock(&rdtgroup_mutex);
/* Was this group deleted while we waited? */
@@ -2377,6 +2400,8 @@ void rdtgroup_kn_unlock(struct kernfs_node *kn)
return;
mutex_unlock(&rdtgroup_mutex);
+ cpus_read_unlock();
+
rdtgroup_kn_put(rdtgrp, kn);
}
@@ -2664,6 +2689,9 @@ static int reset_all_ctrls(struct rdt_resource *r)
struct rdt_domain *d;
int i;
+ /* Walking r->domains, ensure it can't race with cpuhp */
+ lockdep_assert_cpus_held();
+
if (!zalloc_cpumask_var(&cpu_mask, GFP_KERNEL))
return -ENOMEM;
@@ -2948,6 +2976,9 @@ static int mkdir_mondata_subdir_alldom(struct kernfs_node *parent_kn,
struct rdt_domain *dom;
int ret;
+ /* Walking r->domains, ensure it can't race with cpuhp */
+ lockdep_assert_cpus_held();
+
list_for_each_entry(dom, &r->domains, list) {
ret = mkdir_mondata_subdir(parent_kn, dom, r, prgrp);
if (ret)
@@ -3766,7 +3797,8 @@ static void domain_destroy_mon_state(struct rdt_domain *d)
kfree(d->mbm_local);
}
-void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d)
+static void _resctrl_offline_domain(struct rdt_resource *r,
+ struct rdt_domain *d)
{
lockdep_assert_held(&rdtgroup_mutex);
@@ -3801,6 +3833,13 @@ void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d)
domain_destroy_mon_state(d);
}
+void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d)
+{
+ mutex_lock(&rdtgroup_mutex);
+ _resctrl_offline_domain(r, d);
+ mutex_unlock(&rdtgroup_mutex);
+}
+
static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_domain *d)
{
u32 idx_limit = resctrl_arch_system_num_rmid_idx();
@@ -3832,7 +3871,7 @@ static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_domain *d)
return 0;
}
-int resctrl_online_domain(struct rdt_resource *r, struct rdt_domain *d)
+static int _resctrl_online_domain(struct rdt_resource *r, struct rdt_domain *d)
{
int err;
@@ -3870,12 +3909,23 @@ int resctrl_online_domain(struct rdt_resource *r, struct rdt_domain *d)
return 0;
}
+int resctrl_online_domain(struct rdt_resource *r, struct rdt_domain *d)
+{
+ int err;
+
+ mutex_lock(&rdtgroup_mutex);
+ err = _resctrl_online_domain(r, d);
+ mutex_unlock(&rdtgroup_mutex);
+
+ return err;
+}
+
int resctrl_online_cpu(unsigned int cpu)
{
- lockdep_assert_held(&rdtgroup_mutex);
-
+ mutex_lock(&rdtgroup_mutex);
/* The CPU is set in default rdtgroup after online. */
cpumask_set_cpu(cpu, &rdtgroup_default.cpu_mask);
+ mutex_unlock(&rdtgroup_mutex);
return 0;
}
@@ -3896,8 +3946,7 @@ void resctrl_offline_cpu(unsigned int cpu)
struct rdtgroup *rdtgrp;
struct rdt_resource *l3 = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
- lockdep_assert_held(&rdtgroup_mutex);
-
+ mutex_lock(&rdtgroup_mutex);
list_for_each_entry(rdtgrp, &rdt_all_groups, rdtgroup_list) {
if (cpumask_test_and_clear_cpu(cpu, &rdtgrp->cpu_mask)) {
clear_childcpus(rdtgrp, cpu);
@@ -3917,6 +3966,7 @@ void resctrl_offline_cpu(unsigned int cpu)
cqm_setup_limbo_handler(d, 0, cpu);
}
}
+ mutex_unlock(&rdtgroup_mutex);
}
/*
diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
index f3ef3ceb9c5e..2bbdd3d591ea 100644
--- a/include/linux/resctrl.h
+++ b/include/linux/resctrl.h
@@ -159,7 +159,7 @@ struct resctrl_schema;
* @cache_level: Which cache level defines scope of this resource
* @cache: Cache allocation related data
* @membw: If the component has bandwidth controls, their properties.
- * @domains: All domains for this resource
+ * @domains: RCU list of all domains for this resource
* @name: Name to use in "schemata" file.
* @data_width: Character width of data when displaying
* @default_ctrl: Specifies default cache cbm or memory B/W percent.
--
2.39.2
^ permalink raw reply related [flat|nested] 77+ messages in thread
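As an aside on the irq-context motivation in the commit message above: with the list published via list_add_tail_rcu(), an MPAM overflow-interrupt handler could locate its domain with only an RCU read-side critical section, along these lines (a sketch; no such handler exists in this series):

	struct rdt_domain *d;

	rcu_read_lock();
	list_for_each_entry_rcu(d, &r->domains, list) {
		if (!cpumask_test_cpu(smp_processor_id(), &d->cpu_mask))
			continue;
		/* handle the overflow for the interrupted CPU's domain here */
		break;
	}
	rcu_read_unlock();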
* Re: [PATCH v5 24/24] x86/resctrl: Separate arch and fs resctrl locks
2023-07-28 16:42 ` [PATCH v5 24/24] x86/resctrl: Separate arch and fs resctrl locks James Morse
@ 2023-08-09 22:41 ` Reinette Chatre
2023-08-24 16:57 ` James Morse
2023-08-18 22:05 ` Fenghua Yu
1 sibling, 1 reply; 77+ messages in thread
From: Reinette Chatre @ 2023-08-09 22:41 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi James,
On 7/28/2023 9:42 AM, James Morse wrote:
> resctrl has one mutex that is taken by the architecture specific code,
> and the filesystem parts. The two interact via cpuhp, where the
> architecture code updates the domain list. Filesystem handlers that
> walk the domains list should not run concurrently with the cpuhp
> callback modifying the list.
>
> Exposing a lock from the filesystem code means the interface is not
> cleanly defined, and creates the possibility of cross-architecture
> lock ordering headaches. The interaction only exists so that certain
> filesystem paths are serialised against cpu hotplug. The cpu hotplug
cpu hotplug -> CPU hotplug
> code already has a mechanism to do this using cpus_read_lock().
>
> MPAM's monitors have an overflow interrupt, so it needs to be possible
> to walk the domains list in irq context. RCU is ideal for this,
> but some paths need to be able to sleep to allocate memory.
>
> Because resctrl_{on,off}line_cpu() take the rdtgroup_mutex as part
> of a cpuhp callback, cpus_read_lock() must always be taken first.
> rdtgroup_schemata_write() already does this.
>
> Most of the filesystem code's domain list walkers are currently
> protected by the rdtgroup_mutex taken in rdtgroup_kn_lock_live().
> The exceptions are rdt_bit_usage_show() and the mon_config helpers
> which take the lock directly.
>
> Make the domain list protected by RCU. An architecture-specific
> lock prevents concurrent writers. rdt_bit_usage_show() can
> walk the domain list under rcu_read_lock(). The mon_config helpers
> send multiple IPIs; take the cpus_read_lock() in these cases.
>
> The other filesystem list walkers need to be able to sleep.
> Add cpus_read_lock() to rdtgroup_kn_lock_live() so that the
> cpuhp callbacks can't be invoked when file system operations are
> occurring.
>
> Add lockdep_assert_cpus_held() in the cases where the
> rdtgroup_kn_lock_live() call isn't obvious.
>
> Resctrl's domain online/offline calls now need to take the
> rdtgroup_mutex themselves.
>
> Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
> Signed-off-by: James Morse <james.morse@arm.com>
...
> @@ -464,6 +467,9 @@ static void show_doms(struct seq_file *s, struct resctrl_schema *schema, int clo
> bool sep = false;
> u32 ctrl_val;
>
> + /* Walking r->domains, ensure it can't race with cpuhp */
> + lockdep_assert_cpus_held();
> +
> seq_printf(s, "%*s:", max_name_width, schema->name);
> list_for_each_entry(dom, &r->domains, list) {
> if (sep)
> @@ -534,8 +540,8 @@ void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
> {
> int cpu;
>
> - /* When picking a CPU from cpu_mask, ensure it can't race with cpuhp */
> - lockdep_assert_held(&rdtgroup_mutex);
> + /* When picking a cpu from cpu_mask, ensure it can't race with cpuhp */
cpu -> CPU
> + lockdep_assert_cpus_held();
>
> /*
> * Setup the parameters to pass to mon_event_count() to read the data.
...
> diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> index a256a96df487..47dcf2cb76ca 100644
> --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> @@ -35,6 +35,10 @@
> DEFINE_STATIC_KEY_FALSE(rdt_enable_key);
> DEFINE_STATIC_KEY_FALSE(rdt_mon_enable_key);
> DEFINE_STATIC_KEY_FALSE(rdt_alloc_enable_key);
> +
> +/* Mutex to protect rdtgroup access. */
> +DEFINE_MUTEX(rdtgroup_mutex);
> +
> static struct kernfs_root *rdt_root;
> struct rdtgroup rdtgroup_default;
> LIST_HEAD(rdt_all_groups);
> @@ -954,7 +958,8 @@ static int rdt_bit_usage_show(struct kernfs_open_file *of,
>
> mutex_lock(&rdtgroup_mutex);
> hw_shareable = r->cache.shareable_bits;
> - list_for_each_entry(dom, &r->domains, list) {
> + rcu_read_lock();
> + list_for_each_entry_rcu(dom, &r->domains, list) {
> if (sep)
> seq_putc(seq, ';');
> sw_shareable = 0;
Does rdt_bit_usage_show() really need RCU? It is another filesystem callback and I
do not see a reason why it should access the domain list in a different way. It
can follow the same pattern as all the other resctrl filesystem ops and use
cpus_read_lock().
> @@ -1010,8 +1015,10 @@ static int rdt_bit_usage_show(struct kernfs_open_file *of,
> }
> sep = true;
> }
> + rcu_read_unlock();
> seq_putc(seq, '\n');
> mutex_unlock(&rdtgroup_mutex);
> +
Unnecessary empty line.
> return 0;
> }
Reinette
^ permalink raw reply [flat|nested] 77+ messages in thread
* Re: [PATCH v5 24/24] x86/resctrl: Separate arch and fs resctrl locks
2023-08-09 22:41 ` Reinette Chatre
@ 2023-08-24 16:57 ` James Morse
0 siblings, 0 replies; 77+ messages in thread
From: James Morse @ 2023-08-24 16:57 UTC (permalink / raw)
To: Reinette Chatre, x86, linux-kernel
Cc: Fenghua Yu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi Reinette,
On 09/08/2023 23:41, Reinette Chatre wrote:
> On 7/28/2023 9:42 AM, James Morse wrote:
>> resctrl has one mutex that is taken by the architecture specific code,
>> and the filesystem parts. The two interact via cpuhp, where the
>> architecture code updates the domain list. Filesystem handlers that
>> walk the domains list should not run concurrently with the cpuhp
>> callback modifying the list.
>>
>> Exposing a lock from the filesystem code means the interface is not
>> cleanly defined, and creates the possibility of cross-architecture
>> lock ordering headaches. The interaction only exists so that certain
>> filesystem paths are serialised against cpu hotplug. The cpu hotplug
>
> cpu hotplug -> CPU hotplug
>
>> code already has a mechanism to do this using cpus_read_lock().
>>
>> MPAM's monitors have an overflow interrupt, so it needs to be possible
>> to walk the domains list in irq context. RCU is ideal for this,
>> but some paths need to be able to sleep to allocate memory.
>>
>> Because resctrl_{on,off}line_cpu() take the rdtgroup_mutex as part
>> of a cpuhp callback, cpus_read_lock() must always be taken first.
>> rdtgroup_schemata_write() already does this.
>>
>> Most of the filesystem code's domain list walkers are currently
>> protected by the rdtgroup_mutex taken in rdtgroup_kn_lock_live().
>> The exceptions are rdt_bit_usage_show() and the mon_config helpers
>> which take the lock directly.
>>
>> Make the domain list protected by RCU. An architecture-specific
>> lock prevents concurrent writers. rdt_bit_usage_show() can
>> walk the domain list under rcu_read_lock(). The mon_config helpers
>> send multiple IPIs; take the cpus_read_lock() in these cases.
>>
>> The other filesystem list walkers need to be able to sleep.
>> Add cpus_read_lock() to rdtgroup_kn_lock_live() so that the
>> cpuhp callbacks can't be invoked when file system operations are
>> occurring.
>>
>> Add lockdep_assert_cpus_held() in the cases where the
>> rdtgroup_kn_lock_live() call isn't obvious.
>>
>> Resctrl's domain online/offline calls now need to take the
>> rdtgroup_mutex themselves.
>> diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> index a256a96df487..47dcf2cb76ca 100644
>> --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> @@ -954,7 +958,8 @@ static int rdt_bit_usage_show(struct kernfs_open_file *of,
>>
>> mutex_lock(&rdtgroup_mutex);
>> hw_shareable = r->cache.shareable_bits;
>> - list_for_each_entry(dom, &r->domains, list) {
>> + rcu_read_lock();
>> + list_for_each_entry_rcu(dom, &r->domains, list) {
>> if (sep)
>> seq_putc(seq, ';');
>> sw_shareable = 0;
>
> Does rdt_bit_usage_show() really need RCU? It is another filesystem callback and I
> do not see a reason why it should access the domain list in a different way. It
> can follow the same pattern as all the other resctrl filesystem ops and use
> cpus_read_lock().
It doesn't today, and it was useful to have an example where RCU was used.
I'll make this call cpus_read_lock() instead.
Thanks,
James
^ permalink raw reply [flat|nested] 77+ messages in thread
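A sketch of that follow-up: with cpus_read_lock() serialising against the cpuhp writers, the walk can stay a plain list_for_each_entry():

	cpus_read_lock();
	mutex_lock(&rdtgroup_mutex);
	hw_shareable = r->cache.shareable_bits;
	list_for_each_entry(dom, &r->domains, list) {
		/* body unchanged from the hunk quoted above */
	}
	seq_putc(seq, '\n');
	mutex_unlock(&rdtgroup_mutex);
	cpus_read_unlock();

	return 0;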
* Re: [PATCH v5 24/24] x86/resctrl: Separate arch and fs resctrl locks
2023-07-28 16:42 ` [PATCH v5 24/24] x86/resctrl: Separate arch and fs resctrl locks James Morse
2023-08-09 22:41 ` Reinette Chatre
@ 2023-08-18 22:05 ` Fenghua Yu
2023-08-24 16:58 ` James Morse
1 sibling, 1 reply; 77+ messages in thread
From: Fenghua Yu @ 2023-08-18 22:05 UTC (permalink / raw)
To: James Morse, x86, linux-kernel
Cc: Reinette Chatre, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi, James,
On 7/28/23 09:42, James Morse wrote:
> resctrl has one mutex that is taken by the architecture specific code,
> and the filesystem parts. The two interact via cpuhp, where the
> architecture code updates the domain list. Filesystem handlers that
> walk the domains list should not run concurrently with the cpuhp
> callback modifying the list.
>
> Exposing a lock from the filesystem code means the interface is not
> cleanly defined, and creates the possibility of cross-architecture
> lock ordering headaches. The interaction only exists so that certain
> filesystem paths are serialised against cpu hotplug. The cpu hotplug
> code already has a mechanism to do this using cpus_read_lock().
>
> MPAM's monitors have an overflow interrupt, so it needs to be possible
> to walk the domains list in irq context. RCU is ideal for this,
> but some paths need to be able to sleep to allocate memory.
>
> Because resctrl_{on,off}line_cpu() take the rdtgroup_mutex as part
> of a cpuhp callback, cpus_read_lock() must always be taken first.
> rdtgroup_schemata_write() already does this.
>
> Most of the filesystem code's domain list walkers are currently
> protected by the rdtgroup_mutex taken in rdtgroup_kn_lock_live().
> The exceptions are rdt_bit_usage_show() and the mon_config helpers
> which take the lock directly.
>
> Make the domain list protected by RCU. An architecture-specific
> lock prevents concurrent writers. rdt_bit_usage_show() can
> walk the domain list under rcu_read_lock(). The mon_config helpers
> send multiple IPIs; take the cpus_read_lock() in these cases.
>
> The other filesystem list walkers need to be able to sleep.
> Add cpus_read_lock() to rdtgroup_kn_lock_live() so that the
> cpuhp callbacks can't be invoked when file system operations are
> occurring.
>
> Add lockdep_assert_cpus_held() in the cases where the
> rdtgroup_kn_lock_live() call isn't obvious.
>
> Resctrl's domain online/offline calls now need to take the
> rdtgroup_mutex themselves.
>
> Tested-by: Shaopeng Tan <tan.shaopeng@fujitsu.com>
> Signed-off-by: James Morse <james.morse@arm.com>
> ---
> Changes since v2:
> * Reworded a comment,
> * Added a lockdep assertion
> * Moved clear_closid_rmid() outside the locked region of cpu
> online/offline
>
> Changes since v3:
> * Added a header include
> ---
> arch/x86/kernel/cpu/resctrl/core.c | 38 +++++++++-----
> arch/x86/kernel/cpu/resctrl/ctrlmondata.c | 16 ++++--
> arch/x86/kernel/cpu/resctrl/monitor.c | 4 ++
> arch/x86/kernel/cpu/resctrl/pseudo_lock.c | 3 ++
> arch/x86/kernel/cpu/resctrl/rdtgroup.c | 64 ++++++++++++++++++++---
> include/linux/resctrl.h | 2 +-
> 6 files changed, 101 insertions(+), 26 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
> index edc0dd123317..f106c68a9be8 100644
> --- a/arch/x86/kernel/cpu/resctrl/core.c
> +++ b/arch/x86/kernel/cpu/resctrl/core.c
> @@ -25,8 +25,15 @@
> #include <asm/resctrl.h>
> #include "internal.h"
>
> -/* Mutex to protect rdtgroup access. */
> -DEFINE_MUTEX(rdtgroup_mutex);
> +/*
> + * rdt_domain structures are kfree()d when their last CPU goes offline,
> + * and allocated when the first CPU in a new domain comes online.
> + * The rdt_resource's domain list is updated when this happens. Readers of
> + * the domain list must either take cpus_read_lock(), or rely on an RCU
> + * read-side critical section, to avoid observing concurrent modification.
> + * All writers take this mutex:
> + */
> +static DEFINE_MUTEX(domain_list_lock);
>
> /*
> * The cached resctrl_pqr_state is strictly per CPU and can never be
> @@ -508,6 +515,8 @@ static void domain_add_cpu(int cpu, struct rdt_resource *r)
> struct rdt_domain *d;
> int err;
>
> + lockdep_assert_held(&domain_list_lock);
> +
> d = rdt_find_domain(r, id, &add_pos);
> if (IS_ERR(d)) {
> pr_warn("Couldn't find cache id for CPU %d\n", cpu);
> @@ -541,11 +550,12 @@ static void domain_add_cpu(int cpu, struct rdt_resource *r)
> return;
> }
>
> - list_add_tail(&d->list, add_pos);
> + list_add_tail_rcu(&d->list, add_pos);
>
> err = resctrl_online_domain(r, d);
> if (err) {
> - list_del(&d->list);
> + list_del_rcu(&d->list);
> + synchronize_rcu();
> domain_free(hw_dom);
> }
> }
> @@ -556,6 +566,8 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
> struct rdt_hw_domain *hw_dom;
> struct rdt_domain *d;
>
> + lockdep_assert_held(&domain_list_lock);
> +
> d = rdt_find_domain(r, id, NULL);
> if (IS_ERR_OR_NULL(d)) {
> pr_warn("Couldn't find cache id for CPU %d\n", cpu);
> @@ -566,7 +578,8 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
> cpumask_clear_cpu(cpu, &d->cpu_mask);
> if (cpumask_empty(&d->cpu_mask)) {
> resctrl_offline_domain(r, d);
> - list_del(&d->list);
> + list_del_rcu(&d->list);
> + synchronize_rcu();
>
> /*
> * rdt_domain "d" is going to be freed below, so clear
> @@ -594,30 +607,29 @@ static void clear_closid_rmid(int cpu)
> static int resctrl_arch_online_cpu(unsigned int cpu)
> {
> struct rdt_resource *r;
> - int ret;
>
> - mutex_lock(&rdtgroup_mutex);
> + mutex_lock(&domain_list_lock);
> for_each_capable_rdt_resource(r)
> domain_add_cpu(cpu, r);
> + mutex_unlock(&domain_list_lock);
> +
> clear_closid_rmid(cpu);
>
> - ret = resctrl_online_cpu(cpu);
> - mutex_unlock(&rdtgroup_mutex);
> -
> - return ret;
> + return resctrl_online_cpu(cpu);
> }
>
> static int resctrl_arch_offline_cpu(unsigned int cpu)
> {
> struct rdt_resource *r;
>
> - mutex_lock(&rdtgroup_mutex);
> resctrl_offline_cpu(cpu);
>
> + mutex_lock(&domain_list_lock);
> for_each_capable_rdt_resource(r)
> domain_remove_cpu(cpu, r);
> + mutex_unlock(&domain_list_lock);
> +
> clear_closid_rmid(cpu);
> - mutex_unlock(&rdtgroup_mutex);
>
> return 0;
> }
> diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
> index 55bad57a7bd5..b4f611359d1e 100644
> --- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
> +++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
> @@ -209,6 +209,9 @@ static int parse_line(char *line, struct resctrl_schema *s,
> struct rdt_domain *d;
> unsigned long dom_id;
>
> + /* Walking r->domains, ensure it can't race with cpuhp */
> + lockdep_assert_cpus_held();
> +
> if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP &&
> (r->rid == RDT_RESOURCE_MBA || r->rid == RDT_RESOURCE_SMBA)) {
> rdt_last_cmd_puts("Cannot pseudo-lock MBA resource\n");
> @@ -313,6 +316,9 @@ int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid)
> struct rdt_domain *d;
> u32 idx;
>
> + /* Walking r->domains, ensure it can't race with cpuhp */
> + lockdep_assert_cpus_held();
When rdtgroup_schemata_write() calls resctrl_arch_update_domains(), I
don't see that the cpus lock is held. Is it held somewhere in the path?
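For anyone tracing this: rdtgroup_schemata_write() reaches resctrl_arch_update_domains() only after rdtgroup_kn_lock_live(), which this patch teaches to take cpus_read_lock() (see the rdtgroup_kn_lock_live() hunk further down), so the assertion should hold on that path:

	rdtgroup_schemata_write()
	  rdtgroup_kn_lock_live()
	    cpus_read_lock();		/* added by this patch */
	    mutex_lock(&rdtgroup_mutex);
	  resctrl_arch_update_domains()
	    lockdep_assert_cpus_held();	/* satisfied */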
> +
> if (!zalloc_cpumask_var(&cpu_mask, GFP_KERNEL))
> return -ENOMEM;
>
> @@ -378,11 +384,9 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,
> return -EINVAL;
> buf[nbytes - 1] = '\0';
>
> - cpus_read_lock();
> rdtgrp = rdtgroup_kn_lock_live(of->kn);
> if (!rdtgrp) {
> rdtgroup_kn_unlock(of->kn);
> - cpus_read_unlock();
> return -ENOENT;
> }
> rdt_last_cmd_clear();
> @@ -444,7 +448,6 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,
> out:
> rdt_staged_configs_clear();
> rdtgroup_kn_unlock(of->kn);
> - cpus_read_unlock();
> return ret ?: nbytes;
> }
>
> @@ -464,6 +467,9 @@ static void show_doms(struct seq_file *s, struct resctrl_schema *schema, int clo
> bool sep = false;
> u32 ctrl_val;
>
> + /* Walking r->domains, ensure it can't race with cpuhp */
> + lockdep_assert_cpus_held();
> +
> seq_printf(s, "%*s:", max_name_width, schema->name);
> list_for_each_entry(dom, &r->domains, list) {
> if (sep)
> @@ -534,8 +540,8 @@ void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
> {
> int cpu;
>
> - /* When picking a CPU from cpu_mask, ensure it can't race with cpuhp */
> - lockdep_assert_held(&rdtgroup_mutex);
> + /* When picking a cpu from cpu_mask, ensure it can't race with cpuhp */
> + lockdep_assert_cpus_held();
>
> /*
> * Setup the parameters to pass to mon_event_count() to read the data.
> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
> index 471cdc4e4eae..11b5da93044d 100644
> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
> @@ -15,6 +15,7 @@
> * Software Developer Manual June 2016, volume 3, section 17.17.
> */
>
> +#include <linux/cpu.h>
> #include <linux/module.h>
> #include <linux/percpu.h>
> #include <linux/sizes.h>
> @@ -484,6 +485,9 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
>
> lockdep_assert_held(&rdtgroup_mutex);
>
> + /* Walking r->domains, ensure it can't race with cpuhp */
> + lockdep_assert_cpus_held();
> +
> idx = resctrl_arch_rmid_idx_encode(entry->closid, entry->rmid);
>
> entry->busy = 0;
> diff --git a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
> index 460421051abf..fc3ed917d173 100644
> --- a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
> +++ b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
> @@ -830,6 +830,9 @@ bool rdtgroup_pseudo_locked_in_hierarchy(struct rdt_domain *d)
> struct rdt_domain *d_i;
> bool ret = false;
>
> + /* Walking r->domains, ensure it can't race with cpuhp */
> + lockdep_assert_cpus_held();
> +
> if (!zalloc_cpumask_var(&cpu_with_psl, GFP_KERNEL))
> return true;
>
> diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> index a256a96df487..47dcf2cb76ca 100644
> --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
> @@ -35,6 +35,10 @@
> DEFINE_STATIC_KEY_FALSE(rdt_enable_key);
> DEFINE_STATIC_KEY_FALSE(rdt_mon_enable_key);
> DEFINE_STATIC_KEY_FALSE(rdt_alloc_enable_key);
> +
> +/* Mutex to protect rdtgroup access. */
> +DEFINE_MUTEX(rdtgroup_mutex);
> +
> static struct kernfs_root *rdt_root;
> struct rdtgroup rdtgroup_default;
> LIST_HEAD(rdt_all_groups);
> @@ -954,7 +958,8 @@ static int rdt_bit_usage_show(struct kernfs_open_file *of,
>
> mutex_lock(&rdtgroup_mutex);
> hw_shareable = r->cache.shareable_bits;
> - list_for_each_entry(dom, &r->domains, list) {
> + rcu_read_lock();
> + list_for_each_entry_rcu(dom, &r->domains, list) {
> if (sep)
> seq_putc(seq, ';');
> sw_shareable = 0;
> @@ -1010,8 +1015,10 @@ static int rdt_bit_usage_show(struct kernfs_open_file *of,
> }
> sep = true;
> }
> + rcu_read_unlock();
> seq_putc(seq, '\n');
> mutex_unlock(&rdtgroup_mutex);
> +
> return 0;
> }
>
> @@ -1254,6 +1261,9 @@ static bool rdtgroup_mode_test_exclusive(struct rdtgroup *rdtgrp)
> struct rdt_domain *d;
> u32 ctrl;
>
> + /* Walking r->domains, ensure it can't race with cpuhp */
> + lockdep_assert_cpus_held();
> +
> list_for_each_entry(s, &resctrl_schema_all, list) {
> r = s->res;
> if (r->rid == RDT_RESOURCE_MBA || r->rid == RDT_RESOURCE_SMBA)
> @@ -1520,6 +1530,7 @@ static int mbm_config_show(struct seq_file *s, struct rdt_resource *r, u32 evtid
> struct rdt_domain *dom;
> bool sep = false;
>
> + cpus_read_lock();
> mutex_lock(&rdtgroup_mutex);
>
> list_for_each_entry(dom, &r->domains, list) {
> @@ -1536,6 +1547,7 @@ static int mbm_config_show(struct seq_file *s, struct rdt_resource *r, u32 evtid
> seq_puts(s, "\n");
>
> mutex_unlock(&rdtgroup_mutex);
> + cpus_read_unlock();
>
> return 0;
> }
> @@ -1627,6 +1639,9 @@ static int mon_config_write(struct rdt_resource *r, char *tok, u32 evtid)
> struct rdt_domain *d;
> int ret = 0;
>
> + /* Walking r->domains, ensure it can't race with cpuhp */
> + lockdep_assert_cpus_held();
> +
> next:
> if (!tok || tok[0] == '\0')
> return 0;
> @@ -1668,6 +1683,7 @@ static ssize_t mbm_total_bytes_config_write(struct kernfs_open_file *of,
> if (nbytes == 0 || buf[nbytes - 1] != '\n')
> return -EINVAL;
>
> + cpus_read_lock();
> mutex_lock(&rdtgroup_mutex);
>
> rdt_last_cmd_clear();
> @@ -1677,6 +1693,7 @@ static ssize_t mbm_total_bytes_config_write(struct kernfs_open_file *of,
> ret = mon_config_write(r, buf, QOS_L3_MBM_TOTAL_EVENT_ID);
>
> mutex_unlock(&rdtgroup_mutex);
> + cpus_read_unlock();
>
> return ret ?: nbytes;
> }
> @@ -1692,6 +1709,7 @@ static ssize_t mbm_local_bytes_config_write(struct kernfs_open_file *of,
> if (nbytes == 0 || buf[nbytes - 1] != '\n')
> return -EINVAL;
>
> + cpus_read_lock();
> mutex_lock(&rdtgroup_mutex);
>
> rdt_last_cmd_clear();
> @@ -1701,6 +1719,7 @@ static ssize_t mbm_local_bytes_config_write(struct kernfs_open_file *of,
> ret = mon_config_write(r, buf, QOS_L3_MBM_LOCAL_EVENT_ID);
>
> mutex_unlock(&rdtgroup_mutex);
> + cpus_read_unlock();
>
> return ret ?: nbytes;
> }
> @@ -2153,6 +2172,9 @@ static int set_cache_qos_cfg(int level, bool enable)
> struct rdt_domain *d;
> int cpu;
>
> + /* Walking r->domains, ensure it can't race with cpuhp */
> + lockdep_assert_cpus_held();
> +
> if (level == RDT_RESOURCE_L3)
> update = l3_qos_cfg_update;
> else if (level == RDT_RESOURCE_L2)
> @@ -2360,6 +2382,7 @@ struct rdtgroup *rdtgroup_kn_lock_live(struct kernfs_node *kn)
>
> rdtgroup_kn_get(rdtgrp, kn);
>
> + cpus_read_lock();
> mutex_lock(&rdtgroup_mutex);
>
> /* Was this group deleted while we waited? */
> @@ -2377,6 +2400,8 @@ void rdtgroup_kn_unlock(struct kernfs_node *kn)
> return;
>
> mutex_unlock(&rdtgroup_mutex);
> + cpus_read_unlock();
> +
> rdtgroup_kn_put(rdtgrp, kn);
> }
>
> @@ -2664,6 +2689,9 @@ static int reset_all_ctrls(struct rdt_resource *r)
> struct rdt_domain *d;
> int i;
>
> + /* Walking r->domains, ensure it can't race with cpuhp */
> + lockdep_assert_cpus_held();
> +
> if (!zalloc_cpumask_var(&cpu_mask, GFP_KERNEL))
> return -ENOMEM;
>
> @@ -2948,6 +2976,9 @@ static int mkdir_mondata_subdir_alldom(struct kernfs_node *parent_kn,
> struct rdt_domain *dom;
> int ret;
>
> + /* Walking r->domains, ensure it can't race with cpuhp */
> + lockdep_assert_cpus_held();
> +
> list_for_each_entry(dom, &r->domains, list) {
> ret = mkdir_mondata_subdir(parent_kn, dom, r, prgrp);
> if (ret)
> @@ -3766,7 +3797,8 @@ static void domain_destroy_mon_state(struct rdt_domain *d)
> kfree(d->mbm_local);
> }
>
> -void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d)
> +static void _resctrl_offline_domain(struct rdt_resource *r,
> + struct rdt_domain *d)
> {
> lockdep_assert_held(&rdtgroup_mutex);
>
> @@ -3801,6 +3833,13 @@ void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d)
> domain_destroy_mon_state(d);
> }
>
> +void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d)
> +{
> + mutex_lock(&rdtgroup_mutex);
> + _resctrl_offline_domain(r, d);
> + mutex_unlock(&rdtgroup_mutex);
> +}
> +
> static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_domain *d)
> {
> u32 idx_limit = resctrl_arch_system_num_rmid_idx();
> @@ -3832,7 +3871,7 @@ static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_domain *d)
> return 0;
> }
>
> -int resctrl_online_domain(struct rdt_resource *r, struct rdt_domain *d)
> +static int _resctrl_online_domain(struct rdt_resource *r, struct rdt_domain *d)
> {
> int err;
>
> @@ -3870,12 +3909,23 @@ int resctrl_online_domain(struct rdt_resource *r, struct rdt_domain *d)
> return 0;
> }
>
> +int resctrl_online_domain(struct rdt_resource *r, struct rdt_domain *d)
> +{
> + int err;
> +
> + mutex_lock(&rdtgroup_mutex);
> + err = _resctrl_online_domain(r, d);
> + mutex_unlock(&rdtgroup_mutex);
> +
> + return err;
> +}
> +
> int resctrl_online_cpu(unsigned int cpu)
> {
> - lockdep_assert_held(&rdtgroup_mutex);
> -
> + mutex_lock(&rdtgroup_mutex);
> /* The CPU is set in default rdtgroup after online. */
> cpumask_set_cpu(cpu, &rdtgroup_default.cpu_mask);
> + mutex_unlock(&rdtgroup_mutex);
>
> return 0;
> }
> @@ -3896,8 +3946,7 @@ void resctrl_offline_cpu(unsigned int cpu)
> struct rdtgroup *rdtgrp;
> struct rdt_resource *l3 = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
>
> - lockdep_assert_held(&rdtgroup_mutex);
> -
> + mutex_lock(&rdtgroup_mutex);
> list_for_each_entry(rdtgrp, &rdt_all_groups, rdtgroup_list) {
> if (cpumask_test_and_clear_cpu(cpu, &rdtgrp->cpu_mask)) {
> clear_childcpus(rdtgrp, cpu);
> @@ -3917,6 +3966,7 @@ void resctrl_offline_cpu(unsigned int cpu)
> cqm_setup_limbo_handler(d, 0, cpu);
> }
> }
> + mutex_unlock(&rdtgroup_mutex);
> }
>
> /*
> diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
> index f3ef3ceb9c5e..2bbdd3d591ea 100644
> --- a/include/linux/resctrl.h
> +++ b/include/linux/resctrl.h
> @@ -159,7 +159,7 @@ struct resctrl_schema;
> * @cache_level: Which cache level defines scope of this resource
> * @cache: Cache allocation related data
> * @membw: If the component has bandwidth controls, their properties.
> - * @domains: All domains for this resource
> + * @domains: RCU list of all domains for this resource
> * @name: Name to use in "schemata" file.
> * @data_width: Character width of data when displaying
> * @default_ctrl: Specifies default cache cbm or memory B/W percent.
Thanks.
-Fenghua
* Re: [PATCH v5 24/24] x86/resctrl: Separate arch and fs resctrl locks
2023-08-18 22:05 ` Fenghua Yu
@ 2023-08-24 16:58 ` James Morse
0 siblings, 0 replies; 77+ messages in thread
From: James Morse @ 2023-08-24 16:58 UTC (permalink / raw)
To: Fenghua Yu, x86, linux-kernel
Cc: Reinette Chatre, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
H Peter Anvin, Babu Moger, shameerali.kolothum.thodi,
D Scott Phillips OS, carl, lcherian, bobo.shaobowang,
tan.shaopeng, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hi Fenghua,
On 18/08/2023 23:05, Fenghua Yu wrote:
> On 7/28/23 09:42, James Morse wrote:
>> resctrl has one mutex that is taken by the architecture specific code,
>> and the filesystem parts. The two interact via cpuhp, where the
>> architecture code updates the domain list. Filesystem handlers that
>> walk the domains list should not run concurrently with the cpuhp
>> callback modifying the list.
>>
>> Exposing a lock from the filesystem code means the interface is not
>> cleanly defined, and creates the possibility of cross-architecture
>> lock ordering headaches. The interaction only exists so that certain
>> filesystem paths are serialised against cpu hotplug. The cpu hotplug
>> code already has a mechanism to do this using cpus_read_lock().
>>
>> MPAM's monitors have an overflow interrupt, so it needs to be possible
>> to walk the domains list in irq context. RCU is ideal for this,
>> but some paths need to be able to sleep to allocate memory.
>>
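For illustration, the two kinds of read-side walker this leads to look roughly
like this (a sketch of the pattern only, mirroring the hunks quoted earlier in
the thread, not verbatim kernel code):

	/* irq context: can't sleep, walk the domain list under RCU */
	rcu_read_lock();
	list_for_each_entry_rcu(d, &r->domains, list) {
		/* read-only access to d, no sleeping here */
	}
	rcu_read_unlock();

	/* process context: may allocate, pin the list with the hotplug lock */
	cpus_read_lock();
	list_for_each_entry(d, &r->domains, list) {
		/* may sleep; cpuhp can't modify the list meanwhile */
	}
	cpus_read_unlock();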
>> Because resctrl_{on,off}line_cpu() take the rdtgroup_mutex as part
>> of a cpuhp callback, cpus_read_lock() must always be taken first.
>> rdtgroup_schemata_write() already does this.
>>
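The resulting lock nesting, sketched:

	/* cpuhp callback: runs with the hotplug lock already held */
	resctrl_online_cpu()
		mutex_lock(&rdtgroup_mutex);	/* inner lock */

	/* so every filesystem path must nest the locks the same way */
	cpus_read_lock();			/* outer: holds off cpuhp */
	mutex_lock(&rdtgroup_mutex);		/* inner */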
>> Most of the filesystem code's domain list walkers are currently
>> protected by the rdtgroup_mutex taken in rdtgroup_kn_lock_live().
>> The exceptions are rdt_bit_usage_show() and the mon_config helpers
>> which take the lock directly.
>>
>> Make the domain list protected by RCU. An architecture-specific
>> lock prevents concurrent writers. rdt_bit_usage_show() can
>> walk the domain list under rcu_read_lock(). The mon_config helpers
>> send multiple IPIs, take the cpus_read_lock() in these cases.
>>
>> The other filesystem list walkers need to be able to sleep.
>> Add cpus_read_lock() to rdtgroup_kn_lock_live() so that the
>> cpuhp callbacks can't be invoked when file system operations are
>> occurring.
>>
>> Add lockdep_assert_cpus_held() in the cases where the
>> rdtgroup_kn_lock_live() call isn't obvious.
>>
>> Resctrl's domain online/offline calls now need to take the
>> rdtgroup_mutex themselves.
>> diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
>> b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
>> index 55bad57a7bd5..b4f611359d1e 100644
>> --- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
>> +++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
>> @@ -313,6 +316,9 @@ int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid)
>> struct rdt_domain *d;
>> u32 idx;
>> + /* Walking r->domains, ensure it can't race with cpuhp */
>> + lockdep_assert_cpus_held();
>
> When rdtgroup_schemata_write() calls resctrl_arch_update_domains(), I don't see the cpus lock
> being held. Is it held somewhere in this path?
Good question: as with most of the filesystem accesses, this is covered by
rdtgroup_kn_lock_live(), which now calls cpus_read_lock().
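Sketched from the hunks quoted in this thread, the path in question is:

	rdtgroup_schemata_write()
	    rdtgroup_kn_lock_live()
	        cpus_read_lock()                /* added by this patch */
	        mutex_lock(&rdtgroup_mutex)
	    resctrl_arch_update_domains()
	        lockdep_assert_cpus_held()      /* now satisfied */
	    rdtgroup_kn_unlock()
	        mutex_unlock(&rdtgroup_mutex)
	        cpus_read_unlock()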
Thanks,
James
* RE: [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
` (23 preceding siblings ...)
2023-07-28 16:42 ` [PATCH v5 24/24] x86/resctrl: Separate arch and fs resctrl locks James Morse
@ 2023-08-03 7:34 ` Shaopeng Tan (Fujitsu)
2023-08-24 16:58 ` James Morse
2023-08-22 8:42 ` Peter Newman
25 siblings, 1 reply; 77+ messages in thread
From: Shaopeng Tan (Fujitsu) @ 2023-08-03 7:34 UTC (permalink / raw)
To: 'James Morse', x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hello James,
I reviewed this patch series (v5) and it looks fine.
I ran the resctrl selftests on an Intel(R) Xeon(R) Gold 6254 CPU with nohz_full enabled/disabled, and there were no problems.
<reviewed-by:tan.shaopeng@jp.fujitsu.com>
<tested-by:tan.shaopeng@jp.fujitsu.com>
* Re: [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking
2023-08-03 7:34 ` [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking Shaopeng Tan (Fujitsu)
@ 2023-08-24 16:58 ` James Morse
0 siblings, 0 replies; 77+ messages in thread
From: James Morse @ 2023-08-24 16:58 UTC (permalink / raw)
To: Shaopeng Tan (Fujitsu), x86, linux-kernel
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H Peter Anvin, Babu Moger,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, xingxin.hx, baolin.wang, Jamie Iles, Xin Hao,
peternewman, dfustini
Hello!
On 03/08/2023 08:34, Shaopeng Tan (Fujitsu) wrote:
> Hello James,
>
> I reviewed this patch series (v5) and it looks fine.
> I ran the resctrl selftests on an Intel(R) Xeon(R) Gold 6254 CPU with nohz_full enabled/disabled, and there were no problems.
>
> <reviewed-by:tan.shaopeng@jp.fujitsu.com>
> <tested-by:tan.shaopeng@jp.fujitsu.com>
Thanks! - I've added these to all the patches in the series.
James
* Re: [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking
2023-07-28 16:42 [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking James Morse
` (24 preceding siblings ...)
2023-08-03 7:34 ` [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking Shaopeng Tan (Fujitsu)
@ 2023-08-22 8:42 ` Peter Newman
2023-08-24 16:58 ` James Morse
25 siblings, 1 reply; 77+ messages in thread
From: Peter Newman @ 2023-08-22 8:42 UTC (permalink / raw)
To: James Morse
Cc: x86, linux-kernel, Fenghua Yu, Reinette Chatre, Thomas Gleixner,
Ingo Molnar, Borislav Petkov, H Peter Anvin, Babu Moger,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, dfustini
Hi James,
On Fri, Jul 28, 2023 at 6:43 PM James Morse <james.morse@arm.com> wrote:
> James Morse (24):
> x86/resctrl: Track the closid with the rmid
> x86/resctrl: Access per-rmid structures by index
> x86/resctrl: Create helper for RMID allocation and mondata dir
> creation
> x86/resctrl: Move rmid allocation out of mkdir_rdt_prepare()
> x86/resctrl: Allow RMID allocation to be scoped by CLOSID
> x86/resctrl: Track the number of dirty RMID a CLOSID has
> x86/resctrl: Use set_bit()/clear_bit() instead of open coding
> x86/resctrl: Allocate the cleanest CLOSID by searching
> closid_num_dirty_rmid
> x86/resctrl: Move CLOSID/RMID matching and setting to use helpers
> tick/nohz: Move tick_nohz_full_mask declaration outside the #ifdef
> x86/resctrl: Add cpumask_any_housekeeping() for limbo/overflow
> x86/resctrl: Make resctrl_arch_rmid_read() retry when it is
> interrupted
> x86/resctrl: Queue mon_event_read() instead of sending an IPI
> x86/resctrl: Allow resctrl_arch_rmid_read() to sleep
> x86/resctrl: Allow arch to allocate memory needed in
> resctrl_arch_rmid_read()
> x86/resctrl: Make resctrl_mounted checks explicit
> x86/resctrl: Move alloc/mon static keys into helpers
> x86/resctrl: Make rdt_enable_key the arch's decision to switch
> x86/resctrl: Add helpers for system wide mon/alloc capable
> x86/resctrl: Add cpu online callback for resctrl work
> x86/resctrl: Allow overflow/limbo handlers to be scheduled on any-but
> cpu
> x86/resctrl: Add cpu offline callback for resctrl work
> x86/resctrl: Move domain helper migration into resctrl_offline_cpu()
> x86/resctrl: Separate arch and fs resctrl locks
>
> arch/x86/include/asm/resctrl.h | 90 +++++
> arch/x86/kernel/cpu/resctrl/core.c | 78 ++--
> arch/x86/kernel/cpu/resctrl/ctrlmondata.c | 45 ++-
> arch/x86/kernel/cpu/resctrl/internal.h | 82 +++-
> arch/x86/kernel/cpu/resctrl/monitor.c | 440 ++++++++++++++++------
> arch/x86/kernel/cpu/resctrl/pseudo_lock.c | 15 +-
> arch/x86/kernel/cpu/resctrl/rdtgroup.c | 340 ++++++++++++-----
> include/linux/resctrl.h | 43 ++-
> include/linux/tick.h | 9 +-
> 9 files changed, 870 insertions(+), 272 deletions(-)
I ran this series successfully against our internal test suites on the
following processors:
- Intel(R) Xeon(R) Platinum 8173M CPU @ 2.00GHz
- AMD EPYC 7B12 64-Core Processor
Tested-By: Peter Newman <peternewman@google.com>
Thanks!
-Peter
* Re: [PATCH v5 00/24] x86/resctrl: monitored closid+rmid together, separate arch/fs locking
2023-08-22 8:42 ` Peter Newman
@ 2023-08-24 16:58 ` James Morse
0 siblings, 0 replies; 77+ messages in thread
From: James Morse @ 2023-08-24 16:58 UTC (permalink / raw)
To: Peter Newman
Cc: x86, linux-kernel, Fenghua Yu, Reinette Chatre, Thomas Gleixner,
Ingo Molnar, Borislav Petkov, H Peter Anvin, Babu Moger,
shameerali.kolothum.thodi, D Scott Phillips OS, carl, lcherian,
bobo.shaobowang, tan.shaopeng, xingxin.hx, baolin.wang,
Jamie Iles, Xin Hao, dfustini
Hi Peter,
On 22/08/2023 09:42, Peter Newman wrote:
> I ran this series successfully against our internal test suites on the
> following processors:
>
> - Intel(R) Xeon(R) Platinum 8173M CPU @ 2.00GHz
> - AMD EPYC 7B12 64-Core Processor
>
> Tested-By: Peter Newman <peternewman@google.com>
Great, Thanks!
James