From: Reinette Chatre <reinette.chatre@intel.com>
To: Borislav Petkov <bp@alien8.de>
Cc: tglx@linutronix.de, fenghua.yu@intel.com, tony.luck@intel.com,
kuo-lang.tseng@intel.com, mingo@redhat.com, hpa@zytor.com,
x86@kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH V2 09/10] x86/resctrl: Pseudo-lock portions of multiple resources
Date: Thu, 8 Aug 2019 13:13:46 -0700 [thread overview]
Message-ID: <a69981df-80f5-d8cf-e118-2ee639dcdb77@intel.com> (raw)
In-Reply-To: <20190808084416.GC20745@zn.tnic>
Hi Borislav,
On 8/8/2019 1:44 AM, Borislav Petkov wrote:
> On Wed, Aug 07, 2019 at 12:23:29PM -0700, Reinette Chatre wrote:
>> I do not fully understand this proposal. All those goto labels take care
>> of the different failures that can be encountered during the
>> initialization of the pseudo-lock region. Each initialization failure is
>> associated with a goto that jumps to the cleanup path. The
>> initialization starts with the constraining of the C-states
>> (initializing plr->pm_reqs), but if I move that I think it will not
>> reduce the goto labels, just change their order because of the other
>> initialization done (plr->size, plr->line_size, plr->cpu).
>
> Here's one possible way to do it, pasting the whole function here as it
> is easier to read it this way than an incremental diff ontop.
>
> You basically cache all attributes in local variables and assign them to
> the plr struct only on success, at the end. This way, no goto labels and
> the C-states constraining, i.e., the most expensive operation, happens
> last, only after all the other simpler checks have succeeded. And you
> don't have to call pseudo_lock_cstates_relax() prematurely, when one of
> those easier checks fail.
>
> Makes sense?
It does. This looks much better. Thank you very much.
>
> Btw, I've marked the cpu_online() check with "CPU hotplug
> lock?!?" question because I don't see you holding that lock with
> get_online_cpus()/put_online_cpus().
There is a locking order dependency between cpu_hotplug_lock and
rdtgroup_mutex (cpu_hotplug_lock must be taken before rdtgroup_mutex)
that has to be maintained. To do so in this flow, cpus_read_lock() is
called in rdtgroup_schemata_write(), so quite a distance from where the
lock is actually needed. Perhaps I should add a comment at the location
where the lock is required to document where it is obtained?
> static int pseudo_lock_l2_l3_portions_valid(struct pseudo_lock_region *plr,
> 					    struct pseudo_lock_portion *l2_p,
> 					    struct pseudo_lock_portion *l3_p)
> {
> 	unsigned int l2_size, l3_size, size, line_size, cpu;
> 	struct rdt_domain *l2_d, *l3_d;
>
> 	l2_d = rdt_find_domain(l2_p->r, l2_p->d_id, NULL);
> 	if (IS_ERR_OR_NULL(l2_d)) {
> 		rdt_last_cmd_puts("Cannot locate L2 cache domain\n");
> 		return -1;
> 	}
>
> 	l3_d = rdt_find_domain(l3_p->r, l3_p->d_id, NULL);
> 	if (IS_ERR_OR_NULL(l3_d)) {
> 		rdt_last_cmd_puts("Cannot locate L3 cache domain\n");
> 		return -1;
> 	}
>
> 	if (!cpumask_subset(&l2_d->cpu_mask, &l3_d->cpu_mask)) {
> 		rdt_last_cmd_puts("L2 and L3 caches need to be in same hierarchy\n");
> 		return -1;
> 	}
>
> 	l2_size = rdtgroup_cbm_to_size(l2_p->r, l2_d, l2_p->cbm);
> 	l3_size = rdtgroup_cbm_to_size(l3_p->r, l3_d, l3_p->cbm);
>
> 	if (l2_size > l3_size) {
> 		rdt_last_cmd_puts("L3 cache portion has to be same size or larger than L2 cache portion\n");
> 		return -1;
> 	}
>
> 	size = l2_size;
>
> 	l2_size = get_cache_line_size(cpumask_first(&l2_d->cpu_mask), l2_p->r->cache_level);
> 	l3_size = get_cache_line_size(cpumask_first(&l3_d->cpu_mask), l3_p->r->cache_level);
> 	if (l2_size != l3_size) {
> 		rdt_last_cmd_puts("L2 and L3 caches have different coherency cache line sizes\n");
> 		return -1;
> 	}
>
> 	line_size = l2_size;
>
> 	cpu = cpumask_first(&l2_d->cpu_mask);
>
> 	/*
> 	 * CPU hotplug lock?!?
> 	 */
> 	if (!cpu_online(cpu)) {
> 		rdt_last_cmd_printf("CPU %u associated with cache not online\n", cpu);
> 		return -1;
> 	}
>
> 	if (!get_cache_inclusive(cpu, l3_p->r->cache_level)) {
> 		rdt_last_cmd_puts("L3 cache not inclusive\n");
> 		return -1;
> 	}
>
> 	/*
> 	 * All checks passed, constrain C-states:
> 	 */
> 	if (pseudo_lock_cstates_constrain(plr, &l2_d->cpu_mask)) {
> 		rdt_last_cmd_puts("Cannot limit C-states\n");
> 		pseudo_lock_cstates_relax(plr);
> 		return -1;
> 	}
>
> 	plr->line_size = line_size;
> 	plr->size = size;
> 	plr->cpu = cpu;
>
> 	return 0;
> }
>
>
Thank you very much,

Reinette