From: Peter Newman <peternewman@google.com>
To: reinette.chatre@intel.com
Cc: bp@alien8.de, derkling@google.com, eranian@google.com,
	fenghua.yu@intel.com, hpa@zytor.com, james.morse@arm.com,
	jannh@google.com, kpsingh@google.com,
	linux-kernel@vger.kernel.org, mingo@redhat.com,
	peternewman@google.com, tglx@linutronix.de, x86@kernel.org
Subject: Re: [PATCH v4 1/2] x86/resctrl: Update task closid/rmid with task_call_func()
Date: Mon, 12 Dec 2022 18:36:38 +0100	[thread overview]
Message-ID: <20221212173638.1858573-1-peternewman@google.com> (raw)
In-Reply-To: <cdcfcd64-c76f-0d2d-6653-0229c956f2bc@intel.com>

Hi Reinette,

On Sat, Dec 10, 2022 at 12:54 AM Reinette Chatre <reinette.chatre@intel.com> wrote:
> On 12/8/2022 2:30 PM, Peter Newman wrote:
> > Based on this, I'll just sketch out the first scenario below and drop
> > (2) from the changelog. This also implies that the group update cases
>
> ok, thank you for doing that analysis.
>
> > can use a single smp_mb() to provide all the necessary ordering, because
> > there's a full barrier on context switch for it to pair with, so I don't
> > need to broadcast IPI anymore.  I don't know whether task_call_func() is
>
> This is not clear to me because rdt_move_group_tasks() seems to have the
> same code as shown below as vulnerable to re-ordering. Only difference
> is that it uses the "//false" checks to set a bit in the cpumask for a
> later IPI instead of an immediate IPI.

An smp_mb() between writing the new task_struct::{closid,rmid} and
calling task_curr() would prevent the reordering I described, but I
was worried about the cost of executing a full barrier for every
matching task.

To avoid paying a full barrier per matching task, I tried splitting it into
two passes, like this:

/* First pass: move all matching tasks to the new group. */
for_each_process_thread(p, t) {
	if (!from || is_closid_match(t, from) ||
	    is_rmid_match(t, from)) {
		WRITE_ONCE(t->closid, to->closid);
		WRITE_ONCE(t->rmid, to->mon.rmid);
		/* Group moves are serialized by rdtgroup_mutex. */
		t->resctrl_dirty = true;
	}
}
if (IS_ENABLED(CONFIG_SMP) && mask) {
	/* Order t->{closid,rmid} stores before loads in task_curr() */
	smp_mb();
	/* Second pass: collect the CPUs of any tasks that are running. */
	for_each_process_thread(p, t) {
		if (t->resctrl_dirty) {
			if (task_curr(t))
				cpumask_set_cpu(task_cpu(t), mask);
			t->resctrl_dirty = false;
		}
	}
}

I repeated the `perf bench sched messaging -g 40 -l100000` benchmark
from before[1] to compare this with the baseline, and found that it
only increased the time to delete the benchmark's group from 1.65ms to
1.66ms, so it looks like a viable alternative to what I last posted.

I could do something similar in the single-task move, but I don't think
it would make much of a performance difference there. It's only a win
for the group move because the synchronization cost doesn't grow with
the group size.
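
For reference, the single-task version would look roughly like this (just a
sketch, untested; the function and IPI callback names below are placeholders
I made up, not existing resctrl code):

/*
 * Sketch only: same store -> smp_mb() -> task_curr() pattern for the
 * single-task move.
 */
static void resctrl_sync_cpu(void *unused)
{
	/* Reload the current task's CLOSID/RMID on this CPU. */
	resctrl_sched_in();
}

static void move_one_task(struct task_struct *tsk, struct rdtgroup *rdtgrp)
{
	WRITE_ONCE(tsk->closid, rdtgrp->closid);
	WRITE_ONCE(tsk->rmid, rdtgrp->mon.rmid);

	/* Order the stores above before the task_curr() load below. */
	smp_mb();

	/*
	 * If the task is running now, interrupt its CPU so the new values
	 * take effect immediately; otherwise the next context switch will
	 * pick them up.
	 */
	if (task_curr(tsk))
		smp_call_function_single(task_cpu(tsk), resctrl_sync_cpu,
					 NULL, 1);
}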

[1] https://lore.kernel.org/lkml/20221129111055.953833-3-peternewman@google.com/


>
> > faster than an smp_mb(). I'll take some measurements to see.
> >
> > The presumed behavior is __rdtgroup_move_task() not seeing t1 running
> > yet implies that it observes the updated values:
> >
> > CPU 0                                   CPU 1
> > -----                                   -----
> > (t1->{closid,rmid} -> {1,1})            (rq->curr -> t0)
> >
> > __rdtgroup_move_task():
> >   t1->{closid,rmid} <- {2,2}
> >   curr <- t1->cpu->rq->curr
> >                                         __schedule():
> >                                           rq->curr <- t1
> >                                         resctrl_sched_in():
> >                                           t1->{closid,rmid} -> {2,2}
> >   if (curr == t1) // false
> >     IPI(t1->cpu)
>
> I understand that the test is false when it may be expected to be true, but
> there does not seem to be a problem because of that. t1 was scheduled in with
> the correct CLOSID/RMID and its CPU did not get an unnecessary IPI.

Yes, this one was just meant to remind the reader of the correct behavior.
I can leave it out.

-Peter

Thread overview: 13+ messages
2022-11-29 11:10 [PATCH v4 0/2] x86/resctrl: Fix task CLOSID update race Peter Newman
2022-11-29 11:10 ` [PATCH v4 1/2] x86/resctrl: Update task closid/rmid with task_call_func() Peter Newman
2022-12-06 18:56   ` Reinette Chatre
2022-12-07 10:58     ` Peter Newman
2022-12-07 18:38       ` Reinette Chatre
2022-12-08 22:30         ` Peter Newman
2022-12-09 23:54           ` Reinette Chatre
2022-12-12 17:36             ` Peter Newman [this message]
2022-12-13 18:33               ` Reinette Chatre
2022-12-14 10:05                 ` Peter Newman
2022-11-29 11:10 ` [PATCH v4 2/2] x86/resctrl: IPI all online CPUs for group updates Peter Newman
2022-12-06 18:57   ` Reinette Chatre
2022-12-07 11:04     ` Peter Newman
