From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	linux-kernel@vger.kernel.org,
	Tvrtko Ursulin <tvrtko.ursulin@intel.com>,
	Ingo Molnar <mingo@redhat.com>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>
Subject: Re: [RFC 1/6] sched: Add nice value change notifier
Date: Fri, 1 Oct 2021 10:04:53 +0100	[thread overview]
Message-ID: <4aca656d-678f-4d61-38a4-d2e7a8fd89ab@linux.intel.com> (raw)
In-Reply-To: <20210930183316.GC4323@worktop.programming.kicks-ass.net>


Hi Peter,

On 30/09/2021 19:33, Peter Zijlstra wrote:
> On Thu, Sep 30, 2021 at 06:15:47PM +0100, Tvrtko Ursulin wrote:
>>   void set_user_nice(struct task_struct *p, long nice)
>>   {
>>   	bool queued, running;
>> -	int old_prio;
>> +	int old_prio, ret;
>>   	struct rq_flags rf;
>>   	struct rq *rq;
>>   
>> @@ -6913,6 +6945,9 @@ void set_user_nice(struct task_struct *p, long nice)
>>   	 */
>>   	p->sched_class->prio_changed(rq, p, old_prio);
>>   
>> +	ret = atomic_notifier_call_chain(&user_nice_notifier_list, nice, p);
>> +	WARN_ON_ONCE(ret != NOTIFY_DONE);
>> +
>>   out_unlock:
>>   	task_rq_unlock(rq, p, &rf);
>>   }
> 
> No, we're not going to call out to exported, and potentially unbounded,
> functions under scheduler locks.

Agreed, that's another good point about why this is even more hairy, as I 
generally alluded to in the cover letter.

Do you have any immediate thoughts on possible alternatives?

For instance, what if I did a queue_work from set_user_nice and then ran 
the notifier chain asynchronously from a worker? I haven't yet looked at 
what repercussions that would have in terms of having to cancel pending 
workers when tasks exit. I can try to prototype it and see how it looks.

There is of course the example of ioprio, which handles runtime 
adjustments via a dedicated system call (a rough userspace sketch of that 
interface is below). But I don't currently feel that adding a third such 
syscall would be a good solution. At least I don't see a case for being 
able to decouple the priority of CPU and GPU computations.
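
For reference, the ioprio route looks roughly like this from userspace 
(sketch only; glibc has no wrapper so the raw syscall is used, and the 
constants are copied by hand from the kernel's ioprio.h):

#include <sys/syscall.h>
#include <unistd.h>

#define IOPRIO_WHO_PROCESS	1
#define IOPRIO_CLASS_BE		2
#define IOPRIO_CLASS_SHIFT	13
#define IOPRIO_PRIO_VALUE(class, data) \
	(((class) << IOPRIO_CLASS_SHIFT) | (data))

/* Best-effort class, level 0..7, lower means higher priority. */
static int set_be_ioprio(pid_t pid, int level)
{
	return syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, pid,
		       IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, level));
}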

Have I opened a large can of worms? :)

Regards,

Tvrtko

