linux-kernel.vger.kernel.org archive mirror
* [RFC] why can't dynamic isolation just like the static way
@ 2020-02-11  8:17 王贇
  2020-02-11 11:43 ` Peter Zijlstra
  2020-02-11 13:54 ` Steven Rostedt
  0 siblings, 2 replies; 6+ messages in thread
From: 王贇 @ 2020-02-11  8:17 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	open list:SCHEDULER

Hi, folks

We have been working with isolcpus lately and are trying to do the
isolation dynamically.

The kernel documentation led us to cpuset.sched_load_balance; dynamic
isolation can be achieved with it, but we ran into a problem with
systemd.

systemd keeps creating cgroups with sched_load_balance enabled by
default, and when their cpus overlap the isolated ones this triggers a
sched-domain rebuild and those cpus become non-isolated.
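For reference, the recipe we are using looks roughly like this (a
sketch only; it assumes a cgroup v1 cpuset hierarchy mounted at
/sys/fs/cgroup/cpuset, and the cpu list 2-3 and the "isolated"
directory name are illustrative, run as root):

```shell
# Sketch: dynamic isolation via cgroup v1 cpuset (paths, cpu list and
# the "isolated" name are illustrative; must be run as root).
CPUSET=/sys/fs/cgroup/cpuset

mkdir -p "$CPUSET/isolated"
echo 2-3 > "$CPUSET/isolated/cpuset.cpus"
echo 0   > "$CPUSET/isolated/cpuset.mems"

# Detach cpus 2-3 from load balancing: both this cpuset and the root
# cpuset need sched_load_balance=0, otherwise the root sched domain
# still spans the "isolated" cpus.
echo 0 > "$CPUSET/isolated/cpuset.sched_load_balance"
echo 0 > "$CPUSET/cpuset.sched_load_balance"
```

Any overlapping cpuset that systemd later creates with
sched_load_balance=1 undoes exactly this domain split.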

We are just looking for an easy way to dynamically isolate some cpus,
much like the isolcpus boot parameter, but sched_load_balance forces us
to deal with cgroup management, and we really don't see the point of
that...

Why does isolation have to be mixed with cgroups? Why not just provide
a proc entry that reads a cpumask and rebuilds the domains?
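Such an entry would essentially just parse a cpulist and rebuild the
domains. As a toy sketch of the parsing half (the proc entry itself is
hypothetical and does not exist), expanding a cpulist string of the
kind isolcpus= already accepts:

```shell
# Toy sketch: expand a cpulist string ("2-3,5") into individual cpu
# numbers, the way isolcpus= or such a proc entry would have to.
expand_cpulist() {
    echo "$1" | tr ',' '\n' | while IFS=- read lo hi; do
        seq "$lo" "${hi:-$lo}"
    done | tr '\n' ' '
}

expand_cpulist "2-3,5"   # prints: 2 3 5
```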

Please let us know if there is a good reason for dynamic isolation to
work this way; thanks in advance :-)

Regards,
Michael Wang


* Re: [RFC] why can't dynamic isolation just like the static way
  2020-02-11  8:17 [RFC] why can't dynamic isolation just like the static way 王贇
@ 2020-02-11 11:43 ` Peter Zijlstra
  2020-02-11 14:00   ` Steven Rostedt
  2020-02-12  1:30   ` 王贇
  2020-02-11 13:54 ` Steven Rostedt
  1 sibling, 2 replies; 6+ messages in thread
From: Peter Zijlstra @ 2020-02-11 11:43 UTC (permalink / raw)
  To: 王贇
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, open list:SCHEDULER

On Tue, Feb 11, 2020 at 04:17:34PM +0800, 王贇 wrote:
> Hi, folks
> 
> We have been working with isolcpus lately and are trying to do the
> isolation dynamically.
> 
> The kernel documentation led us to cpuset.sched_load_balance; dynamic
> isolation can be achieved with it, but we ran into a problem with
> systemd.

Then don't use systemd :-) Also, if systemd is the problem, why are you
bugging us?


* Re: [RFC] why can't dynamic isolation just like the static way
  2020-02-11  8:17 [RFC] why can't dynamic isolation just like the static way 王贇
  2020-02-11 11:43 ` Peter Zijlstra
@ 2020-02-11 13:54 ` Steven Rostedt
  1 sibling, 0 replies; 6+ messages in thread
From: Steven Rostedt @ 2020-02-11 13:54 UTC (permalink / raw)
  To: 王贇
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Ben Segall, Mel Gorman, open list:SCHEDULER,
	Tejun Heo, Li Zefan, Johannes Weiner, cgroups


You forgot to include the cgroup maintainers.

-- Steve


On Tue, 11 Feb 2020 16:17:34 +0800
王贇 <yun.wang@linux.alibaba.com> wrote:

> Hi, folks
> 
> We have been working with isolcpus lately and are trying to do the
> isolation dynamically.
> 
> The kernel documentation led us to cpuset.sched_load_balance; dynamic
> isolation can be achieved with it, but we ran into a problem with
> systemd.
> 
> systemd keeps creating cgroups with sched_load_balance enabled by
> default, and when their cpus overlap the isolated ones this triggers a
> sched-domain rebuild and those cpus become non-isolated.
> 
> We are just looking for an easy way to dynamically isolate some cpus,
> much like the isolcpus boot parameter, but sched_load_balance forces us
> to deal with cgroup management, and we really don't see the point of
> that...
> 
> Why does isolation have to be mixed with cgroups? Why not just provide
> a proc entry that reads a cpumask and rebuilds the domains?
> 
> Please let us know if there is a good reason for dynamic isolation to
> work this way; thanks in advance :-)
> 
> Regards,
> Michael Wang



* Re: [RFC] why can't dynamic isolation just like the static way
  2020-02-11 11:43 ` Peter Zijlstra
@ 2020-02-11 14:00   ` Steven Rostedt
  2020-02-12  1:35     ` 王贇
  2020-02-12  1:30   ` 王贇
  1 sibling, 1 reply; 6+ messages in thread
From: Steven Rostedt @ 2020-02-11 14:00 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: 王贇,
	Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Ben Segall, Mel Gorman, open list:SCHEDULER

On Tue, 11 Feb 2020 12:43:50 +0100
Peter Zijlstra <peterz@infradead.org> wrote:

> On Tue, Feb 11, 2020 at 04:17:34PM +0800, 王贇 wrote:
> > Hi, folks
> > 
> > We have been working with isolcpus lately and are trying to do the
> > isolation dynamically.
> > 
> > The kernel documentation led us to cpuset.sched_load_balance; dynamic
> > isolation can be achieved with it, but we ran into a problem with
> > systemd.
> 
> Then don't use systemd :-) Also, if systemd is the problem, why are you
> bugging us?

[ Background. Peter is someone that doesn't even use systemd. ;-) ]

-- Steve


* Re: [RFC] why can't dynamic isolation just like the static way
  2020-02-11 11:43 ` Peter Zijlstra
  2020-02-11 14:00   ` Steven Rostedt
@ 2020-02-12  1:30   ` 王贇
  1 sibling, 0 replies; 6+ messages in thread
From: 王贇 @ 2020-02-12  1:30 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, open list:SCHEDULER



On 2020/2/11 7:43 PM, Peter Zijlstra wrote:
> On Tue, Feb 11, 2020 at 04:17:34PM +0800, 王贇 wrote:
>> Hi, folks
>>
>> We have been working with isolcpus lately and are trying to do the
>> isolation dynamically.
>>
>> The kernel documentation led us to cpuset.sched_load_balance; dynamic
>> isolation can be achieved with it, but we ran into a problem with
>> systemd.
> 
> Then don't use systemd :-) Also, if systemd is the problem, why are you
> bugging us?

Well, that's... fair enough :-P

What we are trying to understand is why dynamic isolation is so
different from static isolation; wouldn't it be better to have a simple
way instead?

Regards,
Michael Wang

> 


* Re: [RFC] why can't dynamic isolation just like the static way
  2020-02-11 14:00   ` Steven Rostedt
@ 2020-02-12  1:35     ` 王贇
  0 siblings, 0 replies; 6+ messages in thread
From: 王贇 @ 2020-02-12  1:35 UTC (permalink / raw)
  To: Steven Rostedt, Peter Zijlstra
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Ben Segall, Mel Gorman, open list:SCHEDULER



On 2020/2/11 10:00 PM, Steven Rostedt wrote:
> On Tue, 11 Feb 2020 12:43:50 +0100
> Peter Zijlstra <peterz@infradead.org> wrote:
> 
>> On Tue, Feb 11, 2020 at 04:17:34PM +0800, 王贇 wrote:
>>> Hi, folks
>>>
>>> We have been working with isolcpus lately and are trying to do the
>>> isolation dynamically.
>>>
>>> The kernel documentation led us to cpuset.sched_load_balance; dynamic
>>> isolation can be achieved with it, but we ran into a problem with
>>> systemd.
>>
>> Then don't use systemd :-) Also, if systemd is the problem, why are you
>> bugging us?
> 
> [ Background. Peter is someone that doesn't even use systemd. ;-) ]

I would be happy to get rid of it too ;-) but it seems to be becoming
popular as the basic init system, and I guess they have no idea that
they are breaking dynamic isolation.

Regards,
Michael Wang

> 
> -- Steve
> 

