* RE: 1:1 threading vs. scheduler activations (was: Re: [ANNOUNCE]  Native POSIX Thread Library 0.1)
@ 2002-09-24 16:49 Perez-Gonzalez, Inaky
From: Perez-Gonzalez, Inaky @ 2002-09-24 16:49 UTC
  To: 'Ingo Molnar', Andy Isaacson
  Cc: Larry McVoy, Peter Wächtler, Bill Davidsen, linux-kernel


> and there are some things that i'm not at all sure can be fixed in any
> reasonable way - eg. RT scheduling. [the userspace library would have to
> raise/drop the priority of threads in the userspace scheduler, causing an
> additional kernel entry/exit, eliminating even the theoretical advantage
> it had for pure user<->user context switches.]

So far, the only reasonable way I have found to put RT *scheduling* into
NGPT has been to modify the priority queues in the scheduler [using a simplified
model of your O(1) scheduler]. That gives you, at least, "real time" versus
the other threads. If you want it versus the whole system, then you can
change the attrs of the thread to be SYSTEM scope, so that they compete for
system resources against everybody else [of course, this is cheating, it is
falling back to 1:1 for the real time case].
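
For illustration, the pick-next path of that simplified O(1) model looks
more or less like this [just a sketch, not the actual NGPT code, and all
the names are made up]:

	#include <strings.h>	/* ffs() */

	#define NPRIO 32	/* priority levels, 0 = highest */

	struct uthread { struct uthread *next; int prio; /* ... */ };

	struct usched {
		unsigned int prio_bitmap;	/* bit p set => runq[p] non-empty */
		struct uthread *runq[NPRIO];	/* one FIFO per priority level */
	};

	/* O(1) pick: lowest set bit = highest priority with runnable threads */
	static struct uthread *pick_next(struct usched *s)
	{
		if (!s->prio_bitmap)
			return NULL;	/* nothing runnable */
		return s->runq[ffs(s->prio_bitmap) - 1];	/* ffs() is 1-based */
	}

The SYSTEM scope fallback is just pthread_attr_setscope(&attr,
PTHREAD_SCOPE_SYSTEM) plus the usual SCHED_FIFO/SCHED_RR attributes on
that thread.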

There are rough edges still, for mutex (futex) waiter selection, signal
delivery, etc ... but so far, I think it is the best solution [and I'd love
to hear others :)].

Inaky Perez-Gonzalez -- Not speaking for Intel - opinions are my own [or my
fault]



* Re: 1:1 threading vs. scheduler activations (was: Re: [ANNOUNCE] Native POSIX Thread Library 0.1)
  2002-09-24  6:32 ` 1:1 threading vs. scheduler activations (was: Re: [ANNOUNCE] Native POSIX Thread Library 0.1) Ingo Molnar
@ 2002-09-25  3:08   ` Bill Huey
From: Bill Huey @ 2002-09-25  3:08 UTC
  To: Ingo Molnar
  Cc: Andy Isaacson, Larry McVoy, Peter Wächtler, Bill Davidsen,
	linux-kernel, Bill Huey (Hui)

On Tue, Sep 24, 2002 at 08:32:16AM +0200, Ingo Molnar wrote:
> yes, SA's (and KSA's) are an interesting concept, but i personally think
> they are way too much complexity - and history has shown that complexity
> never leads to anything good, especially not in OS design.

FreeBSD's KSEs. ;)

> Eg. SA's, like every M:N concept, must have a userspace component of the
> scheduler, which gets very funny when you try to implement all the things
> the kernel scheduler has had for years: fairness, SMP balancing, RT
> scheduling (!), preemption and more.

Yeah, I understand. These folks are doing some interesting stuff and
might provide some answers for you:

	http://www.research.ibm.com/K42/

This paper specifically:

	http://www.research.ibm.com/K42/white-papers/Scheduling.pdf

Their stuff isn't much different from FreeBSD's KSE project: different
names for the primitives, different communication, etc...

> And then i haven't mentioned things like upcall costs - what's the point in
> upcalling userspace which then has to schedule, instead of doing this
> stuff right in the kernel? Scheduler activations concentrate too much on
> the 5% of cases that have more userspace<->userspace context switching
> than some sort of kernel-provoked context switching. Sure, scheduler
> activations can be done, but i cannot see how they can be any better than
> 'just as fast' as a 1:1 implementation - at a much higher complexity and
> robustness cost.

Folks have been experimenting with other means of kernel/userspace
communication using a chunk of shared memory, with notification and polling
when the UTS gets entered by a block on a mutex or some other operation.
Upcalls are what the original Topaz OS paper that implemented SAs used;
Mach was the other early implementation. That doesn't mean upcalls are
used universally in all implementations.
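
Something along these lines [purely a sketch of the shape of it - every
name here is made up, this is not FreeBSD's or K42's actual interface]:

	#include <stdint.h>

	struct shared_note {	/* one page shared kernel<->process */
		volatile uint32_t blocked;	/* set by the kernel when the
						   backing thread blocks */
		volatile uint32_t kse_id;	/* which kernel context blocked */
	};

	/* polled by the UTS at every scheduling point, instead of an upcall */
	static void check_notes(struct shared_note *note)
	{
		if (note->blocked) {
			note->blocked = 0;
			/* mark the user thread on note->kse_id as blocked
			   and dispatch another runnable user thread there */
		}
	}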

> the biggest part of Linux's kernel-space context switching is the cost of
> kernel entry - and the cost of kernel entry gets cheaper with every new
> generation of CPUs. Basing the whole threading design on the avoidance of
> the kernel scheduler is like basing your tent on a glacier on a hot
> summer day.
> 
> Plus in an M:N model all the development toolchain suddenly has to
> understand the new set of contexts, debuggers, tracers, everything.

That's not an issue. Folks expect that when working with any new
threading system.

> Plus there are other issues like security - it's perfectly reasonable in
> the 1:1 model for a certain set of server threads to drop all privileges
> to do the more dangerous stuff. (while there is no such thing as absolute
> security and separation in a threaded app, dropping privileges can avoid
> certain classes of exploits.)

> generally the whole SA/M:N concept creaks under the huge change that is
> introduced by having multiple userspace contexts of execution per single
> kernel-space context of execution. Such detaching of concepts, no matter
> which kernel subsystem you look at, causes problems everywhere.

Maybe; it's probably implementation-specific. I'm curious how K42
performs.

> eg. the VM. There's no way you can get an 'upcall' from the VM that you
> need to wait for free RAM - most of the related kernel code is simply not
> ready and restartable. So VM load can end up blocking kernel contexts
> without giving any chance to user contexts to be 'scheduled' by the
> userspace scheduler. This happens exactly in the worst moment, when load
> increases and stuff starts swapping.

That's solved by refashioning the kernel to pump out a blocking notification
to the UTS for that backing kernel thread. That's expected of an SA-style
system.

> and there are some things that i'm not at all sure can be fixed in any
> reasonable way - eg. RT scheduling. [the userspace library would have to
> raise/drop the priority of threads in the userspace scheduler, causing an
> additional kernel entry/exit, eliminating even the theoretical advantage
> it had for pure user<->user context switches.]

KSEs have an RT scheduling category, but I don't understand the preemption
issue clearly enough yet to comment on it. I was trying to work through
this stuff at one point, since I was thinking about working on that
project.

> plus basic performance issues. If you have a healthy mix of userspace and
> kernelspace scheduler activity then you've at least doubled your icache
> footprint by having two schedulers - the dcache footprint is higher as
> well. A *single* bad cachemiss on a P4 is already almost as expensive as a
> kernel entry - and it's not like the growing gap between RAM access
> latency and CPU performance will shrink in the future. And we aren't even
> using SYSENTER/SYSEXIT in the Linux kernel yet, which will shave off
> another 40% from the syscall entry (and kernel context switching) cost.

It'll be localized to the UTS, while threads that block in the kernel
are mostly going to be IO driven. I don't know about the situation where
you might have a mixture of those activities.

The infrastructure for the upcalls might incur significant overhead.

> so my current take on threading models is: if you *can* do a really fast
> and lightweight kernel based 1:1 threading implementation then you have
> won. Anything else is barely more than workarounds for (fixable)  
> architectural problems. Concentrate your execution abstraction into the
> kernel and make it *really* fast and scalable - that will improve
> everything else. OTOH any improvement to the userspace thread scheduler
> only improves threaded applications - which are still the minority. Sure,
> some of the above problems can be helped, but it's not trivial - and some
> problems i don't think can be solved at all.

> But we'll see, the FreeBSD folks i think are working on KSA's so we'll
> know for sure in a couple of years.

There are a lot of ways folks can do this kind of stuff. Who knows? The
method you folks are using could very well be the best for Linux.

I don't have much more to say about this topic.

bill



* 1:1 threading vs. scheduler activations (was: Re: [ANNOUNCE] Native POSIX Thread Library 0.1)
  2002-09-23 23:57 [ANNOUNCE] Native POSIX Thread Library 0.1 Andy Isaacson
@ 2002-09-24  6:32 ` Ingo Molnar
  2002-09-25  3:08   ` Bill Huey
From: Ingo Molnar @ 2002-09-24  6:32 UTC
  To: Andy Isaacson
  Cc: Larry McVoy, Peter Wächtler, Bill Davidsen, linux-kernel


On Mon, 23 Sep 2002, Andy Isaacson wrote:

> Excellent points, Ingo.  An alternative that I haven't seen considered
> is the M:N threading model that NetBSD is adopting, called Scheduler
> Activations. [...]

yes, SA's (and KSA's) are an interesting concept, but i personally think
they are way too much complexity - and history has shown that complexity
never leads to anything good, especially not in OS design.

Eg. SA's, like every M:N concept, must have a userspace component of the
scheduler, which gets very funny when you try to implement all the things
the kernel scheduler has had for years: fairness, SMP balancing, RT
scheduling (!), preemption and more.

Eg. NGPT 2.0.2's current userspace scheduler is still cooperative - a
single userspace thread burning CPU cycles monopolizes the full context.
Obviously this can be fixed, but it gets nastier and nastier as you add
the features - for no good reason, the same functionality is already
present in the kernel's scheduler, which can generally make much better
scheduling decisions - it has direct and reliable access to various
statistics, it knows exactly how much CPU time has been used up. To give
all this information to the userspace scheduler takes a lot of effort. I'm
no wimp when it comes to scheduler complexity, but a coupled kernel/user
scheduling concept scares the *hit out of me.

And then i haven't mentioned things like upcall costs - what's the point in
upcalling userspace which then has to schedule, instead of doing this
stuff right in the kernel? Scheduler activations concentrate too much on
the 5% of cases that have more userspace<->userspace context switching
than some sort of kernel-provoked context switching. Sure, scheduler
activations can be done, but i cannot see how they can be any better than
'just as fast' as a 1:1 implementation - at a much higher complexity and
robustness cost.

the biggest part of Linux's kernel-space context switching is the cost of
kernel entry - and the cost of kernel entry gets cheaper with every new
generation of CPUs. Basing the whole threading design on the avoidance of
the kernel scheduler is like basing your tent on a glacier on a hot
summer day.

Plus in an M:N model all the development toolchain suddenly has to
understand the new set of contexts, debuggers, tracers, everything.

Plus there are other issues like security - it's perfectly reasonable in
the 1:1 model for a certain set of server threads to drop all privileges
to do the more dangerous stuff. (while there is no such thing as absolute
security and separation in a threaded app, dropping privileges can avoid
certain classes of exploits.)
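
eg. something like this sketch - whether plain setuid() affects one thread
or all threads depends on the threading library, so the raw syscall makes
the per-task behaviour explicit. (65534/'nobody' is just an example uid.)

	#include <sys/syscall.h>
	#include <unistd.h>

	static void *untrusted_worker(void *arg)
	{
		/* with 1:1 threads every pthread is its own kernel task,
		   so this drops privileges for this task only - the other
		   threads keep their credentials. */
		syscall(SYS_setresuid, 65534, 65534, 65534);

		/* ... parse untrusted input here ... */
		return arg;
	}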

generally the whole SA/M:N concept creaks under the huge change that is
introduced by having multiple userspace contexts of execution per single
kernel-space context of execution. Such detaching of concepts, no matter
which kernel subsystem you look at, causes problems everywhere.

eg. the VM. There's no way you can get an 'upcall' from the VM that you
need to wait for free RAM - most of the related kernel code is simply not
ready and restartable. So VM load can end up blocking kernel contexts
without giving any chance to user contexts to be 'scheduled' by the
userspace scheduler. This happens exactly in the worst moment, when load
increases and stuff starts swapping.

and there are some things that i'm not at all sure can be fixed in any
reasonable way - eg. RT scheduling. [the userspace library would have to
raise/drop the priority of threads in the userspace scheduler, causing an
additional kernel entry/exit, eliminating even the theoretical advantage
it had for pure user<->user context switches.]

plus basic performance issues. If you have a healthy mix of userspace and
kernelspace scheduler activity then you've at least doubled your icache
footprint by having two schedulers - the dcache footprint is higher as
well. A *single* bad cachemiss on a P4 is already almost as expensive as a
kernel entry - and it's not like the growing gap between RAM access
latency and CPU performance will shrink in the future. And we aren't even
using SYSENTER/SYSEXIT in the Linux kernel yet, which will shave off
another 40% from the syscall entry (and kernel context switching) cost.
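
measuring this is easy enough, something like the sketch below - the
numbers vary wildly with the CPU and with the int $0x80 vs. SYSENTER
entry path, so treat it as a back-of-the-envelope tool only:

	#include <stdio.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	static inline unsigned long long rdtsc(void)
	{
		unsigned int lo, hi;

		__asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
		return ((unsigned long long)hi << 32) | lo;
	}

	int main(void)
	{
		enum { N = 100000 };
		unsigned long long t0 = rdtsc();
		int i;

		for (i = 0; i < N; i++)
			syscall(SYS_getpid);	/* force a real kernel entry */
		printf("~%llu cycles per getpid()\n", (rdtsc() - t0) / N);
		return 0;
	}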

so my current take on threading models is: if you *can* do a really fast
and lightweight kernel based 1:1 threading implementation then you have
won. Anything else is barely more than workarounds for (fixable)  
architectural problems. Concentrate your execution abstraction into the
kernel and make it *really* fast and scalable - that will improve
everything else. OTOH any improvement to the userspace thread scheduler
only improves threaded applications - which are still the minority. Sure,
some of the above problems can be helped, but it's not trivial - and some
problems i don't think can be solved at all.

But we'll see, the FreeBSD folks i think are working on KSA's so we'll
know for sure in a couple of years.

	Ingo

