linux-kernel.vger.kernel.org archive mirror
* low-latency scheduling patch for 2.4.0
@ 2001-01-07  2:53 ` Andrew Morton
  2001-01-11  3:12   ` [linux-audio-dev] " Jay Ts
  2001-01-14 11:35   ` Andrew Morton
  0 siblings, 2 replies; 55+ messages in thread
From: Andrew Morton @ 2001-01-07  2:53 UTC (permalink / raw)
  To: lkml, lad


A patch against kernel 2.4.0 final which provides low-latency
scheduling is at

	http://www.uow.edu.au/~andrewm/linux/schedlat.html#downloads

Some notes:

- Worst-case scheduling latency with *very* intense workloads is now
  0.8 milliseconds on a 500MHz uniprocessor.

  For normal workloads you can expect to achieve better than 0.5
  milliseconds forever.  For example, worst-case latency between entry
  to an interrupt routine and activation of a usermode process during a
  `make clean && make bzImage' is 0.35 milliseconds.  This is one to
  three orders of magnitude better than BeOS, MacOS and the Windowses.

- Low latency is enabled from the `Processor type and features'
  kernel configuration menu for all architectures.  It would be nice to
  hear from non-x86 users.

- The SMP problem hasn't been addressed.  Enabling low-latency for
  SMP works well under normal workloads but comes unstuck under very
  heavy workloads.  I'll be taking a further look at this.

- The supporting tools `rtc_debug' and `amlat' have been updated.
  They provide accurate measurement of latencies and can also be used
  to identify the causes of poor latency in the kernel.

- The list of remaining problem areas (the Don't Do That list) is pretty small:

  - Scrolling the fb console.
  - Running hdparm.
  - Using LILO.
  - Starting the X server.

- Low latency will probably only be achieved when using the ext2 and
  NFS filesystems.

- If you care about latency, be *very* cautious about upgrading to
  XFree86 4.x.  I'll cover this issue in a separate email, copied
  to the XFree team.

-
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
Please read the FAQ at http://www.tux.org/lkml/

* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-07  2:53 ` Andrew Morton
@ 2001-01-11  3:12   ` Jay Ts
  2001-01-11  3:22     ` Cort Dougan
  2001-01-11  5:19     ` David S. Miller
  2001-01-14 11:35   ` Andrew Morton
  1 sibling, 2 replies; 55+ messages in thread
From: Jay Ts @ 2001-01-11  3:12 UTC (permalink / raw)
  To: Andrew Morton; +Cc: lkml, lad

> A patch against kernel 2.4.0 final which provides low-latency
> scheduling is at
> 
> 	http://www.uow.edu.au/~andrewm/linux/schedlat.html#downloads
> 
> Some notes:
> 
> - Worst-case scheduling latency with *very* intense workloads is now
>   0.8 milliseconds on a 500MHz uniprocessor.

Wow!  That's super.  Now about the only thing left is to get it included
in the standard kernel.  Do you think Linus Torvalds is more likely
to accept these patches than Ingo's?  I sure hope this one works out.

>   This is one to
>   three orders of magnitude better than BeOS, MacOS and the Windowses.

** salivates **

> - Low latency will probably only be achieved when using the ext2 and
>   NFS filesystems.

Well it's extremely nice to see NFS included at least.  I was really
worried about that one.  What about Samba?  (Keeping in mind that
serious "professional" musicians will likely have their Linux systems
networked to a Windows box, at least until they have all the necessary
tools on Linux.)

> - If you care about latency, be *very* cautious about upgrading to
>   XFree86 4.x.  I'll cover this issue in a separate email, copied
>   to the XFree team.

Did that email pass by me unnoticed?  What's the prob with XF86 4.0?

- Jay Ts

* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-11  3:12   ` [linux-audio-dev] " Jay Ts
@ 2001-01-11  3:22     ` Cort Dougan
  2001-01-11 12:38       ` Alan Cox
  2001-01-11  5:19     ` David S. Miller
  1 sibling, 1 reply; 55+ messages in thread
From: Cort Dougan @ 2001-01-11  3:22 UTC (permalink / raw)
  To: jayts; +Cc: Andrew Morton, lkml, lad

} > - If you care about latency, be *very* cautious about upgrading to
} >   XFree86 4.x.  I'll cover this issue in a separate email, copied
} >   to the XFree team.
} 
} Did that email pass by me unnoticed?  What's the prob with XF86 4.0?

The darn thing disables intrs on its own for quite some time with some of
the more aggressive drivers.  We saw our 20us latencies under RTLinux go up
a lot with some of those drivers.

* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-11  3:12   ` [linux-audio-dev] " Jay Ts
  2001-01-11  3:22     ` Cort Dougan
@ 2001-01-11  5:19     ` David S. Miller
  2001-01-11 13:57       ` Daniel Phillips
                         ` (3 more replies)
  1 sibling, 4 replies; 55+ messages in thread
From: David S. Miller @ 2001-01-11  5:19 UTC (permalink / raw)
  To: andrewm; +Cc: jayts, linux-kernel, linux-audio-dev, xpert, mcrichto


Just some commentary and a bug report on your patch, Andrew:

Opinion: Personally, I think the approach in Andrew's patch
	 is the way to go.

         Not because it can give the absolute best results,
         but because it says "here is where a lot
         of time is spent".

	 This has two huge benefits:
         1) It tells us where algorithmic improvements may be
            possible.  In some cases we may be able to improve the
	    code to the point where the pre-emption points are no
	    longer necessary and can thus be removed.
	 2) It affects only code which can burn a lot of cpu without
	    scheduling.  Compare this to schemes which make the kernel
	    fully pre-emptable, causing _EVERYONE_ to pay the price of
            low-latency.  If we were to later find algorithmic
	    improvements to the high-latency pieces of code, we
            couldn't then just "undo" support for pre-emption because
	    dependencies will have swept across the whole kernel
	    already.

            Pre-emption, by itself, also doesn't help in situations
	    where lots of time is spent while holding spinlocks.
	    There are several other operating systems which support
	    pre-emption where you will find hard coded calls to the
	    scheduler in time-consuming code.  Heh, it's almost like,
	    "what's the frigging point of pre-emption then if you
	    still have to manually check in some spots?"

Bug:	In the tcp_minisock.c changes, if you bail out of the loop
	early (ie. max_killed=1) you do not decrement tcp_tw_count
	by killed, which corrupts the state of the TIME_WAIT socket
	reaper.  The fix is simple, just duplicate the tcp_tw_count
	decrement into the "if (max_killed)" code block.

Later,
David S. Miller
davem@redhat.com

* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-11  3:12   ` [linux-audio-dev] " Jay Ts
@ 2001-01-11 11:30 Andrew Morton
  2001-01-07  2:53 ` Andrew Morton
                   ` (2 more replies)
  1 sibling, 3 replies; 55+ messages in thread
From: Andrew Morton @ 2001-01-11 11:30 UTC (permalink / raw)
  To: jayts; +Cc: lkml, lad, xpert, mcrichto

Jay Ts wrote:
> 
> > A patch against kernel 2.4.0 final which provides low-latency
> > scheduling is at
> >
> >       http://www.uow.edu.au/~andrewm/linux/schedlat.html#downloads
> >
> > Some notes:
> >
> > - Worst-case scheduling latency with *very* intense workloads is now
> >   0.8 milliseconds on a 500MHz uniprocessor.
> 
> Wow!  That's super.  Now about the only thing left is to get it included
> in the standard kernel.  Do you think Linus Torvalds is more likely
> to accept these patches than Ingo's?  I sure hope this one works out.

Neither, I think.

We can't apply some patch and say "there; it's low-latency".

We (or "he") need to decide up-front that Linux is to become
a low latency kernel. Then we need to decide the best way of
doing that.

Making the kernel internally preemptive is probably the best way of
doing this.  But it's a *big* task to which much beard-scratching must
be put.  It goes way beyond the preemptive-kernel patches which have
thus far been proposed.

I could propose a simple patch for 2.4 (say, the ten most-needed
scheduling points).  This would get us down to maybe 5-10 milliseconds
under heavy load (10-20x improvement).

That would probably be a great and sufficient improvement for
the HA heartbeat monitoring apps, the database TP monitors,
the QuakeIII players and, of course, people who are only
interested in audio record and playback - I'd need advice
from the audio experts for that.

I hope that one or more of the desktop-oriented Linux distributors
discover that hosing HTML out of gigE ports is not really the
One True Application of Linux, and that they decide to offer
a low-latency kernel for the other 99.99% of Linux users.
 
> >   This is one to
> >   three orders of magnitude better than BeOS, MacOS and the Windowses.
> 
> ** salivates **
> 
> > - Low latency will probably only be achieved when using the ext2 and
> >   NFS filesystems.
> 
> Well it's extremely nice to see NFS included at least.  I was really
> worried about that one.  What about Samba?  (Keeping in mind that
> serious "professional" musicians will likely have their Linux systems
> networked to a Windows box, at least until they have all the necessary
> tools on Linux.

I would expect the smbfs client code to be OK.  Will test - thanks.

> > - If you care about latency, be *very* cautious about upgrading to
> >   XFree86 4.x.  I'll cover this issue in a separate email, copied
> >   to the XFree team.
> 
> Did that email pass by me unnoticed?  What's the prob with XF86 4.0?

I haven't gathered the energy to send it.

The basic problem with many video cards is this:

Video adapters have on-board command FIFOs.  They also
have a "FIFO has spare room" control bit.

If you write to the FIFO when there is no spare room,
the damned thing busies the PCI bus until there *is*
room.  This can be up to twenty *milliseconds*.

This will screw up realtime operating systems,
will cause network receive overruns, will screw
up isochronous protocols such as USB and 1394
and will of course screw up scheduling latency.

In xfree3 it was OK - the drivers polled the "spare room"
bit before writing.  But in xfree4 the drivers are starting
to take advantage of this misfeature.  I am told that
a significant number of people are backing out xfree4
upgrades because of this.  For audio.

The manufacturers got caught out by the trade press
in '98 and '99 and they added registry flags to their
drivers to turn off this obnoxious behaviour.

What needs to happen is for the xfree guys to add a
control flag to XF86Config for this.  I believe they
have - it's called `PCIRetry'.

I believe PCIRetry defaults to `off'.  This is bad.
It should default to `on'.
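For reference, a Device-section entry of roughly this shape is what
such a toggle looks like in XF86Config; treat the driver name and the
exact option spelling as illustrative and check your driver's
documentation:

```
Section "Device"
    Identifier "Card0"
    Driver     "mga"              # example driver, not a recommendation
    Option     "PCIRetry" "on"    # poll for FIFO room instead of
                                  # letting the card stall the PCI bus
EndSection
```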

You can read about this minor scandal at the following
URLs:

        http://www.zefiro.com/vgakills.txt
        http://www.zdnet.com/pcmag/news/trends/t980619a.htm
        http://www.research.microsoft.com/~mbj/papers/tr-98-29.html

So, we need to talk to the xfree team.

Whoops!  I accidentally Cc'ed them :-)


* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-11  3:22     ` Cort Dougan
@ 2001-01-11 12:38       ` Alan Cox
  0 siblings, 0 replies; 55+ messages in thread
From: Alan Cox @ 2001-01-11 12:38 UTC (permalink / raw)
  To: Cort Dougan; +Cc: jayts, Andrew Morton, lkml, lad

> The darn thing disables intrs on its own for quite some time with some of
> the more aggressive drivers.  We saw our 20us latencies under RTLinux go up
> a lot with some of those drivers.

It isn't disabling interrupts.  It's stalling the PCI bus.  It's a nasty
trick by card vendors, apparently to get good benchmark numbers.


* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-11  5:19     ` David S. Miller
@ 2001-01-11 13:57       ` Daniel Phillips
  2001-01-11 20:55       ` Nigel Gamble
                         ` (2 subsequent siblings)
  3 siblings, 0 replies; 55+ messages in thread
From: Daniel Phillips @ 2001-01-11 13:57 UTC (permalink / raw)
  To: David S. Miller, linux-kernel

"David S. Miller" wrote:
>          2) It affects only code which can burn a lot of cpu without
>             scheduling.  Compare this to schemes which make the kernel
>             fully pre-emptable, causing _EVERYONE_ to pay the price of
>             low-latency....

Is there necessarily a price?  Kernel preemption can make io-bound code
go faster by allowing a blocked task to start running again immediately
on io completion.  As things are now, the task will have to wait for
whatever might be happening in the kernel to complete.

--
Daniel

* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-11  5:19     ` David S. Miller
  2001-01-11 13:57       ` Daniel Phillips
@ 2001-01-11 20:55       ` Nigel Gamble
  2001-01-12 13:30         ` Andrew Morton
  2001-01-11 21:31       ` David S. Miller
  2001-01-12 13:21       ` Andrew Morton
  3 siblings, 1 reply; 55+ messages in thread
From: Nigel Gamble @ 2001-01-11 20:55 UTC (permalink / raw)
  To: David S. Miller; +Cc: andrewm, linux-kernel, linux-audio-dev

On Wed, 10 Jan 2001, David S. Miller wrote:
> Opinion: Personally, I think the approach in Andrew's patch
> 	 is the way to go.
> 
> 	 Not because it can give the absolute best results.
> 	 But rather, it is because it says "here is where a lot
>          of time is spent".
> 
> 	 This has two huge benefits:
> 	 1) It tells us where possible algorithmic improvements may
> 	    be possible.  In some cases we may be able to improve the
> 	    code to the point where the pre-emption points are no
> 	    longer necessary and can thus be removed.

This is definitely an important goal.  But lock-metering code in a fully
preemptible kernel can also identify the spots where algorithmic
improvements are most important.

> 	 2) It affects only code which can burn a lot of cpu without
> 	    scheduling.  Compare this to schemes which make the kernel
> 	    fully pre-emptable, causing _EVERYONE_ to pay the price of
>           low-latency.  If we were to later find algorithmic
> 	    improvements to the high-latency pieces of code, we
>             couldn't then just "undo" support for pre-emption because
> 	    dependencies will have swept across the whole kernel
> 	    already.
> 
>             Pre-emption, by itself, also doesn't help in situations
> 	    where lots of time is spent while holding spinlocks.
> 	    There are several other operating systems which support
> 	    pre-emption where you will find hard coded calls to the
> 	    scheduler in time-consuming code.  Heh, it's almost like,
> 	    "what's the frigging point of pre-emption then if you
> 	    still have to manually check in some spots?"

Spinlocks should not be held for lots of time.  This adversely affects
SMP scalability as well as latency.  That's why MontaVista's kernel
preemption patch uses sleeping mutex locks instead of spinlocks for the
long held locks.  In a fully preemptible kernel that is implemented
correctly, you won't find any hard-coded calls to the scheduler in
time-consuming code.  The scheduler should only be called in response to an
interrupt (IO or timeout) when we know that a higher priority process
has been made runnable, or when the running process sleeps (voluntarily
or when it has to wait for something) or exits.  This is the case in
both of the fully preemptible kernels which I've worked on (IRIX and
REAL/IX).

Nigel Gamble                                    nigel@nrg.org
Mountain View, CA, USA.                         http://www.nrg.org/


* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-11  5:19     ` David S. Miller
  2001-01-11 13:57       ` Daniel Phillips
  2001-01-11 20:55       ` Nigel Gamble
@ 2001-01-11 21:31       ` David S. Miller
  2001-01-15  5:27         ` george anzinger
  2001-01-12 13:21       ` Andrew Morton
  3 siblings, 1 reply; 55+ messages in thread
From: David S. Miller @ 2001-01-11 21:31 UTC (permalink / raw)
  To: nigel; +Cc: andrewm, linux-kernel, linux-audio-dev


Nigel Gamble writes:
 > That's why MontaVista's kernel preemption patch uses sleeping mutex
 > locks instead of spinlocks for the long held locks.

Anyone who uses sleeping mutex locks is asking for trouble.  Priority
inversion is an issue I dearly hope we never have to deal with in the
Linux kernel, and sleeping SMP mutex locks lead to exactly this kind
of problem.

Later,
David S. Miller
davem@redhat.com

* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-11  5:19     ` David S. Miller
                         ` (2 preceding siblings ...)
  2001-01-11 21:31       ` David S. Miller
@ 2001-01-12 13:21       ` Andrew Morton
  3 siblings, 0 replies; 55+ messages in thread
From: Andrew Morton @ 2001-01-12 13:21 UTC (permalink / raw)
  To: David S. Miller; +Cc: jayts, linux-kernel, linux-audio-dev, mcrichto

"David S. Miller" wrote:
> 
> ...
> Bug:    In the tcp_minisock.c changes, if you bail out of the loop
>         early (ie. max_killed=1) you do not decrement tcp_tw_count
>         by killed, which corrupts the state of the TIME_WAIT socket
>         reaper.  The fix is simple, just duplicate the tcp_tw_count
>         decrement into the "if (max_killed)" code block.

Well that was moderately stupid.  Thanks.  It doesn't seem to cause
problems in practice though.  Maybe in the longer term...

I believe the tcp_minisucks.c code needs redoing irrespective
of latency stuff.  It can spend several hundred milliseconds
in a timer handler, which is rather unsociable.

There are a number of moderately complex ways of smoothing out
its behaviour, but I'm inclined to just punt the whole thing
up to process context via schedule_task().

We'll see...

* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-11 20:55       ` Nigel Gamble
@ 2001-01-12 13:30         ` Andrew Morton
  2001-01-12 15:11           ` Tim Wright
                             ` (3 more replies)
  0 siblings, 4 replies; 55+ messages in thread
From: Andrew Morton @ 2001-01-12 13:30 UTC (permalink / raw)
  To: nigel; +Cc: David S. Miller, linux-kernel, linux-audio-dev

Nigel Gamble wrote:
> 
> Spinlocks should not be held for lots of time.  This adversely affects
> SMP scalability as well as latency.  That's why MontaVista's kernel
> preemption patch uses sleeping mutex locks instead of spinlocks for the
> long held locks.

Nigel,

what worries me about this is the Apache-flock-serialisation saga.

Back in -test8, kumon@fujitsu demonstrated that changing this:

	lock_kernel()
	down(sem)
	<stuff>
	up(sem)
	unlock_kernel()

into this:

	down(sem)
	<stuff>
	up(sem)

had the effect of *decreasing* Apache's maximum connection rate
on an 8-way from ~5,000 connections/sec to ~2,000 conn/sec.

That's downright scary.

Obviously, <stuff> was very quick, and the CPUs were passing through
this section at a great rate.

How can we be sure that converting spinlocks to semaphores
won't do the same thing?  Perhaps for workloads which we
aren't testing?

So this needs to be done with caution.

As davem points out, now we know where the problems are
occurring, a good next step is to redesign some of those
parts of the VM and buffercache.  I don't think this will
be too hard, but they have to *want* to change :)

Some of those algorithms are approximately O(N^2), for huge
values of N.



* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-12 13:30         ` Andrew Morton
@ 2001-01-12 15:11           ` Tim Wright
  2001-01-12 22:30             ` Nigel Gamble
  2001-01-13  1:01             ` Andrew Morton
  2001-01-12 22:46           ` Nigel Gamble
                             ` (2 subsequent siblings)
  3 siblings, 2 replies; 55+ messages in thread
From: Tim Wright @ 2001-01-12 15:11 UTC (permalink / raw)
  To: Andrew Morton; +Cc: nigel, David S. Miller, linux-kernel, linux-audio-dev

On Sat, Jan 13, 2001 at 12:30:46AM +1100, Andrew Morton wrote:
> what worries me about this is the Apache-flock-serialisation saga.
> 
> Back in -test8, kumon@fujitsu demonstrated that changing this:
> 
> 	lock_kernel()
> 	down(sem)
> 	<stuff>
> 	up(sem)
> 	unlock_kernel()
> 
> into this:
> 
> 	down(sem)
> 	<stuff>
> 	up(sem)
> 
> had the effect of *decreasing* Apache's maximum connection rate
> on an 8-way from ~5,000 connections/sec to ~2,000 conn/sec.
> 
> That's downright scary.
> 
> Obviously, <stuff> was very quick, and the CPUs were passing through
> this section at a great rate.
> 
> How can we be sure that converting spinlocks to semaphores
> won't do the same thing?  Perhaps for workloads which we
> aren't testing?
> 
> So this needs to be done with caution.
> 

Hmmm...
if <stuff> is very quick, and is guaranteed not to sleep, then a semaphore
is the wrong way to protect it. A spinlock is the correct choice. If it's
always slow, and can sleep, then a semaphore makes more sense, although if
it's highly contended, you're going to serialize and throughput will die.
At that point, you need to redesign :-)
If it's mostly quick but occasionally needs to sleep, I don't know what the
correct idiom would be in Linux. DYNIX/ptx has the concept of atomically
releasing a spinlock and going to sleep on a semaphore, and that would be
the solution there e.g.

p_lock(lock);
retry:
...
if (condition where we need to sleep) {
    p_sema_v_lock(sema, lock);
    /* we got woken up */
    p_lock(lock);
    goto retry;
}
...

I'm stating the obvious here, and reiterating what you said: we need to
carefully pick the correct primitive for the job.  Unless there's
something very unusual in the Linux implementation that I've missed, a
spinlock is a "cheaper" method of protecting a short critical section, and
should be chosen.

I know the BKL is semantically a little unusual (the automatic release on
sleep stuff), but even so, isn't

 	lock_kernel()
 	down(sem)
 	<stuff>
 	up(sem)
 	unlock_kernel()

actually equivalent to

	lock_kernel()
	<stuff>
	unlock_kernel()

If so, it's no great surprise that performance dropped given that we replaced
a spinlock (albeit one guarding somewhat more than the critical section) with
a semaphore.

Tim

--
Tim Wright - timw@splhi.com or timw@aracnet.com or twright@us.ibm.com
IBM Linux Technology Center, Beaverton, Oregon
"Nobody ever said I was charming, they said "Rimmer, you're a git!"" RD VI

* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-12 15:11           ` Tim Wright
@ 2001-01-12 22:30             ` Nigel Gamble
  2001-01-13  1:01             ` Andrew Morton
  1 sibling, 0 replies; 55+ messages in thread
From: Nigel Gamble @ 2001-01-12 22:30 UTC (permalink / raw)
  To: Tim Wright; +Cc: Andrew Morton, David S. Miller, linux-kernel, linux-audio-dev

On Fri, 12 Jan 2001, Tim Wright wrote:

Nigel Gamble                                    nigel@nrg.org
Mountain View, CA, USA.                         http://www.nrg.org/


* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-12 13:30         ` Andrew Morton
  2001-01-12 15:11           ` Tim Wright
@ 2001-01-12 22:46           ` Nigel Gamble
  2001-01-12 23:08           ` george anzinger
  2001-01-21  0:05           ` yodaiken
  3 siblings, 0 replies; 55+ messages in thread
From: Nigel Gamble @ 2001-01-12 22:46 UTC (permalink / raw)
  To: Andrew Morton; +Cc: David S. Miller, linux-kernel, linux-audio-dev

On Sat, 13 Jan 2001, Andrew Morton wrote:
> Nigel Gamble wrote:
> > Spinlocks should not be held for lots of time.  This adversely affects
> > SMP scalability as well as latency.  That's why MontaVista's kernel
> > preemption patch uses sleeping mutex locks instead of spinlocks for the
> > long held locks.
> 
> Nigel,
> 
> what worries me about this is the Apache-flock-serialisation saga.
> 
> Back in -test8, kumon@fujitsu demonstrated that changing this:
> 
> 	lock_kernel()
> 	down(sem)
> 	<stuff>
> 	up(sem)
> 	unlock_kernel()
> 
> into this:
> 
> 	down(sem)
> 	<stuff>
> 	up(sem)
> 
> had the effect of *decreasing* Apache's maximum connection rate
> on an 8-way from ~5,000 connections/sec to ~2,000 conn/sec.
> 
> That's downright scary.
> 
> Obviously, <stuff> was very quick, and the CPUs were passing through
> this section at a great rate.

Yes, this demonstrates that spinlocks are preferable to sleep locks for
short sections.  However, it looks to me like the implementation of up()
may be partly to blame: it seems to prefer to
context switch to the woken up process, instead of continuing to run the
current process.  Surrounding the semaphore with the BKL has the effect
of enforcing the latter behavior, because the semaphore itself will
never have any waiters.

> How can we be sure that converting spinlocks to semaphores
> won't do the same thing?  Perhaps for workloads which we
> aren't testing?
> 
> So this needs to be done with caution.
> 
> As davem points out, now we know where the problems are
> occurring, a good next step is to redesign some of those
> parts of the VM and buffercache.  I don't think this will
> be too hard, but they have to *want* to change :)

Yes, wherever the code can be redesigned to avoid long held locks, that
would definitely be my preferred solution.  I think everyone would be
happy if we could end up with a maintainable solution using only
spinlocks that are held for no longer than a couple of hundred
microseconds.

Nigel Gamble                                    nigel@nrg.org
Mountain View, CA, USA.                         http://www.nrg.org/


* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-12 13:30         ` Andrew Morton
  2001-01-12 15:11           ` Tim Wright
  2001-01-12 22:46           ` Nigel Gamble
@ 2001-01-12 23:08           ` george anzinger
  2001-01-21  0:05           ` yodaiken
  3 siblings, 0 replies; 55+ messages in thread
From: george anzinger @ 2001-01-12 23:08 UTC (permalink / raw)
  To: Andrew Morton; +Cc: nigel, David S. Miller, linux-kernel, linux-audio-dev

Andrew Morton wrote:
> 
> Nigel Gamble wrote:
> >
> > Spinlocks should not be held for lots of time.  This adversely affects
> > SMP scalability as well as latency.  That's why MontaVista's kernel
> > preemption patch uses sleeping mutex locks instead of spinlocks for the
> > long held locks.
> 
> Nigel,
> 
> what worries me about this is the Apache-flock-serialisation saga.
> 
> Back in -test8, kumon@fujitsu demonstrated that changing this:
> 
>         lock_kernel()
>         down(sem)
>         <stuff>
>         up(sem)
>         unlock_kernel()
> 
> into this:
> 
>         down(sem)
>         <stuff>
>         up(sem)
> 
> had the effect of *decreasing* Apache's maximum connection rate
> on an 8-way from ~5,000 connections/sec to ~2,000 conn/sec.
> 
> That's downright scary.
> 
> Obviously, <stuff> was very quick, and the CPUs were passing through
> this section at a great rate.

If <stuff> was that fast, maybe the down/up should have been a spinlock
too.  But what if it is changed to:

      BKL_enter_mutex()
      down(sem)
      <stuff>
      up(sem)
      BKL_exit_mutex()
> 
> How can we be sure that converting spinlocks to semaphores
> won't do the same thing?  Perhaps for workloads which we
> aren't testing?

The key is to keep the fast stuff on the spinlock and the slow stuff on
the mutex.  Otherwise you WILL eat up the CPU with the overhead.
> 
> So this needs to be done with caution.
> 
> As davem points out, now we know where the problems are
> occurring, a good next step is to redesign some of those
> parts of the VM and buffercache.  I don't think this will
> be too hard, but they have to *want* to change :)

They will *want* to change if they pop up due to other work :)
> 
> Some of those algorithms are approximately O(N^2), for huge
> values of N.
> 

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-12 15:11           ` Tim Wright
  2001-01-12 22:30             ` Nigel Gamble
@ 2001-01-13  1:01             ` Andrew Morton
  2001-01-15 19:46               ` Tim Wright
  1 sibling, 1 reply; 55+ messages in thread
From: Andrew Morton @ 2001-01-13  1:01 UTC (permalink / raw)
  To: timw; +Cc: nigel, David S. Miller, linux-kernel, linux-audio-dev

Tim Wright wrote:
> 
> Hmmm...
> if <stuff> is very quick, and is guaranteed not to sleep, then a semaphore
> is the wrong way to protect it. A spinlock is the correct choice. If it's
> always slow, and can sleep, then a semaphore makes more sense, although if
> it's highly contended, you're going to serialize and throughput will die.
> At that point, you need to redesign :-)
> If it's mostly quick but occasionally needs to sleep, I don't know what the
> correct idiom would be in Linux. DYNIX/ptx has the concept of atomically
> releasing a spinlock and going to sleep on a semaphore, and that would be
> the solution there e.g.
> 
> p_lock(lock);
> retry:
> ...
> if (condition where we need to sleep) {
>     p_sema_v_lock(sema, lock);
>     /* we got woken up */
>     p_lock(lock);
>     goto retry;
> }
> ...

That's an interesting concept.  How could this actually be used
to protect a particular resource?  Do all users of that
resource have to claim both the lock and the semaphore before
they may access it?


There are a number of locks (such as pagecache_lock) which in the
great majority of cases are held for a short period, but are 
occasionally held for a long period.  So these locks are not
a performance problem, they are not a scalability problem but
they *are* a worst-case-latency problem.

> 
> I'm stating the obvious here, and re-iterating what you said, and that is that
> we need to carefully pick the correct primitive for the job. Unless there's
> something very unusual in the Linux implementation that I've missed, a
> spinlock is a "cheaper" method of protecting a short critical section, and
> should be chosen.
> 
> I know the BKL is a semantically a little unusual (the automatic release on
> sleep stuff), but even so, isn't
> 
>         lock_kernel()
>         down(sem)
>         <stuff>
>         up(sem)
>         unlock_kernel()
> 
> actually equivalent to
> 
>         lock_kernel()
>         <stuff>
>         unlock_kernel()
> 
> If so, it's no great surprise that performance dropped given that we replaced
> a spinlock (albeit one guarding somewhat more than the critical section) with
> a semaphore.

Yes.


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-11 11:30 [linux-audio-dev] low-latency scheduling patch for 2.4.0 Andrew Morton
  2001-01-07  2:53 ` Andrew Morton
@ 2001-01-13  2:45 ` Jay Ts
  2001-01-21  0:10   ` yodaiken
  2001-01-13 18:11 ` video drivers hog pci bus ? [was:[linux-audio-dev] low-latency scheduling patch for 2.4.0] Jörn Nettingsmeier
  2 siblings, 1 reply; 55+ messages in thread
From: Jay Ts @ 2001-01-13  2:45 UTC (permalink / raw)
  To: Andrew Morton; +Cc: lkml, lad, xpert, mcrichto

Andrew Morton wrote:
> 
> Jay Ts wrote:
> > 
> > Now about the only thing left is to get it included
> > in the standard kernel.  Do you think Linus Torvalds is more likely
> > to accept these patches than Ingo's?  I sure hope this one works out.
> 
> We (or "he") need to decide up-front that Linux is to become
> a low latency kernel. Then we need to decide the best way of
> doing that.
> 
> Making the kernel internally preemptive is probably the best way of
> doing this.  But it's a *big* task

Ouch.  Yes, I agree that the ideal path is for Linus and the other
kernel developers and ... well, just about everyone ... to create
a long-range strategy and 'roadmap' that includes support for low latency.

And making the kernel preemptive might be the best way to do that
(and I'm saying "might"...).

But all that could take years, if it happens at all, while a short-term
approach might satisfy almost everyone, at least for now, and perhaps
even allow the development and maybe even (?) commercial distribution
("shrink wrap") of audio software for Linux.  (Er, assuming that the
ALSA drivers become the standard audio drivers.  Mustn't forget that.)

As for actually desiring a preemptive kernel, I'm not a complete expert
in this area, but I will say that no one has ever managed to explain to
me why the extra complexity is vital, necessary, or just worth the 
bother.  Sure, it would help with the implementation and OS support of
the multithreaded and realtime code that I'm developing.  So far, I haven't
run into any major limitations related to the lack of a preemptive kernel,
but maybe I will later. (?)

> I could propose a simple patch for 2.4 (say, the ten most-needed
> scheduling points).  This would get us down to maybe 5-10 milliseconds
> under heavy load (10-20x improvement).

5-10 ms wouldn't be great, but would at least be better than nothing.
It would be a good start, perhaps, especially if it were understood that
things will get better later on.  As with the development of SMP support
for Linux.

> That would probably be a great and sufficient improvement for [...]
> people who are only interested in audio record and playback - I'd need advice
> from the audio experts for that.

Well, call me an audio expert, then. :)  What sort of advice do you
want?  You can send your comments to the LAD (linux audio development)
mailing list, and there are a bunch of smart audio/music programmers
who I'm pretty sure will be happy to comment.

One thing I'd like to say is that simple recording and playback of audio
is hardly the complete picture!  Try recording and playback of *many*
channels of audio, while at the same time running multiple software
synthesizers and effects plugins, and recording and playing back MIDI
sequences.  And other things, too.  One thing I ask of anyone who's developing
Linux is to please think in an open-ended manner regarding audio/music.
This is really still a pretty new and immature field, and the software
(when the Real Stuff gets to Linux, that is) will be happy to absorb
whatever hardware resources are thrown at it for years to come.

> I hope that one or more of the desktop-oriented Linux distributors
> discover that hosing HTML out of gigE ports is not really the
> One True Application of Linux,

I agree approximately 110.111%. :)  Really, I find servers to be
pretty boring.  "Linux is supposed to be fun", right? :)

> > What's the prob with XF86 4.0?
>   [snipped longish explanation] 
> So,  we need to talk to the xfree team.
> 
> Whoops!  I accidentally Cc'ed them :-)

Thank you.  A low-latency kernel would be meaningless if the X server
creates delays of 20ms!  This just plain needs to be fixed.

- Jay Ts
jayts@bigfoot.com

^ permalink raw reply	[flat|nested] 55+ messages in thread

* video drivers hog pci bus ? [was:[linux-audio-dev] low-latency  scheduling patch for 2.4.0]
  2001-01-11 11:30 [linux-audio-dev] low-latency scheduling patch for 2.4.0 Andrew Morton
  2001-01-07  2:53 ` Andrew Morton
  2001-01-13  2:45 ` [linux-audio-dev] " Jay Ts
@ 2001-01-13 18:11 ` Jörn Nettingsmeier
  2 siblings, 0 replies; 55+ messages in thread
From: Jörn Nettingsmeier @ 2001-01-13 18:11 UTC (permalink / raw)
  To: Andrew Morton; +Cc: jayts, lkml, lad, xpert, mcrichto, alsa-devel

[alsa folks, i'd appreciate a comment on this thread from
linux-audio-dev]

hello everyone !

in a post related to his latest low-latency patch, andrew morton
gave a pointer to
http://www.zefiro.com/vgakills.txt , which addresses the problem of
dropped samples due to aggressive video drivers hogging the pci bus
with retry attempts to optimize benchmark results while producing a
"zipper" noise, e.g. when moving windows around with the mouse while
playing a soundfile.
some may have tried fiddling with the "pci retry" option in the
XF86Config (see the linux audio quality howto by paul winkler at
http://www.linuxdj.com/audio/quality for details).

i recall some people having reported mysterious l/r swaps w/ alsa
drivers on some cards, and iirc, most of these reports were not
easily reproduced and explained. 
the zefiro paper states that the zefiro cards would swap channels
occasionally under the circumstances mentioned. it sounds probable
to me that all drivers using interleaved data would suffer from this
problem.

can some more experienced people comment on this ?
is my assumption correct that the bus hogging behaviour is affected
by the pci_retry option ?

btw: the text only mentions pci video cards. will agp cards also
clog the pci bus ?

please give some detail in your answers - i would like to include
this in the linux-audio-dev faq and resources pages. (so chances are
you will only have to answer this once :)


sorry if this has been dealt with before, i seem to have trouble
following all my mailing lists...


regards,

jörn


Andrew Morton wrote:
> 
> >
> > > A patch against kernel 2.4.0 final which provides low-latency
> > > scheduling is at
> > >
> > >       http://www.uow.edu.au/~andrewm/linux/schedlat.html#downloads
> > >
> > > Some notes:
> > >
> > > - Worst-case scheduling latency with *very* intense workloads is now
> > >   0.8 milliseconds on a 500MHz uniprocessor.
> Neither, I think.
> 
> We can't apply some patch and say "there; it's low-latency".
> 
> We (or "he") need to decide up-front that Linux is to become
> a low latency kernel. Then we need to decide the best way of
> doing that.
> 
> Making the kernel internally preemptive is probably the best way of
> doing this.  But it's a *big* task to which much beard-scratching must
> be put.  It goes way beyond the preemptive-kernel patches which have
> thus far been proposed.
> 
> I could propose a simple patch for 2.4 (say, the ten most-needed
> scheduling points).  This would get us down to maybe 5-10 milliseconds
> under heavy load (10-20x improvement).
> 
> That would probably be a great and sufficient improvement for
> the HA heartbeat monitoring apps, the database TP monitors,
> the QuakeIII players and, of course, people who are only
> interested in audio record and playback - I'd need advice
> from the audio experts for that.
> 
> I hope that one or more of the desktop-oriented Linux distributors
> discover that hosing HTML out of gigE ports is not really the
> One True Application of Linux, and that they decide to offer
> a low-latency kernel for the other 99.99% of Linux users.
> >
> > Well it's extremely nice to see NFS included at least.  I was really
> > worried about that one.  What about Samba?  (Keeping in mind that
> > serious "professional" musicians will likely have their Linux systems
> > networked to a Windows box, at least until they have all the necessary
> > tools on Linux.
> 
> > > - If you care about latency, be *very* cautious about upgrading to
> > >   XFree86 4.x.  I'll cover this issue in a separate email, copied
> > >   to the XFree team.
> 
> I haven't gathered the energy to send it.
> 
> The basic problem with many video cards is this:
> 
> Video adapters have on-board command FIFOs.  They also
> have a "FIFO has spare room" control bit.
> 
> If you write to the FIFO when there is no spare room,
> the damned thing busies the PCI bus until there *is*
> room.  This can be up to twenty *milliseconds*.
> 
> This will screw up realtime operating systems,
> will cause network receive overruns, will screw
> up isochronous protocols such as USB and 1394
> and will of course screw up scheduling latency.
> 
> In xfree3 it was OK - the drivers polled the "spare room"
> bit before writing.  But in xfree4 the drivers are starting
> to take advantage of this misfeature.  I am told that
> a significant number of people are backing out xfree4
> upgrades because of this.  For audio.
> 
> The manufacturers got caught out by the trade press
> in '98 and '99 and they added registry flags to their
> drivers to turn off this obnoxious behaviour.
> 
> What needs to happen is for the xfree guys to add a
> control flag to XF86Config for this.  I believe they
> have - it's called `PCIRetry'.
> 
> I believe PCIRetry defaults to `off'.  This is bad.
> It should default to `on'.
> 
> You can read about this minor scandal at the following
> URLs:
> 
>         http://www.zefiro.com/vgakills.txt
>         http://www.zdnet.com/pcmag/news/trends/t980619a.htm
>         http://www.research.microsoft.com/~mbj/papers/tr-98-29.html
> 
> So,  we need to talk to the xfree team.
> 
> Whoops!  I accidentally Cc'ed them :-)
> 
> -

-- 
Jörn Nettingsmeier     
home://Kurfürstenstr.49.45138.Essen.Germany      
phone://+49.201.491621
http://www.folkwang.uni-essen.de/~nettings/


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: low-latency scheduling patch for 2.4.0
  2001-01-07  2:53 ` Andrew Morton
  2001-01-11  3:12   ` [linux-audio-dev] " Jay Ts
@ 2001-01-14 11:35   ` Andrew Morton
  2001-01-14 14:38     ` Gregory Maxwell
  1 sibling, 1 reply; 55+ messages in thread
From: Andrew Morton @ 2001-01-14 11:35 UTC (permalink / raw)
  To: lkml, lad

Andrew Morton wrote:
> 
> A patch against kernel 2.4.0 final which provides low-latency
> scheduling is at
> 
>         http://www.uow.edu.au/~andrewm/linux/schedlat.html#downloads
> 

This has been updated for 2.4.1-pre3

- Fixed latency problems with some /proc files and forking
  when many files are open.

- Fixed the tcp-minisocks thing.

- The patch now works properly on SMP.

  If a wakeup is directed to a SCHED_FIFO or SCHED_RR
  task then we request a reschedule on *all* non-idle
  CPUs.

  This causes any CPU which is holding a long-lived
  spinlock to bale out, allowing the target CPU to
  acquire the spinlock and then reschedule normally.

  Bit of a hack, but it works very well and there
  is no impact on the system unless there are
  non-SCHED_OTHER tasks running.

  Five lines of code :)

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: low-latency scheduling patch for 2.4.0
  2001-01-14 11:35   ` Andrew Morton
@ 2001-01-14 14:38     ` Gregory Maxwell
  2001-01-15 10:59       ` Andrew Morton
  0 siblings, 1 reply; 55+ messages in thread
From: Gregory Maxwell @ 2001-01-14 14:38 UTC (permalink / raw)
  To: Andrew Morton; +Cc: lkml, lad

On Sun, Jan 14, 2001 at 10:35:51PM +1100, Andrew Morton wrote:
[snip]
> - The patch now works properly on SMP.
[snip]

Any benchmark results on SMP yet?

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-11 21:31       ` David S. Miller
@ 2001-01-15  5:27         ` george anzinger
  0 siblings, 0 replies; 55+ messages in thread
From: george anzinger @ 2001-01-15  5:27 UTC (permalink / raw)
  To: David S. Miller; +Cc: nigel, andrewm, linux-kernel, linux-audio-dev

"David S. Miller" wrote:
> 
> Nigel Gamble writes:
>  > That's why MontaVista's kernel preemption patch uses sleeping mutex
>  > locks instead of spinlocks for the long held locks.
> 
> Anyone who uses sleeping mutex locks is asking for trouble.  Priority
> inversion is an issue I dearly hope we never have to deal with in the
> Linux kernel, and sleeping SMP mutex locks lead to exactly this kind
> of problem.
> 
Exactly why we are going to use priority-inheritance mutexes.  This handles
the inversion nicely.

George

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: low-latency scheduling patch for 2.4.0
  2001-01-14 14:38     ` Gregory Maxwell
@ 2001-01-15 10:59       ` Andrew Morton
  0 siblings, 0 replies; 55+ messages in thread
From: Andrew Morton @ 2001-01-15 10:59 UTC (permalink / raw)
  To: Gregory Maxwell; +Cc: lkml, lad

Gregory Maxwell wrote:
> 
> On Sun, Jan 14, 2001 at 10:35:51PM +1100, Andrew Morton wrote:
> [snip]
> > - The patch now works properly on SMP.
> [snip]
> 
> Any benchmark results on SMP yet?

SMP and UP are much the same.

Workload is `make -j3 bzImage', the measured time
is from entry to an ISR to execution of the userspace
process.  The histogram has 10 microsecond resolution.

SMP
===
0:165601 10:17192 20:12769 30:33220 40:59318
50:60814 60:42915 70:20949 80:8124 90:2590
100:944 110:397 120:211 130:96 140:51
150:41 160:36 170:24 180:21 190:15
200:14 210:12 220:13 230:7 240:11
250:6 260:3 270:4 280:10 290:6
300:5 310:3 320:3 330:6 340:1
350:1 370:1 400:1 620:1

Total samples: 425436

So on SMP, latency is < 10 microseconds (165601/425436 = 39%) of
the time.

UP
==

0:52735 10:33480 20:101199 30:200301 40:135470
50:9199 60:4356 70:1531 80:770 90:396
100:288 110:178 120:102 130:100 140:63
150:45 160:61 170:40 180:30 190:23
200:29 210:12 220:26 230:10 240:8
250:6 260:2 300:1

Total samples: 540461


In other words:

SMP
===

usecs

0  -100:   99.54%
100-200:    0.43%
200-300:    0.02%
300-400:    0.0049%
620:        0.00023%

UP
==

0  -100:   99.81%
100-200:    0.17%
200-300:    0.017%

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-13  1:01             ` Andrew Morton
@ 2001-01-15 19:46               ` Tim Wright
  0 siblings, 0 replies; 55+ messages in thread
From: Tim Wright @ 2001-01-15 19:46 UTC (permalink / raw)
  To: Andrew Morton; +Cc: timw, nigel, David S. Miller, linux-kernel, linux-audio-dev

On Sat, Jan 13, 2001 at 12:01:04PM +1100, Andrew Morton wrote:
> Tim Wright wrote:
[...]
> > p_lock(lock);
> > retry:
> > ...
> > if (condition where we need to sleep) {
> >     p_sema_v_lock(sema, lock);
> >     /* we got woken up */
> >     p_lock(lock);
> >     goto retry;
> > }
> > ...
> 
> That's an interesting concept.  How could this actually be used
> to protect a particular resource?  Do all users of that
> resource have to claim both the lock and the semaphore before
> they may access it?
> 

Ahh, I thought I might have been a tad terse in my explanation. No, the
idea here is that the spinlock guards the access to the data structure we're
concerned about. The sort of code I was thinking about would be where we need
to allocate a data structure. We attempt to grab it from the freelist, and if
successful, then everything is fine. Otherwise, we need to sleep waiting for
some resources to be freed up. So we atomically drop the lock and sleep on
the allocation semaphore. The freeing-up path is also protected by the same
lock, and would do something like 'if (there are sleepers) wake(sleepers)'.
This wakes up the sleeper who grabs the spinlock and retries the alloc. The
result is no races, but we don't spin or hold the lock for a long time.

It doesn't have to be an allocation. The same idea works for e.g. protecting
access to "buffer cache" (not necessarily Linux) data, and then atomically
releasing the lock and sleeping waiting for an I/O to happen.

> 
> There are a number of locks (such as pagecache_lock) which in the
> great majority of cases are held for a short period, but are 
> occasionally held for a long period.  So these locks are not
> a performance problem, they are not a scalability problem but
> they *are* a worst-case-latency problem.
> 

Understood. Whether the above metaphor works depends on whether or not the
"holding for a long time" case fits this pattern i.e. at this stage,
I'm not sufficiently familiar with the Linux VM code. I'm in the process
of rectifying that problem :-)

Regards,

Tim

-- 
Tim Wright - timw@splhi.com or timw@aracnet.com or twright@us.ibm.com
IBM Linux Technology Center, Beaverton, Oregon
"Nobody ever said I was charming, they said "Rimmer, you're a git!"" RD VI

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-12 13:30         ` Andrew Morton
                             ` (2 preceding siblings ...)
  2001-01-12 23:08           ` george anzinger
@ 2001-01-21  0:05           ` yodaiken
  2001-01-22  0:54             ` Nigel Gamble
  3 siblings, 1 reply; 55+ messages in thread
From: yodaiken @ 2001-01-21  0:05 UTC (permalink / raw)
  To: Andrew Morton; +Cc: nigel, David S. Miller, linux-kernel, linux-audio-dev


Let me just point out that Nigel (I think) has previously stated that
the purpose of this approach is to bring the stunning success of 
IRIX style "RT" to Linux. Since some of us believe that IRIX is a virtual
handbook of OS errors, it really comes down to a design style. I think
that simplicity and "does the main job well" wins every time over 
"really cool algorithms" and "does everything badly". Others 
disagree.


On Sat, Jan 13, 2001 at 12:30:46AM +1100, Andrew Morton wrote:
> Nigel Gamble wrote:
> > 
> > Spinlocks should not be held for lots of time.  This adversely affects
> > SMP scalability as well as latency.  That's why MontaVista's kernel
> > preemption patch uses sleeping mutex locks instead of spinlocks for the
> > long held locks.
> 
> Nigel,
> 
> what worries me about this is the Apache-flock-serialisation saga.
> 
> Back in -test8, kumon@fujitsu demonstrated that changing this:
> 
> 	lock_kernel()
> 	down(sem)
> 	<stuff>
> 	up(sem)
> 	unlock_kernel()
> 
> into this:
> 
> 	down(sem)
> 	<stuff>
> 	up(sem)
> 
> had the effect of *decreasing* Apache's maximum connection rate
> on an 8-way from ~5,000 connections/sec to ~2,000 conn/sec.
> 
> That's downright scary.
> 
> Obviously, <stuff> was very quick, and the CPUs were passing through
> this section at a great rate.
> 
> How can we be sure that converting spinlocks to semaphores
> won't do the same thing?  Perhaps for workloads which we
> aren't testing?
> 
> So this needs to be done with caution.
> 
> As davem points out, now we know where the problems are
> occurring, a good next step is to redesign some of those
> parts of the VM and buffercache.  I don't think this will
> be too hard, but they have to *want* to change :)
> 
> Some of those algorithms are approximately O(N^2), for huge
> values of N.
> 
> 

-- 
---------------------------------------------------------
Victor Yodaiken 
Finite State Machine Labs: The RTLinux Company.
 www.fsmlabs.com  www.rtlinux.com


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-13  2:45 ` [linux-audio-dev] " Jay Ts
@ 2001-01-21  0:10   ` yodaiken
  2001-01-26  9:14     ` Pavel Machek
  0 siblings, 1 reply; 55+ messages in thread
From: yodaiken @ 2001-01-21  0:10 UTC (permalink / raw)
  To: Jay Ts; +Cc: Andrew Morton, lkml, lad, xpert, mcrichto

On Fri, Jan 12, 2001 at 07:45:43PM -0700, Jay Ts wrote:
> Andrew Morton wrote:
> > 
> > Jay Ts wrote:
> > > 
> > > Now about the only thing left is to get it included
> > > in the standard kernel.  Do you think Linus Torvalds is more likely
> > > to accept these patches than Ingo's?  I sure hope this one works out.
> > 
> > We (or "he") need to decide up-front that Linux is to become
> > a low latency kernel. Then we need to decide the best way of
> > doing that.
> > 
> > Making the kernel internally preemptive is probably the best way of
> > doing this.  But it's a *big* task
> 
> Ouch.  Yes, I agree that the ideal path is for Linus and the other
> kernel developers and ... well, just about everyone ... is to create
> a long-range strategy and 'roadmap' that includes support for low-latency.
> 
> And making the kernel preemptive might be the best way to do that
> (and I'm saying "might"...).

Keep in mind that Ken Thompson & Dennis Ritchie did not decide on a 
non-preemptive strategy for UNIX because they were unaware of such 
methods or because they were stupid.  And when Rob Pike designed a new
"unix", Plan 9, note that it too has a non-preemptive kernel, and the
core Linux designers have rejected preemptive kernels as well.  Now it is
certainly possible that things have changed and/or all these folks are
just plain wrong.  But I wouldn't bet too much on it.

-- 
---------------------------------------------------------
Victor Yodaiken 
Finite State Machine Labs: The RTLinux Company.
 www.fsmlabs.com  www.rtlinux.com


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-21  0:05           ` yodaiken
@ 2001-01-22  0:54             ` Nigel Gamble
  2001-01-22  1:49               ` Paul Barton-Davis
  0 siblings, 1 reply; 55+ messages in thread
From: Nigel Gamble @ 2001-01-22  0:54 UTC (permalink / raw)
  To: yodaiken; +Cc: Andrew Morton, David S. Miller, linux-kernel, linux-audio-dev

On Sat, 20 Jan 2001 yodaiken@fsmlabs.com wrote:
> Let me just point out that Nigel (I think) has previously stated that
> the purpose of this approach is to bring the stunning success of 
> IRIX style "RT" to Linux. Since some of us believe that IRIX is a virtual
> handbook of OS errors, it really comes down to a design style. I think
> that simplicity and "does the main job well" wins every time over 
> "really cool algorithms" and "does everything badly". Others 
> disagree.

Let me just point out that Victor has his own commercial axe to grind in
his continual bad-mouthing of IRIX, the internals of which he knows
nothing about.

Nigel Gamble                                    nigel@nrg.org
Mountain View, CA, USA.                         http://www.nrg.org/


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-22  0:54             ` Nigel Gamble
@ 2001-01-22  1:49               ` Paul Barton-Davis
  2001-01-22  2:21                 ` Nigel Gamble
  0 siblings, 1 reply; 55+ messages in thread
From: Paul Barton-Davis @ 2001-01-22  1:49 UTC (permalink / raw)
  To: nigel
  Cc: yodaiken, Andrew Morton, David S. Miller, linux-kernel, linux-audio-dev

>Let me just point out that Victor has his own commercial axe to grind in
>his continual bad-mouthing of IRIX, the internals of which he knows
>nothing about.

1) do you actually disagree with victor ?

2) victor is not the only person who has expressed this opinion. the
   most prolific irix critic seems to be larry mcvoy, who certainly
   claims to know quite a bit about the internals.

this discussion has the hallmarks of turning into a personal
bash-fest, which is really pointless. what is *not* pointless is a
considered discussion about the merits of the IRIX "RT" approach over
possible approaches that Linux might take which are dissimilar to the
IRIX one. on the other hand, as Victor said, a large part of that
discussion ultimately comes down to a design style rather than hard
factual or logical reasoning.

Paul Davis <pbd@op.net>                                 Bala Cynwyd, PA, USA
Linux Audio Systems                                             610-667-4807
----------------------------------------------------------------------------
hybrid rather than pure; compromising rather than clean;
distorted rather than straightforward; ambiguous rather than
articulated; both-and rather than either-or; the difficult
unity of inclusion rather than the easy unity of exclusion.   Robert Venturi
----------------------------------------------------------------------------


   

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-22  1:49               ` Paul Barton-Davis
@ 2001-01-22  2:21                 ` Nigel Gamble
  2001-01-22  3:31                   ` J Sloan
  2001-01-28 13:14                   ` yodaiken
  0 siblings, 2 replies; 55+ messages in thread
From: Nigel Gamble @ 2001-01-22  2:21 UTC (permalink / raw)
  To: Paul Barton-Davis
  Cc: yodaiken, Andrew Morton, David S. Miller, linux-kernel, linux-audio-dev

On Sun, 21 Jan 2001, Paul Barton-Davis wrote:
> >Let me just point out that Victor has his own commercial axe to grind in
> >his continual bad-mouthing of IRIX, the internals of which he knows
> >nothing about.
> 
> 1) do you actually disagree with victor ?

Yes, I most emphatically do disagree with Victor!  IRIX is used for
mission-critical audio applications - recording as well playback - and
other low-latency applications.  The same OS scales to large numbers of
CPUs.  And it has the best desktop interactive response of any OS I've
used.  I will be very happy when Linux is as good in all these areas,
and I'm working hard to achieve this goal with negligible impact on the
current Linux "sweet-spot" applications such as web serving.

> this discussion has the hallmarks of turning into a personal
> bash-fest, which is really pointless. what is *not* pointless is a
> considered discussion about the merits of the IRIX "RT" approach over
> possible approaches that Linux might take which are dissimilar to the
> IRIX one. on the other hand, as Victor said, a large part of that
> discussion ultimately comes down to a design style rather than hard
> factual or logical reasoning.

I agree.  I'm not wedded to any particular design - I just want a
low-latency Linux by whatever is the best way of achieving that.
However, I am hearing Victor say that we shouldn't try to make Linux
itself low-latency, we should just use his so-called "RTLinux" environment
for low-latency tasks.  RTLinux is not Linux, it is a separate
environment with a separate, limited set of APIs.  You can't run XMMS,
or any other existing Linux audio app in RTLinux.  I want a low-latency
Linux, not just another RTOS living parasitically alongside Linux.

Nigel Gamble                                    nigel@nrg.org
Mountain View, CA, USA.                         http://www.nrg.org/


* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-22  2:21                 ` Nigel Gamble
@ 2001-01-22  3:31                   ` J Sloan
  2001-01-28 13:14                   ` yodaiken
  1 sibling, 0 replies; 55+ messages in thread
From: J Sloan @ 2001-01-22  3:31 UTC (permalink / raw)
  To: nigel
  Cc: Paul Barton-Davis, yodaiken, Andrew Morton, David S. Miller,
	linux-kernel, linux-audio-dev

Nigel Gamble wrote:

> Yes, I most emphatically do disagree with Victor!  IRIX is used for
> mission-critical audio applications - recording as well playback - and
> other low-latency applications.  The same OS scales to large numbers of
> CPUs.  And it has the best desktop interactive response of any OS I've
> used.  I will be very happy when Linux is as good in all these areas,
> and I'm working hard to achieve this goal with negligible impact on the
> current Linux "sweet-spot" applications such as web serving.

I have to agree - when I worked at the University of California,
a number of us had SGI Indys in our offices. The desktop was
lightning fast, and the graphics were awesome. This is no news
to anybody, since SGI is known for graphics. The big surprise,
however, came when we were trying to find the best nfs server
platform, and benchmarked the SGI just for fun - as it turns out,
a little Indy workstation blew away all other platforms, including
some rather large expensive SPARC boxes, as an nfs server.

So Irix clearly showed the best of both worlds - great latency
and great throughput.

I guess what I'm saying is, there are a lot of proven concepts
in Irix, which work well in real life situations - don't throw out the
baby with the bath water -

jjs





* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-21  0:10   ` yodaiken
@ 2001-01-26  9:14     ` Pavel Machek
  0 siblings, 0 replies; 55+ messages in thread
From: Pavel Machek @ 2001-01-26  9:14 UTC (permalink / raw)
  To: yodaiken, Jay Ts; +Cc: Andrew Morton, lkml, lad, xpert, mcrichto

Hi!

> > And making the kernel preemptive might be the best way to do that
> > (and I'm saying "might"...).
> 
> Keep in mind that Ken Thompson & Dennis Ritchie did not decide on a 
> non-preemptive strategy for UNIX because they were unaware of such 
> methods or because they were stupid. And when Rob Pike redesigned a new
> "unix" Plan9  note there is no-preemptive kernel, and the core Linux
> designers have rejected preemptive kernels too. Now it is certainly possible
> that things have change and/or all these folks are just plain wrong. But
> I wouldn't bet too much on it.

Wrong. It was Linus who suggested how to do a preemptive kernel nicely. I
guess he counts as a core Linux designer ;-).
								Pavel
-- 
I'm pavel@ucw.cz. "In my country we have almost anarchy and I don't care."
Panos Katsaloulis describing me w.r.t. patents at discuss@linmodems.org

* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-22  2:21                 ` Nigel Gamble
  2001-01-22  3:31                   ` J Sloan
@ 2001-01-28 13:14                   ` yodaiken
  2001-01-28 14:07                     ` Bill Huey
  2001-01-28 14:19                     ` Andrew Morton
  1 sibling, 2 replies; 55+ messages in thread
From: yodaiken @ 2001-01-28 13:14 UTC (permalink / raw)
  To: Nigel Gamble
  Cc: Paul Barton-Davis, yodaiken, Andrew Morton, David S. Miller,
	linux-kernel, linux-audio-dev

On Sun, Jan 21, 2001 at 06:21:05PM -0800, Nigel Gamble wrote:
> Yes, I most emphatically do disagree with Victor!  IRIX is used for
> mission-critical audio applications - recording as well playback - and
> other low-latency applications.  The same OS scales to large numbers of
> CPUs.  And it has the best desktop interactive response of any OS I've

And it has bloat, it's famously buggy, it is impossible to maintain, ...


> used.  I will be very happy when Linux is as good in all these areas,
> and I'm working hard to achieve this goal with negligible impact on the
> current Linux "sweet-spot" applications such as web serving.

As stated previously: I think this is a proven improbability and I have
not seen any code or designs from you to show otherwise.

> I agree.  I'm not wedded to any particular design - I just want a
> low-latency Linux by whatever is the best way of achieving that.
> However, I am hearing Victor say that we shouldn't try to make Linux
> itself low-latency, we should just use his so-called "RTLinux" environment

I suggest that you get your hearing checked. I'm fully in favor of sensible
low latency Linux. I believe however that low latency  in Linux will
	A. be "soft realtime", close to deadline most of the time.
	B. millisecond level on present hardware
	C. Best implemented by careful algorithm design instead of 
	"stuff the kernel with resched points" and hope for the best.

RTLinux's main focus is hard realtime: a few microseconds here and there
are critical for us and for the applications we target. For consumer
audio, this is overkill and vanilla Linux should be able to provide
services reasonably well. But ...

> for low-latency tasks.  RTLinux is not Linux, it is a separate
> environment with a separate, limited set of APIs.  You can't run XMMS,
> or any other existing Linux audio app in RTLinux.  I want a low-latency
> Linux, not just another RTOS living parasitically alongside Linux.

Nice marketing line, but it is not working code.


-- 
---------------------------------------------------------
Victor Yodaiken 
Finite State Machine Labs: The RTLinux Company.
 www.fsmlabs.com  www.rtlinux.com


* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-28 13:14                   ` yodaiken
@ 2001-01-28 14:07                     ` Bill Huey
  2001-01-28 14:26                       ` Andrew Morton
  2001-01-29  5:02                       ` yodaiken
  2001-01-28 14:19                     ` Andrew Morton
  1 sibling, 2 replies; 55+ messages in thread
From: Bill Huey @ 2001-01-28 14:07 UTC (permalink / raw)
  To: yodaiken
  Cc: Nigel Gamble, Paul Barton-Davis, Andrew Morton, David S. Miller,
	linux-kernel, linux-audio-dev


On Sun, Jan 28, 2001 at 06:14:28AM -0700, yodaiken@fsmlabs.com wrote:
> > Yes, I most emphatically do disagree with Victor!  IRIX is used for
> > mission-critical audio applications - recording as well playback - and
> And it has bloat, it's famously buggy, it is impossible to maintain, ...

However, that doesn't fault its concepts and original goals. This
kind of stuff is often more an implementation and bad-abstraction issue than
one of faulty design and end goals.

> > used.  I will be very happy when Linux is as good in all these areas,
> > and I'm working hard to achieve this goal with negligible impact on the
> > current Linux "sweet-spot" applications such as web serving.
> As stated previously: I think this is a proven improbability and I have
> not seen any code or designs from you to show otherwise.

Andrew Morton's patch uses < 10 rescheduling points (maybe fewer, from memory),
and in controlled, focused and logical places. It's certainly not an unmaintainable
mammoth like previous attempts, since Riel (many thanks) has massively
cleaned up the VM layer by using more reasonable algorithms, etc...

> I suggest that you get your hearing checked. I'm fully in favor of sensible
> low latency Linux. I believe however that low latency  in Linux will
> 	A. be "soft realtime", close to deadline most of the time.

Which is very good and maintainable with Andrew's patches.

> 	B. millisecond level on present hardware

Also very good and usable for many applications, short of writing dedicated
code on specialized DSP cards.

> 	C. Best implemented by careful algorithm design instead of 
> 	"stuff the kernel with resched points" and hope for the best.

Algorithms? Which ones? VM layer, scheduler?  It seems there's enough
there in the Linux kernel to start doing interesting stuff, assuming that
there's a large enough media crowd willing to do the userspace programming.

> > for low-latency tasks.  RTLinux is not Linux, it is a separate
> > environment with a separate, limited set of APIs.  You can't run XMMS,
> > or any other existing Linux audio app in RTLinux.  I want a low-latency
> > Linux, not just another RTOS living parasitically alongside Linux.
 
> Nice marketing line, but it is not working code.

Meaning what? How does that response answer his criticism?

bill


* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-28 13:14                   ` yodaiken
  2001-01-28 14:07                     ` Bill Huey
@ 2001-01-28 14:19                     ` Andrew Morton
  2001-01-28 16:17                       ` Joe deBlaquiere
  2001-01-30 15:08                       ` David Woodhouse
  1 sibling, 2 replies; 55+ messages in thread
From: Andrew Morton @ 2001-01-28 14:19 UTC (permalink / raw)
  To: yodaiken; +Cc: Nigel Gamble, linux-kernel, linux-audio-dev

yodaiken@fsmlabs.com wrote:
> 
> ...
> 
> I suggest that you get your hearing checked. I'm fully in favor of sensible
> low latency Linux. I believe however that low latency  in Linux will
>         A. be "soft realtime", close to deadline most of the time.
>         B. millisecond level on present hardware
>         C. Best implemented by careful algorithm design instead of
>         "stuff the kernel with resched points" and hope for the best.

Point C would be nice, but I don't believe it will happen because of

a) The sheer number of problem areas
b) The complexity of fixing them this way and
c) The low level of motivation to make Linux perform well in
   this area.

Main problem areas are the icache, dcache, pagecache, buffer cache,
slab manager, filemap and filesystems.  That's a lot of cantankerous
cats to herd.

In many cases it just doesn't make sense.  If we need to unmap 10,000
pages, well, we need to unmap 10,000 pages.  The only algorithmic
redesign we can do here is to free them in 500 page blobs.  That's
silly because we're unbatching work which can be usefully batched.
You're much better off unbatching the work *on demand* rather than
by prior decision.  And the best way of doing that is, yup, by
peeking at current->need_resched, or by preempting the kernel.

There has been surprisingly little discussion here about the
desirability of a preemptible kernel.

> 
> Nice marketing line, but it is not working code.
> 

Guys, please don't.


* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-28 14:07                     ` Bill Huey
@ 2001-01-28 14:26                       ` Andrew Morton
  2001-01-29  5:02                       ` yodaiken
  1 sibling, 0 replies; 55+ messages in thread
From: Andrew Morton @ 2001-01-28 14:26 UTC (permalink / raw)
  To: Bill Huey; +Cc: yodaiken, Nigel Gamble, linux-kernel, linux-audio-dev

Bill Huey wrote:
> 
> Andrew Morton's patch uses < 10 rescheduling points (maybe fewer, from memory)

err... It grew.  More like 50 now that reiserfs is in there.  That's counting
real instances - it's not counting ones which are expanded multiple times
as "1".

It could be brought down to 20-25 with good results.  It seems to have
a 1/x distribution - double the reschedule count, halve the latency.
We're currently doing 300-400 usecs.

I think a 1.5-millisecond @ 500MHz kernel would be a good, maintainable
solution and a sensible compromise.


* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-28 14:19                     ` Andrew Morton
@ 2001-01-28 16:17                       ` Joe deBlaquiere
  2001-01-29 15:44                         ` yodaiken
                                           ` (2 more replies)
  2001-01-30 15:08                       ` David Woodhouse
  1 sibling, 3 replies; 55+ messages in thread
From: Joe deBlaquiere @ 2001-01-28 16:17 UTC (permalink / raw)
  To: Andrew Morton; +Cc: yodaiken, Nigel Gamble, linux-kernel, linux-audio-dev

Andrew Morton wrote:

> There has been surprisingly little discussion here about the
> desirability of a preemptible kernel.
> 

And I think that is a very interesting topic... (certainly more 
interesting than hotmail's firewalling policy ;o)

Alright, so suppose I dream up an application which I think really 
really needs preemption (linux heart pacemaker project? ;o) I'm just not 
convinced that linux would ever be the correct codebase to start with. 
The fundamental design of every driver in the system presumes that there 
is no preemption.

A recent example I came across is in the MTD code which invokes the 
erase algorithm for CFI memory. This algorithm spews a command sequence 
to the flash chips followed by a list of sectors to erase. Following 
each sector address, the chip will wait 50usec for another address, 
after which timeout it begins the erase cycle. With a RTLinux-style 
approach the driver is eventually going to fail to issue the command in 
time. There isn't any logic to detect and correct the preemption case, 
so it just gets confused and thinks the erase failed. Ergo, RTLinux and 
MTD are mutually exclusive. (I should probably note that I do not intend 
this as an indictment of RTLinux or MTD, but just an example of why 
preemption breaks the Linux driver model).

So what is the solution in the preemption case? Should we re-write every 
driver to handle the preemption? Do we need a cli_yes_i_mean_it() for 
the cases where disabling interrupts is _absolutely_ required? Do we 
push drivers like MTD down into preemptable-Linux? Do we push all 
drivers down?

In the meantime, fixing the few places where the kernel spends an 
extended period of time performing a task makes sense to me. If you're 
going to be busy for a while it is 'courteous' to allow the scheduler a 
chance to give some time to other threads. Of course it's hard to know 
when to draw the line.

So now I am starting to wonder about what needs to be profiled. Is there 
a mechanism in place now to measure the time spent with interrupts off, 
for instance? I know this has to have been quantified to some extent, right?

-- 
Joe deBlaquiere
Red Hat, Inc.
307 Wynn Drive
Huntsville AL, 35805
voice : (256)-704-9200
fax   : (256)-837-3839


* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-28 14:07                     ` Bill Huey
  2001-01-28 14:26                       ` Andrew Morton
@ 2001-01-29  5:02                       ` yodaiken
  1 sibling, 0 replies; 55+ messages in thread
From: yodaiken @ 2001-01-29  5:02 UTC (permalink / raw)
  To: Bill Huey
  Cc: yodaiken, Nigel Gamble, Paul Barton-Davis, Andrew Morton,
	David S. Miller, linux-kernel, linux-audio-dev

On Sun, Jan 28, 2001 at 06:07:04AM -0800, Bill Huey wrote:
> 
> On Sun, Jan 28, 2001 at 06:14:28AM -0700, yodaiken@fsmlabs.com wrote:
> > > Yes, I most emphatically do disagree with Victor!  IRIX is used for
> > > mission-critical audio applications - recording as well playback - and
> > And it has bloat, it's famously buggy, it is impossible to maintain, ...
> 
> However, that doesn't fault its concepts and original goals. This
> kind of stuff is often more an implementation and bad-abstraction issue than
> one of faulty design and end goals.


That's the core of the disagreement. I think SGI had some really good
engineers who have worked very hard - but that hard work and good
programming can never compensate for bad design. I think Linux works so
well because it's been designed well -- and good design often means
refusing to go down a path that seems invariably to lead to failure.
HP-UX, IRIX, Masscomp ... -- all these guys have tried this "concept"
and all have had some success, but at a very high price.
 

> 
> > > used.  I will be very happy when Linux is as good in all these areas,
> > > and I'm working hard to achieve this goal with negligible impact on the
> > > current Linux "sweet-spot" applications such as web serving.
> > As stated previously: I think this is a proven improbability and I have
> > not seen any code or designs from you to show otherwise.
> 
> Andrew Morton's patch uses < 10 rescheduling points (maybe fewer, from memory),
> and in controlled, focused and logical places. It's certainly not an unmaintainable
> mammoth like previous attempts, since Riel (many thanks) has massively
> cleaned up the VM layer by using more reasonable algorithms, etc...
> 

Andrew's patch has grown, but sure, I think he is doing good work.

> > I suggest that you get your hearing checked. I'm fully in favor of sensible
> > low latency Linux. I believe however that low latency  in Linux will
> > 	A. be "soft realtime", close to deadline most of the time.
> 
> Which is very good and maintainable with Andrew's patches.
> 
> > 	B. millisecond level on present hardware
> 
> Also very good and usable for many applications, short of writing dedicated
> code on specialized DSP cards.
> 
> > 	C. Best implemented by careful algorithm design instead of 
> > 	"stuff the kernel with resched points" and hope for the best.
> 
> Algorithms? Which ones? VM layer, scheduler?  It seems there's enough
> there in the Linux kernel to start doing interesting stuff, assuming that
> there's a large enough media crowd willing to do the userspace programming.
> 
> > > for low-latency tasks.  RTLinux is not Linux, it is a separate
> > > environment with a separate, limited set of APIs.  You can't run XMMS,
> > > or any other existing Linux audio app in RTLinux.  I want a low-latency
> > > Linux, not just another RTOS living parasitically alongside Linux.
>  
> > Nice marketing line, but it is not working code.
> 
> Meaning what? How does that response answer his criticism?
> 

RTLinux is not intended to be another Linux - one Linux is enough
(and one IRIX is too many!). What 
RTLinux does is, essentially, add a special realtime process to
Linux and let applications in this "process" be written as threads
and signal handlers. The environment is specifically made to be
"restrictive" and programmers are encouraged to put non-realtime 
components in Linux and only put timing critical stuff in the RT
process. So complaining that the RTLinux environment is not 
the Linux environment is rather silly and akin to complaining that
you can't call shell scripts from driver code. Different environments
serve different purposes. The engineering claim behind RTLinux is that
while many attempts to integrate hard realtime into general purpose
OS's have produced large, buggy, slow, and compromised OS's, the
RTLinux model provides a simple software design that offers reliable
hard realtime without screwing up the internals of the general
purpose OS. In fact, I argue that it's a mistake to integrate
hard realtime and non-realtime code in applications as well. 
So I think that the argument "we should have hard realtime in 
an integrated environment" is like arguing  "we should have fast cornering,
and instant acceleration in a moving van". If you don't like my 
sports car because it does not come with air brakes and a trailer, 
show me how it's possible to accomplish both design goals: but don't
simply "marketize" about how inferior the sports car is because it
is missing moving van features. 



> bill

-- 
---------------------------------------------------------
Victor Yodaiken 
Finite State Machine Labs: The RTLinux Company.
 www.fsmlabs.com  www.rtlinux.com


* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-28 16:17                       ` Joe deBlaquiere
@ 2001-01-29 15:44                         ` yodaiken
  2001-01-29 17:23                           ` Joe deBlaquiere
  2001-01-30 15:08                           ` David Woodhouse
  2001-01-29 22:08                         ` Pavel Machek
  2001-01-29 22:31                         ` Roger Larsson
  2 siblings, 2 replies; 55+ messages in thread
From: yodaiken @ 2001-01-29 15:44 UTC (permalink / raw)
  To: Joe deBlaquiere
  Cc: Andrew Morton, yodaiken, Nigel Gamble, linux-kernel, linux-audio-dev

On Sun, Jan 28, 2001 at 10:17:46AM -0600, Joe deBlaquiere wrote:
> A recent example I came across is in the MTD code which invokes the 
> erase algorithm for CFI memory. This algorithm spews a command sequence 
> to the flash chips followed by a list of sectors to erase. Following 
> each sector address, the chip will wait 50usec for another address, 
> after which timeout it begins the erase cycle. With a RTLinux-style 
> approach the driver is eventually going to fail to issue the command in 
> time. There isn't any logic to detect and correct the preemption case, 
> so it just gets confused and thinks the erase failed. Ergo, RTLinux and 
> MTD are mutually exclusive. (I should probably note that I do not intend 
> this as an indictment of RTLinux or MTD, but just an example of why 
> preemption breaks the Linux driver model).

Only if your RTLinux application is running. In other words, you cannot
commit more than 100% of cpu cycle time and expect to deliver.
I think one of the common difficulties with realtime is that time-shared
systems with virtual memory make people used to elastic resource
limits, while real-time has unforgiving time limits.



> 
> So what is the solution in the preemption case? Should we re-write every 
> driver to handle the preemption? Do we need a cli_yes_i_mean_it() for 
> the cases where disabling interrupts is _absolutely_ required? Do we 
> push drivers like MTD down into preemptable-Linux? Do we push all 
> drivers down?
> In the meantime, fixing the few places where the kernel spends an 
> extended period of time performing a task makes sense to me. If you're 
> going to be busy for a while it is 'courteous' to allow the scheduler a 
> chance to give some time to other threads. Of course it's hard to know 
> when to draw the line.

Or what the tradeoff is, or whether a deadlock will follow. 
step 1: memory manager thread frees a few pages and is courteous
step 2: bunch of thrashing and eventually all processes stall
step 3: go to step 1

alternative
step 1: memory manager thread frees enough pages for some processes to
        advance to termination
step 2: all is well

and make up 100 similar scenarios. And this is why "preemptive"
OS's tend to add such abominations as "priority inheritance" which 
make failure cases rarer and harder to diagnose or complex schedulers
that spend a significant fraction of cpu time trying to figure out
what process should advance or ...


> 
> So now I am starting to wonder about what needs to be profiled. Is there 
> a mechanism in place now to measure the time spent with interrupts off, 
> for instance? I know this has to have been quantified to some extent, right?
> 
> -- 
> Joe deBlaquiere
> Red Hat, Inc.
> 307 Wynn Drive
> Huntsville AL, 35805
> voice : (256)-704-9200
> fax   : (256)-837-3839

-- 
---------------------------------------------------------
Victor Yodaiken 
Finite State Machine Labs: The RTLinux Company.
 www.fsmlabs.com  www.rtlinux.com


* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-29 15:44                         ` yodaiken
@ 2001-01-29 17:23                           ` Joe deBlaquiere
  2001-01-29 17:38                             ` yodaiken
  2001-01-30 15:08                           ` David Woodhouse
  1 sibling, 1 reply; 55+ messages in thread
From: Joe deBlaquiere @ 2001-01-29 17:23 UTC (permalink / raw)
  To: yodaiken; +Cc: Andrew Morton, Nigel Gamble, linux-kernel, linux-audio-dev

Good morning world! :o)

yodaiken@fsmlabs.com wrote:

> Only if your RTLinux application is running. In other words, you cannot
> commit more than 100% of cpu cycle time and expect to deliver.
> I think one of the common difficulties with realtime is that time-shared
> systems with virtual memory make people used to elastic resource
> limits and real-time has unforgiving time limits.
> 
> 

	The problem I see is not one of cpu 'overcommit' but of 'critical 
sections' in the driver code. I really like the preemptive model, but it 
would seem to me that there needs to be a way to cooperate with some of 
the driver code. Allowing the driver a brief exclusive time share does 
certainly have latency implications, but breaking drivers can crash 
things pretty quickly (If you're running a program XIP from flash and a 
RT interrupt leaves the flash in an unreadable state, boom!).
	If the answer is to run the 'critical' driver code as an RT thread, I can 
live with that, but there should be a clear policy and mechanism in 
place to handle it.

> 
> 
> Or what is the tradeoff or whether  a deadlock will follow. 
> step 1: memory manager thread frees a few pages and is courteous
> step 2: bunch of thrashing and eventually all processes stall
> step 3: go to step 1
> 
> alternative
> step 1: memory manager thread frees enough pages for some processes to
>         advance to termination
> step 2: all is well
> 
> and make up 100 similar scenarios. And this is why "preemptive"
> OS's tend to add such abominations as "priority inheritance" which 
> make failure cases rarer and harder to diagnose or complex schedulers
> that spend a significant fraction of cpu time trying to figure out
> what process should advance or ...
> 
> 

It doesn't matter how you do it, the cooperative model eventually starts 
to feel like Windoze3.1 in the extreme case, but even so, it was much 
more multithreaded than DOS. Of course, the Right Thing (TM) is to do 
away with the cooperative model. But even in a preemptive model, there's 
no reason to have code like

while (!done)
{
	done = check_done();
}

when you can have:

while (!done)
{
	yield();
	done = check_done();
}

being preemptive and being cooperative aren't mutually exclusive.

Borrowing your sports car / delivery van metaphor, I'm thinking we could 
come up with something along the lines of a BMW 750iL... room for six 
and still plenty of uumph.

Cheers,

Joe


* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-29 17:23                           ` Joe deBlaquiere
@ 2001-01-29 17:38                             ` yodaiken
  2001-01-29 18:03                               ` Joe deBlaquiere
  0 siblings, 1 reply; 55+ messages in thread
From: yodaiken @ 2001-01-29 17:38 UTC (permalink / raw)
  To: Joe deBlaquiere
  Cc: yodaiken, Andrew Morton, Nigel Gamble, linux-kernel, linux-audio-dev

On Mon, Jan 29, 2001 at 11:23:24AM -0600, Joe deBlaquiere wrote:
> It doesn't matter how you do it, the cooperative model eventually starts 
> to feel like Windoze3.1 in the extreme case, but even so, it was much 
> more multithreaded than DOS. Of course, the Right Thing (TM) is to do 
> away with the cooperative model. But even in a preemptive model, there's 
> no reason to have code like

So we assume: if you have no RT threads running, Linux does the right
thing. If you have RT threads running, you want them to run no matter
what Linux does. This may be over-simple, but it's robust. 
Otherwise you end up with either (A) an impossible to verify mess
of many code components each hoping that the aggregate of the others
will behave correctly or (B) a semi-functioning high overhead centralized
priority system that slows everything down and probably does not
work anyway.

> 
> while (!done)
> {
> 	done = check_done();
> }
> 
> when you can have:
> 
> while (!done)
> {
> 	yield();
> 	done = check_done();
> }

But there is a reason for the first: time. 

while(!read_pci_condition); // usually finishes in 10us

versus

while(!read_pci_condition)yield(); // usually finishes in 1millisecond

can have a nasty impact on system performance. 

      
> 
> being preemptive and being cooperative aren't mutually exclusive.
> 
> Borrowing your sports car / delivery van metaphor, I'm thinking we could 
> come up with something along the lines of a BMW 750iL... room for six 
> and still plenty of oomph.

Not a cheap vehicle.  Linux is pretty snappy on an AMD SC420 or
an M860 with 5 meg of memory. And it scales to a quad Xeon well. Don't
try that with IRIX.
So to push my tired metaphor even further beyond
the bounds: a delivery van that needs jet fuel and uses two lanes
won't do well in the delivery business, no matter how well it
accelerates.



-- 
---------------------------------------------------------
Victor Yodaiken 
Finite State Machine Labs: The RTLinux Company.
 www.fsmlabs.com  www.rtlinux.com


* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-29 17:38                             ` yodaiken
@ 2001-01-29 18:03                               ` Joe deBlaquiere
  0 siblings, 0 replies; 55+ messages in thread
From: Joe deBlaquiere @ 2001-01-29 18:03 UTC (permalink / raw)
  To: yodaiken; +Cc: Andrew Morton, Nigel Gamble, linux-kernel, linux-audio-dev



yodaiken@fsmlabs.com wrote:

> On Mon, Jan 29, 2001 at 11:23:24AM -0600, Joe deBlaquiere wrote:
> 
>> while (!done)
>> {
>> 	done = check_done();
>> }
>> 
>> when you can have:
>> 
>> while (!done)
>> {
>> 	yield();
>> 	done = check_done();
>> }
> 
> 
> But there is a reason for the first: time. 
> 
> while(!read_pci_condition); // usually finishes in 10us
> 
> versus
> 
> while(!read_pci_condition)yield(); // usually finishes in 1millisecond
> 
> can have a nasty impact on system performance. 
> 
>       

So perhaps you check need_resched first, but it all boils down to how 
many 10 us delays you're going to take. If you start taking too many 
you're just gratuitously sucking the V+ line without meaningful results. 
It all really depends on how long you actually expect to wait. There is 
unfortunately no way at present to quantify all these delays and 
tune the system to the performance requirement. You could try something 
like (no, I didn't compile this, so don't expect it to work):

#define EXPECTED_TIMEOUT(x,y)	do { if ((x) > CONFIG_DRIVER_TIMEOUT_MAX) (y); } while (0)

while (!done)
{
	EXPECTED_TIMEOUT(50,yield());
	done = check_done();
}

to tune these delays in or out of the kernel based on a kernel config 
parameter. If you are willing to tolerate a longer delay the driver 
itself can run faster, whereas if you force the system to yield, then 
this favors the other threads. (I'm not advocating we litter the entire 
kernel with gobs of nested macros either, just that there should be some 
way to do it right).

> 
>> being preemptive and being cooperative aren't mutually exclusive.
>> 
>> Borrowing your sports car / delivery van metaphor, I'm thinking we could 
>> come up with something along the lines of a BMW 750iL... room for six 
>> and still plenty of oomph.
> 
> 
> Not a cheap vehicle.  Linux is pretty snappy on an AMD SC420 or
> an M860 with 5 meg of memory. And it scales to a quad Xeon well. Don't
> try that with IRIX.
> So to push my tired metaphor even further beyond
> the bounds: a delivery van that needs jet fuel and uses two lanes
> won't do well in the delivery business, no matter how well it
> accelerates.

So how about we just put a knob on the dash and make it scale from 
delivery van to sports car based on the needs of one particular system? 
I realize there will be boundaries, but that's why you and I have jobs: 
to solve the boundary cases. The more flexible the kernel is, the easier 
it is to adapt to the extremes.

--
Joe


* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-28 16:17                       ` Joe deBlaquiere
  2001-01-29 15:44                         ` yodaiken
@ 2001-01-29 22:08                         ` Pavel Machek
  2001-01-29 22:31                         ` Roger Larsson
  2 siblings, 0 replies; 55+ messages in thread
From: Pavel Machek @ 2001-01-29 22:08 UTC (permalink / raw)
  To: Joe deBlaquiere, Andrew Morton
  Cc: yodaiken, Nigel Gamble, linux-kernel, linux-audio-dev

Hi!

> > There has been surprisingly little discussion here about the
> > desirability of a preemptible kernel.
> > 
> 
> And I think that is a very interesting topic... (certainly more 
> interesting than hotmail's firewalling policy ;o)
> 
> Alright, so suppose I dream up an application which I think really 
> really needs preemption (linux heart pacemaker project? ;o) I'm just not 
> convinced that linux would ever be the correct codebase to start with. 
> The fundamental design of every driver in the system presumes that there 
> is no preemption.

Nonsense. SMP+SMM BIOS is *very* similar to a preemptible kernel.

SMP means that you can run two pieces of kernel code at the same time. With
a preemptible kernel, "same" has rather bigger granularity, but that's a
minor difference. And an SMI BIOS means that the CPU can be stopped for an
arbitrary time to do its housekeeping. (Going suspend-to-disk?)

> A recent example I came across is in the MTD code which invokes the 
> erase algorithm for CFI memory. This algorithm spews a command sequence 
> to the flash chips followed by a list of sectors to erase. Following 
> each sector address, the chip will wait for 50usec for another address, 
> after which timeout it begins the erase cycle. With a RTLinux-style 

With SMM BIOS, this is br0ken.

> So what is the solution in the preemption case? Should we re-write every 
> driver to handle the preemption? Do we need a cli_yes_i_mean_it()
> for 

You can disable SMM interrupts, AFAIK.
								Pavel
-- 
I'm pavel@ucw.cz. "In my country we have almost anarchy and I don't care."
Panos Katsaloulis describing me w.r.t. patents at discuss@linmodems.org

* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-28 16:17                       ` Joe deBlaquiere
  2001-01-29 15:44                         ` yodaiken
  2001-01-29 22:08                         ` Pavel Machek
@ 2001-01-29 22:31                         ` Roger Larsson
  2001-01-29 23:46                           ` Joe deBlaquiere
  2 siblings, 1 reply; 55+ messages in thread
From: Roger Larsson @ 2001-01-29 22:31 UTC (permalink / raw)
  To: Joe deBlaquiere, Andrew Morton
  Cc: yodaiken, Nigel Gamble, linux-kernel, linux-audio-dev

On Sunday 28 January 2001 17:17, Joe deBlaquiere wrote:
> Andrew Morton wrote:
> > There has been surprisingly little discussion here about the
> > desirability of a preemptible kernel.
>
> And I think that is a very interesting topic... (certainly more
> interesting than hotmail's firewalling policy ;o)
>
> Alright, so suppose I dream up an application which I think really
> really needs preemption (linux heart pacemaker project? ;o) I'm just not
> convinced that linux would ever be the correct codebase to start with.
> The fundamental design of every driver in the system presumes that there
> is no preemption.

please, no linux heart pacemaker at sourceforge... :-)

>
> A recent example I came across is in the MTD code which invokes the
> erase algorithm for CFI memory. This algorithm spews a command sequence
> to the flash chips followed by a list of sectors to erase. Following
> each sector address, the chip will wait for 50usec for another address,
> after which timeout it begins the erase cycle. With a RTLinux-style
> approach the driver is eventually going to fail to issue the command in
> time. There isn't any logic to detect and correct the preemption case,
> so it just gets confused and thinks the erase failed. Ergo, RTLinux and
> MTD are mutually exclusive. (I should probably note that I do not intend
> this as an indictment of RTLinux or MTD, but just an example of why
> preemption breaks the Linux driver model).

Can't that happen in the 2.4.0 kernel too?
If interrupts are not disabled during the command queuing, any interrupt 
(with more than 50 us execution time) might disturb the setup.
That part of the code should either:
a) accept partial success and continue (can it check for success as it goes?), or
b) disable interrupts, in which case shouldn't it use
     spin_lock_irq(...) instead of spin_lock_bh(...)?

Where is the code, BTW? (file:line)
Is it the function named do_erase_1_by_16_oneblock?


>
> So what is the solution in the preemption case? Should we re-write every
> driver to handle the preemption? Do we need a cli_yes_i_mean_it() for
> the cases where disabling interrupts is _absolutely_ required? 

The problem will be drivers requiring a maximum execution time for a part
of their code...
Most drivers use spin_lock_irq to get mutual exclusion with their interrupt
handlers - let's keep it that way.

What about introducing a timecritical_lock()?
#define timecritical_lock(maxtime_us)	do { __cli(); } while (0)
	/* local processor only */


Any driver using the timecritical_lock is not 100% RTLinux compatible,
but should be OK in a preemptive kernel where timecritical_locks are 
short compared to the "guarantees".
The parameter maxtime_us is a hint to system integrators - don't use drivers 
with a high maxtime in an RT system.

But it will be extremely hard to make any guarantees...
If the driver needs the PCI bus, it might be locked for a burst transfer...
SMP issues...

> Do we
> push drivers like MTD down into preemptable-Linux? Do we push all
> drivers down?

All drivers should be compatible with preemptive Linux; those that are not 
are unlikely to be compatible with the current Linux either.

> In the meantime, fixing the few places where the kernel spends an
> extended period of time performing a task makes sense to me. If you're
> going to be busy for a while it is 'courteous' to allow the scheduler a
> chance to give some time to other threads. Of course it's hard to know
> when to draw the line.
>
> So now I am starting to wonder about what needs to be profiled. Is there
> a mechanism in place now to measure the time spent with interrupts off,
> for instance? I know this has to have been quantified to some extent,
> right?

-- 
Home page:
  none currently

* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-29 22:31                         ` Roger Larsson
@ 2001-01-29 23:46                           ` Joe deBlaquiere
  0 siblings, 0 replies; 55+ messages in thread
From: Joe deBlaquiere @ 2001-01-29 23:46 UTC (permalink / raw)
  To: Roger Larsson
  Cc: Andrew Morton, yodaiken, Nigel Gamble, linux-kernel, linux-audio-dev



Roger Larsson wrote:

> On Sunday 28 January 2001 17:17, Joe deBlaquiere wrote:
> 
>> Andrew Morton wrote:
>> 
>>> There has been surprisingly little discussion here about the
>>> desirability of a preemptible kernel.
>> 
>> And I think that is a very interesting topic... (certainly more
>> interesting than hotmail's firewalling policy ;o)
>> 
>> Alright, so suppose I dream up an application which I think really
>> really needs preemption (linux heart pacemaker project? ;o) I'm just not
>> convinced that linux would ever be the correct codebase to start with.
>> The fundamental design of every driver in the system presumes that there
>> is no preemption.
> 
> 
> please, no linux heart pacemaker at sourceforge... :-)
> 
> 
>> A recent example I came across is in the MTD code which invokes the
>> erase algorithm for CFI memory. This algorithm spews a command sequence
>> to the flash chips followed by a list of sectors to erase. Following
>> each sector address, the chip will wait for 50usec for another address,
>> after which timeout it begins the erase cycle. With a RTLinux-style
>> approach the driver is eventually going to fail to issue the command in
>> time. There isn't any logic to detect and correct the preemption case,
>> so it just gets confused and thinks the erase failed. Ergo, RTLinux and
>> MTD are mutually exclusive. (I should probably note that I do not intend
>> this as an indictment of RTLinux or MTD, but just an example of why
>> preemption breaks the Linux driver model).
> 
> 
> Can't that happen in the 2.4.0 kernel too?
> If interrupts are not disabled during the command queuing, any interrupt 
> (with more than 50 us execution time) might disturb the setup.
> That part of the code should either:
> a) accept partial success and continue (can it check for success as it goes?), or
> b) disable interrupts, in which case shouldn't it use
>      spin_lock_irq(...) instead of spin_lock_bh(...)?
> 
> Where is the code BTW? (file:line)
> is it the functions named do_erase_1_by_16_oneblock?
> 
> 

That's basically the one, just slightly modified to loop through a list 
of addresses instead of a single address. It's a negligible performance 
gain, but it simplified some other code I was testing.

The point is that some small set of operations may need to be atomic 
and/or executed within a critical time period. Allowing these portions of 
code to be preempted opens the door to unexpected failure cases.

 
A 'perfect' driver would of course still be able to recover (or at least 
guarantee it wouldn't crash). Sometimes the timing issues aren't known 
or explicitly stated, so the driver has 'never failed that way before' 
because it hasn't been tested that way.

> What about introducing a timecritical_lock()?
> #define timecritical_lock(maxtime_us)	do { __cli(); } while (0)
> 	/* local processor only */
> 
> 
> Any driver using the timecritical_lock is not 100% RTLinux compatible,
> but should be OK in a preemptive kernel where timecritical_locks are 
> short compared to the "guarantees".
> The parameter maxtime_us is a hint to system integrators - don't use drivers 
> with a high maxtime in an RT system.
> 
> But it will be extremely hard to make any guarantees...
> If the driver needs the PCI bus, it might be locked for a burst transfer...
> SMP issues...
> 
> 
>> Do we
>> push drivers like MTD down into preemptable-Linux? Do we push all
>> drivers down?
> 
> 
> All drivers should be compatible with preemptive Linux; those that are not 
> are unlikely to be compatible with the current Linux either.
> 
> 

Some kind of timecritical_lock() is a possibility. It would seem that 
such a construct could coexist with SMP and RTLinux. I guess I'm just a 
pessimist, in that I expect that not having an explicit lock on 
time-critical sections will eventually mean that something invalidates 
the timing.

-- 
Joe


* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-29 15:44                         ` yodaiken
  2001-01-29 17:23                           ` Joe deBlaquiere
@ 2001-01-30 15:08                           ` David Woodhouse
  2001-01-30 15:44                             ` Joe deBlaquiere
                                               ` (2 more replies)
  1 sibling, 3 replies; 55+ messages in thread
From: David Woodhouse @ 2001-01-30 15:08 UTC (permalink / raw)
  To: Joe deBlaquiere
  Cc: yodaiken, Andrew Morton, Nigel Gamble, linux-kernel, linux-audio-dev


jadb@redhat.com said:
> (If you're running a program XIP from flash and an RT interrupt leaves
> the flash in an unreadable state, boom!).

Bad example. I don't expect Linux to support writable XIP any time in the 
near future. The only thing I envisage myself doing to help people who want 
writable XIP is to take away their crackpipe.

Until we get dual-port flash, of course.

The thing that really does concern me about the flash driver code is the
fact that it often wants to wait for about 100µs. On machines with
HZ==100, that sucks if you use udelay() and it sucks if you schedule(). So
we end up dropping the spinlock (so at least bottom halves can run again)
and calling:

static inline void cfi_udelay(int us)
{
        if (current->need_resched)
                schedule();
        else
                udelay(us);
}


--
dwmw2



* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-28 14:19                     ` Andrew Morton
  2001-01-28 16:17                       ` Joe deBlaquiere
@ 2001-01-30 15:08                       ` David Woodhouse
  1 sibling, 0 replies; 55+ messages in thread
From: David Woodhouse @ 2001-01-30 15:08 UTC (permalink / raw)
  To: Joe deBlaquiere
  Cc: Andrew Morton, yodaiken, Nigel Gamble, linux-kernel, linux-audio-dev


jadb@redhat.com said:
> A recent example I came across is in the MTD code which invokes the
> erase algorithm for CFI memory. This algorithm spews a command
> sequence  to the flash chips followed by a list of sectors to erase.
> Following  each sector address, the chip will wait for 50usec for
> another address,  after which timeout it begins the erase cycle. With
> a RTLinux-style  approach the driver is eventually going to fail to
> issue the command in  time.

That code is within spin_lock_bh(), isn't it? So with the current 
preemption approach, it's not going to get interrupted except by a real 
interrupt, which hopefully won't take too long anyway. 

spin_lock_bh() is used because eventually we're intending to stop the erase 
routine from waiting for completion, and make it poll for completion from a 
timer routine. We need protection against concurrent access to the chip 
from that timer routine. 

But perhaps we could be using spin_lock_irq() to prevent us from being
interrupted and failing to meet the timing requirements for subsequent 
commands to the chip if IRQ handlers really do take too long. 

--
dwmw2



* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-30 15:08                           ` David Woodhouse
@ 2001-01-30 15:44                             ` Joe deBlaquiere
  2001-01-30 16:29                               ` Paul Davis
  2001-01-31  7:55                               ` george anzinger
  2001-01-30 16:19                             ` David Woodhouse
  2001-01-30 20:51                             ` yodaiken
  2 siblings, 2 replies; 55+ messages in thread
From: Joe deBlaquiere @ 2001-01-30 15:44 UTC (permalink / raw)
  To: David Woodhouse
  Cc: yodaiken, Andrew Morton, Nigel Gamble, linux-kernel, linux-audio-dev



David Woodhouse wrote:

> jadb@redhat.com said:
> 
>> (If you're running a program XIP from flash and an RT interrupt leaves
>> the flash in an unreadable state, boom!).
> 
> 
> Bad example. I don't expect Linux to support writable XIP any time in the 
> near future. The only thing I envisage myself doing to help people who want 
> writable XIP is to take away their crackpipe.
> 

I wasn't thinking of running the kernel XIP from writable flash, but even 
trying to do that from the filesystem is a mess. If you're going to be 
that way about it...

/me hands over the crackpipe

> Until we get dual-port flash, of course.
> 
> The thing that really does concern me about the flash driver code is the
> fact that it often wants to wait for about 100µs. On machines with
> HZ==100, that sucks if you use udelay() and it sucks if you schedule(). So
> we end up dropping the spinlock (so at least bottom halves can run again)
> and calling:
> 
> static inline void cfi_udelay(int us)
> {
>         if (current->need_resched)
>                 schedule();
>         else
>                 udelay(us);
> }
> 

The logical answer is to run with HZ=10000 so you get 100us intervals, 
right ;o). On systems with multiple hardware timers you could kick off a 
single event at 200us, couldn't you? I've done that before, with the 
extra timer assigned exclusively to a resource. It's not a giant time 
slice, but at least you feel like you're allowing something to happen, 
right?

> 
> --
> dwmw2


-- 
Joe


* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-30 15:08                           ` David Woodhouse
  2001-01-30 15:44                             ` Joe deBlaquiere
@ 2001-01-30 16:19                             ` David Woodhouse
  2001-02-01 12:40                               ` Pavel Machek
  2001-01-30 20:51                             ` yodaiken
  2 siblings, 1 reply; 55+ messages in thread
From: David Woodhouse @ 2001-01-30 16:19 UTC (permalink / raw)
  To: Joe deBlaquiere
  Cc: yodaiken, Andrew Morton, Nigel Gamble, linux-kernel, linux-audio-dev


jadb@redhat.com said:
>  I wasn't thinking of running the kernel XIP from writable, but even
> trying to do that from the filesystem is a mess. If you're going to be
>  that way about it...

Heh. I am. Read-only XIP is going to be doable, but writable XIP means that
any time you start to write to the flash chip, you have to find all the
mappings of every page from that chip and mark them absent, then deal 
properly with faults on them; making processes sleep till the chip is in a 
readable state again. It's going to suck. Lots.

I'm not going to emulate our beloved leader and declare that it's never
going to be supported - I have no particular problem with someone doing
this, as long as I don't have to get too involved and it doesn't end up in
my CVS tree with me being the one who's expected to feed/justify it to
Linus.

> /me hands over the crackpipe

You don't want writable XIP. You just think you do, because you work for a 
software company and you're not allowed to call the hardware designers 
naughty names when they fail to realise that compression is far more useful 
than XIP and also cheaper, in 99% of cases.

--
dwmw2



* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-30 15:44                             ` Joe deBlaquiere
@ 2001-01-30 16:29                               ` Paul Davis
  2001-01-30 16:35                                 ` David Woodhouse
  2001-01-31  7:55                               ` george anzinger
  1 sibling, 1 reply; 55+ messages in thread
From: Paul Davis @ 2001-01-30 16:29 UTC (permalink / raw)
  To: Joe deBlaquiere
  Cc: David Woodhouse, yodaiken, Andrew Morton, Nigel Gamble,
	linux-kernel, linux-audio-dev

>The logical answer is to run with HZ=10000 so you get 100us intervals, 
>right ;o). On systems with multiple hardware timers you could kick off a 
>single event at 200us, couldn't you? I've done that before with the 
>extra timer assigned exclusively to a resource. It's not a giant time 
>slice, but at least you feel like you're allowing something to happen, 
>right?

No, that's not the logical answer at all. The logical answer is
something like the excellent but neglected UTIME patch, which
continually reprograms the system timer so that you can get precise
event scheduling without the insane overhead of HZ=10000.

--p

* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-30 16:29                               ` Paul Davis
@ 2001-01-30 16:35                                 ` David Woodhouse
  0 siblings, 0 replies; 55+ messages in thread
From: David Woodhouse @ 2001-01-30 16:35 UTC (permalink / raw)
  To: Paul Davis
  Cc: Joe deBlaquiere, yodaiken, Andrew Morton, Nigel Gamble,
	linux-kernel, linux-audio-dev


pbd@Op.Net said:
>  No, that's not the logical answer at all. The logical answer is
> something like the excellent but neglected UTIME patch, which
> continually reprograms the system timer so that you can get precise
> event scheduling without the insane overhead of HZ=10000.

Indeed. Which, as an added bonus, lets me nick the system timer and 
reprogram it to 18KHz for the PC speaker driver :)

But that's a different story.

--
dwmw2



* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-30 15:08                           ` David Woodhouse
  2001-01-30 15:44                             ` Joe deBlaquiere
  2001-01-30 16:19                             ` David Woodhouse
@ 2001-01-30 20:51                             ` yodaiken
  2001-01-30 21:00                               ` David Woodhouse
  2 siblings, 1 reply; 55+ messages in thread
From: yodaiken @ 2001-01-30 20:51 UTC (permalink / raw)
  To: David Woodhouse
  Cc: Joe deBlaquiere, yodaiken, Andrew Morton, Nigel Gamble,
	linux-kernel, linux-audio-dev

> The thing that really does concern me about the flash driver code is the
> fact that it often wants to wait for about 100µs. On machines with
> HZ==100, that sucks if you use udelay() and it sucks if you schedule(). So
> we end up dropping the spinlock (so at least bottom halves can run again)
> and calling:
> 
> static inline void cfi_udelay(int us)
> {
>         if (current->need_resched)
>                 schedule();
>         else
>                 udelay(us);
> }

So then a >100us delay is OK?

I have a dumb RT perspective: either you have to make the deadline or you don't.
If you have to make the deadline, then why are you checking need_resched?



-- 
---------------------------------------------------------
Victor Yodaiken 
Finite State Machine Labs: The RTLinux Company.
 www.fsmlabs.com  www.rtlinux.com


* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-30 20:51                             ` yodaiken
@ 2001-01-30 21:00                               ` David Woodhouse
  0 siblings, 0 replies; 55+ messages in thread
From: David Woodhouse @ 2001-01-30 21:00 UTC (permalink / raw)
  To: yodaiken
  Cc: Joe deBlaquiere, Andrew Morton, Nigel Gamble, linux-kernel,
	linux-audio-dev

On Tue, 30 Jan 2001 yodaiken@fsmlabs.com wrote:

> So then a >100us delay is OK?
> 
> I have a dumb RT perspective: either you have to make the deadline or you don't.
> If you have to make the deadline, then why are you checking need_resched?

In the case I'm describing, it _works_ if you have a >100us delay. We ask
the chip to do something, and we expect it to take about 100us, so we come
back and start polling for completion after that time.  Actually, we tune
our estimate of how long to back off, depending on how long we've actually
spent polling for completion on previous attempts. But if you _always_
have a 10ms delay, each time you only really needed to wait 100us, then
the performance is appalling.

I'd like to be nice, but I don't want to drop my performance by 100x 
without good reason. Hence the check for need_resched.

Don't confuse this with the code which was mentioned before, which needs 
to send many different words to the chip consecutively. That is done while 
holding a spin_lock. We don't drop the lock and wait between commands in 
that situation.

-- 
dwmw2



* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-30 15:44                             ` Joe deBlaquiere
  2001-01-30 16:29                               ` Paul Davis
@ 2001-01-31  7:55                               ` george anzinger
  1 sibling, 0 replies; 55+ messages in thread
From: george anzinger @ 2001-01-31  7:55 UTC (permalink / raw)
  To: Joe deBlaquiere
  Cc: David Woodhouse, yodaiken, Andrew Morton, Nigel Gamble,
	linux-kernel, linux-audio-dev

Joe deBlaquiere wrote:

~snip~

> The locical answer is run with HZ=10000 so you get 100us intervals, 
> right ;o). 

Let's not assume we need the overhead of HZ=10000 to get 100us 
alarm/timer resolution.  How about a timer that ticks when we need the 
next tick...

> On systems with multiple hardware timers you could kick off a 
> single event at 200us, couldn't you? I've done that before with the 
> extra timer assigned exclusively to a resource. 

With the right hardware resource, one high res counter can give you all 
the various tick resolutions you need. BTDT on HPRT.

George

> It's not a giant time 
> slice, but at least you feel like you're allowing something to happen, 
> right?
> 
>> 
>> -- 
>> dwmw2


* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-01-30 16:19                             ` David Woodhouse
@ 2001-02-01 12:40                               ` Pavel Machek
  2001-02-01 22:33                                 ` David Woodhouse
  0 siblings, 1 reply; 55+ messages in thread
From: Pavel Machek @ 2001-02-01 12:40 UTC (permalink / raw)
  To: David Woodhouse, Joe deBlaquiere
  Cc: yodaiken, Andrew Morton, Nigel Gamble, linux-kernel, linux-audio-dev

Hi!

> >  I wasn't thinking of running the kernel XIP from writable, but even
> > trying to do that from the filesystem is a mess. If you're going to be
> >  that way about it...
> 
> Heh. I am. Read-only XIP is going to be doable, but writable XIP means that
> any time you start to write to the flash chip, you have to find all
> the

I thought that Vtech Helio folks already have XIP supported...
								Pavel

-- 
I'm pavel@ucw.cz. "In my country we have almost anarchy and I don't care."
Panos Katsaloulis describing me w.r.t. patents at discuss@linmodems.org

* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-02-01 12:40                               ` Pavel Machek
@ 2001-02-01 22:33                                 ` David Woodhouse
  2001-02-02  4:17                                   ` Joe deBlaquiere
  0 siblings, 1 reply; 55+ messages in thread
From: David Woodhouse @ 2001-02-01 22:33 UTC (permalink / raw)
  To: Pavel Machek; +Cc: Joe deBlaquiere, linux-kernel

On Thu, 1 Feb 2001, Pavel Machek wrote:

> I thought that Vtech Helio folks already have XIP supported...

Plenty of people are doing XIP of the kernel. I'm not aware of anyone 
doing XIP of userspace pages. 

-- 
dwmw2




* Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
  2001-02-01 22:33                                 ` David Woodhouse
@ 2001-02-02  4:17                                   ` Joe deBlaquiere
  0 siblings, 0 replies; 55+ messages in thread
From: Joe deBlaquiere @ 2001-02-02  4:17 UTC (permalink / raw)
  To: David Woodhouse; +Cc: Pavel Machek, linux-kernel



David Woodhouse wrote:

> On Thu, 1 Feb 2001, Pavel Machek wrote:
> 
> 
>> I thought that Vtech Helio folks already have XIP supported...
> 
> 
> Plenty of people are doing XIP of the kernel. I'm not aware of anyone 
> doing XIP of userspace pages. 

uClinux does XIP (readonly) for userspace programs in the Dragonball 
port. Of course it's a different executable format than Linux, so there 
are some hooks for it.

-- 
Joe



end of thread

Thread overview: 55+ messages
2001-01-11 11:30 [linux-audio-dev] low-latency scheduling patch for 2.4.0 Andrew Morton
2001-01-07  2:53 ` Andrew Morton
2001-01-11  3:12   ` [linux-audio-dev] " Jay Ts
2001-01-11  3:22     ` Cort Dougan
2001-01-11 12:38       ` Alan Cox
2001-01-11  5:19     ` David S. Miller
2001-01-11 13:57       ` Daniel Phillips
2001-01-11 20:55       ` Nigel Gamble
2001-01-12 13:30         ` Andrew Morton
2001-01-12 15:11           ` Tim Wright
2001-01-12 22:30             ` Nigel Gamble
2001-01-13  1:01             ` Andrew Morton
2001-01-15 19:46               ` Tim Wright
2001-01-12 22:46           ` Nigel Gamble
2001-01-12 23:08           ` george anzinger
2001-01-21  0:05           ` yodaiken
2001-01-22  0:54             ` Nigel Gamble
2001-01-22  1:49               ` Paul Barton-Davis
2001-01-22  2:21                 ` Nigel Gamble
2001-01-22  3:31                   ` J Sloan
2001-01-28 13:14                   ` yodaiken
2001-01-28 14:07                     ` Bill Huey
2001-01-28 14:26                       ` Andrew Morton
2001-01-29  5:02                       ` yodaiken
2001-01-28 14:19                     ` Andrew Morton
2001-01-28 16:17                       ` Joe deBlaquiere
2001-01-29 15:44                         ` yodaiken
2001-01-29 17:23                           ` Joe deBlaquiere
2001-01-29 17:38                             ` yodaiken
2001-01-29 18:03                               ` Joe deBlaquiere
2001-01-30 15:08                           ` David Woodhouse
2001-01-30 15:44                             ` Joe deBlaquiere
2001-01-30 16:29                               ` Paul Davis
2001-01-30 16:35                                 ` David Woodhouse
2001-01-31  7:55                               ` george anzinger
2001-01-30 16:19                             ` David Woodhouse
2001-02-01 12:40                               ` Pavel Machek
2001-02-01 22:33                                 ` David Woodhouse
2001-02-02  4:17                                   ` Joe deBlaquiere
2001-01-30 20:51                             ` yodaiken
2001-01-30 21:00                               ` David Woodhouse
2001-01-29 22:08                         ` Pavel Machek
2001-01-29 22:31                         ` Roger Larsson
2001-01-29 23:46                           ` Joe deBlaquiere
2001-01-30 15:08                       ` David Woodhouse
2001-01-11 21:31       ` David S. Miller
2001-01-15  5:27         ` george anzinger
2001-01-12 13:21       ` Andrew Morton
2001-01-14 11:35   ` Andrew Morton
2001-01-14 14:38     ` Gregory Maxwell
2001-01-15 10:59       ` Andrew Morton
2001-01-13  2:45 ` [linux-audio-dev] " Jay Ts
2001-01-21  0:10   ` yodaiken
2001-01-26  9:14     ` Pavel Machek
2001-01-13 18:11 ` video drivers hog pci bus ? [was:[linux-audio-dev] low-latency scheduling patch for 2.4.0] Jörn Nettingsmeier
