linux-kernel.vger.kernel.org archive mirror
* Re: 2.4.10pre11aa1
       [not found] <20010918230242.F720@athlon.random>
@ 2001-09-19  8:49 ` Alexander Viro
  2001-09-19 12:07   ` 2.4.10pre11aa1 Alexander Viro
  0 siblings, 1 reply; 3+ messages in thread
From: Alexander Viro @ 2001-09-19  8:49 UTC (permalink / raw)
  To: Andrea Arcangeli; +Cc: linux-kernel



On Tue, 18 Sep 2001, Andrea Arcangeli wrote:

> Thanks also to Marcelo for promptly finding a problem in the vm rewrite,
> to Al Viro for having spotted promptly a silly bug in the
> blkdev-pagecache patch (see details on l-k) and for all the people
> who provided feedback over the last day.

I can add one more into the mix: what the hell had happened in rd.c?

a) you reintroduced the crap with rd_inodes[]
b) just try to call ioctl(fd, BLKFLSBUF) twice. Oops...
c) WTF with acrobatics around initrd_bd_op?  FWIW, initrd has no business
being a block device and both old and new variants are ugly, but what's
the point of adding extra tricks?
d) call ioctl(fd, BLKFLSBUF) and open the thing one more time before
closing fd.  Watch what happens.  It's broken by design.

I realize that you had that file modified in your tree, but bloody hell,
it doesn't mean "ignore any changes that happened in mainline kernel
without even looking at them".  As for the BLKFLSBUF...  How was it supposed
to work?



* Re: 2.4.10pre11aa1
  2001-09-19  8:49 ` 2.4.10pre11aa1 Alexander Viro
@ 2001-09-19 12:07   ` Alexander Viro
  2001-09-19 12:54     ` 2.4.10pre11aa1 Andrea Arcangeli
  0 siblings, 1 reply; 3+ messages in thread
From: Alexander Viro @ 2001-09-19 12:07 UTC (permalink / raw)
  To: Andrea Arcangeli; +Cc: linux-kernel



> I can add one more into the mix: what the hell had happened in rd.c?
> 
> a) you reintroduced the crap with rd_inodes[]
> b) just try to call ioctl(fd, BLKFLSBUF) twice. Oops...
> c) WTF with acrobatics around initrd_bd_op?  FWIW, initrd has no business
> being a block device and both old and new variants are ugly, but what's
> the point of adding extra tricks?
> d) call ioctl(fd, BLKFLSBUF) and open the thing one more time before
> closing fd.  Watch what happens.  It's broken by design.
> 
> I realize that you had that file modified in your tree, but bloody hell,
> it doesn't mean "ignore any changes that happened in mainline kernel
> without even looking at them".  As for the BLKFLSBUF...  How was it supposed
> to work?


BTW, what's to stop shrink_cache() from picking a page out of ramdisk
pagecache and calling ->writepage() on it?  The thing will immediately
get dirtied again, AFAICS (blkdev_writepage() -> submit_bh() -> ... ->
rd_make_request() -> rd_blkdev_pagecache(WRITE,...) -> SetPageDirty()).

If you get a lot of stuff in ramdisks, things can get rather interesting...



* Re: 2.4.10pre11aa1
  2001-09-19 12:07   ` 2.4.10pre11aa1 Alexander Viro
@ 2001-09-19 12:54     ` Andrea Arcangeli
  0 siblings, 0 replies; 3+ messages in thread
From: Andrea Arcangeli @ 2001-09-19 12:54 UTC (permalink / raw)
  To: Alexander Viro; +Cc: linux-kernel

On Wed, Sep 19, 2001 at 08:07:30AM -0400, Alexander Viro wrote:
> BTW, what's to stop shrink_cache() from picking a page out of ramdisk
> pagecache and calling ->writepage() on it?  The thing will immediately

it's the same trick that ramfs uses too, so it's the right way as long as
ramfs isn't broken as well (and quite frankly these days ramfs is much
more important than ramdisk given our heavy use of logical caches).

> If you get a lot of stuff in ramdisks, things can get rather interesting...

under heavy memory pressure possibly, but that applies to ramfs as well,
as said above. Anyway this was a clean approach, and the new vm makes
sure not to get confused by writepage marking the page dirty again; the
worst that can happen is some wasted cpu cycles, _but_ we save cpu
cycles by not having special checks when ramfs isn't in use, and having
no special cases also makes the code cleaner.

Now, I'm fine with adding special cases if this turns out to waste too
much cpu [check how much shrink_cache shows up in the profile to be sure]
(of course not just for ramdisk, which isn't very important, but for
ramfs too, where being efficient is much more critical).

Andrea

