* Re: 2.4.7-ac4 disk thrashing
@ 2001-08-08 6:38 Dieter Nützel
2001-08-08 10:57 ` Alan Cox
0 siblings, 1 reply; 8+ messages in thread
From: Dieter Nützel @ 2001-08-08 6:38 UTC (permalink / raw)
To: Linux Kernel List, ReiserFS List
Cc: Chris Mason, Nikita Danilov, Daniel Phillips, Tom Vier
Tom Vier wrote:
>switching from 2.4.7-ac3 to -ac4, disk access seems to be much more
>synchronous. running a ./configure script causes all kinds of thrashing, as
>does installing .debs. i'm using reiserfs on top of software raid 0 on an
>alpha.
I can second that for 2.4.7-ac4 through -ac9 (all versions).
Dbench shows a dramatic decrease in disk throughput (~10 MB/sec) in every case
I've tested. I have a ReiserFS-only system, and the test partition was /opt
(/dev/sda8) with ~2.7 GB of data on it, so there is some aging, too. It is the
last and slowest partition on my fast IBM U160 18 GB, 10,000 RPM disk.
I've used 2.4.7 + acX + transaction-tracking-2 (Chris) + use-once-pages
(Daniel).
Now, some numbers (note that the user+system times are mostly equal, but the
elapsed time and the throughput differ). The first line pair of each run is
for 2.4.7-ac3, the second for 2.4.7-ac9.
dbench 48
Throughput 27.3983 MB/sec (NB=34.2479 MB/sec 273.983 MBit/sec)
37.580u 115.730s 3:51.30 66.2% 0+0k 0+0io 1310pf+0w
Throughput 18.4711 MB/sec (NB=23.0889 MB/sec 184.711 MBit/sec)
37.710u 121.900s 5:43.05 46.5% 0+0k 0+0io 1311pf+0w
dbench 32
Throughput 34.7552 MB/sec (NB=43.444 MB/sec 347.552 MBit/sec)
24.620u 73.980s 2:02.55 80.4% 0+0k 0+0io 911pf+0w
Throughput 21.8827 MB/sec (NB=27.3533 MB/sec 218.827 MBit/sec)
25.410u 76.610s 3:14.04 52.5% 0+0k 0+0io 912pf+0w
dbench 16
16 clients started
Throughput 37.7379 MB/sec (NB=47.1724 MB/sec 377.379 MBit/sec)
12.350u 35.330s 0:56.97 83.6% 0+0k 0+0io 511pf+0w
Throughput 30.0396 MB/sec (NB=37.5495 MB/sec 300.396 MBit/sec)
12.970u 37.320s 1:10.31 71.5% 0+0k 0+0io 511pf+0w
dbench 8
Throughput 40.9394 MB/sec (NB=51.1742 MB/sec 409.394 MBit/sec)
6.080u 17.420s 0:26.80 87.6% 0+0k 0+0io 311pf+0w
Throughput 28.174 MB/sec (NB=35.2175 MB/sec 281.74 MBit/sec)
6.280u 18.360s 0:38.49 64.0% 0+0k 0+0io 312pf+0w
dbench 4
Throughput 41.4035 MB/sec (NB=51.7544 MB/sec 414.035 MBit/sec)
3.140u 8.240s 0:13.76 82.7% 0+0k 0+0io 211pf+0w
Throughput 25.2641 MB/sec (NB=31.5801 MB/sec 252.641 MBit/sec)
3.270u 8.680s 0:21.91 54.5% 0+0k 0+0io 212pf+0w
dbench 2
Throughput 38.6387 MB/sec (NB=48.2983 MB/sec 386.387 MBit/sec)
1.680u 4.030s 0:07.83 72.9% 0+0k 0+0io 161pf+0w
Throughput 30.4352 MB/sec (NB=38.0441 MB/sec 304.352 MBit/sec)
1.690u 4.300s 0:09.68 61.8% 0+0k 0+0io 162pf+0w
dbench 1
Throughput 33.3689 MB/sec (NB=41.7111 MB/sec 333.689 MBit/sec)
0.820u 2.000s 0:04.96 56.8% 0+0k 0+0io 136pf+0w
Throughput 30.8583 MB/sec (NB=38.5729 MB/sec 308.583 MBit/sec)
0.750u 2.010s 0:05.28 52.2% 0+0k 0+0io 137pf+0w
System spec:
Athlon 550 I (yes, the first generation)
MSI MS-6167 Rev 1.0B (AMD Irongate C4, without bypass)
640 MB Pc100-2-2-2 SDRAM
SCSI subsystem driver Revision: 1.00
scsi0 : Adaptec AIC7XXX EISA/VLB/PCI SCSI HBA DRIVER, Rev 6.2.1
<Adaptec 2940 Ultra SCSI adapter>
aic7880: Ultra Wide Channel A, SCSI Id=7, 16/255 SCBs
Vendor: IBM Model: DDYS-T18350N Rev: S96H
Type: Direct-Access ANSI SCSI revision: 03
Vendor: IBM Model: DDRS-34560D Rev: DC1B
Type: Direct-Access ANSI SCSI revision: 02
Vendor: IBM Model: DDRS-34560W Rev: S71D
Type: Direct-Access ANSI SCSI revision: 02
Filesystem 1k-blocks Used Available Use% Mounted on
/dev/sda3 104412 69616 34796 67% /
/dev/sda2 1518088 37392 1480696 3% /tmp
/dev/sda5 1028092 334056 694036 33% /var
/dev/sda6 2048188 87188 1961000 5% /home
/dev/sda7 5124536 1752784 3371752 35% /usr
/dev/sda8 7068348 2715860 4352488 39% /opt
tmpfs 321188 0 321188 0% /dev/shm
Could it be that the ReiserFS cleanups in ac4 do harm?
http://marc.theaimsgroup.com/?l=reiserfs&m=99683332027428&w=2
Thanks,
Dieter
--
Dieter Nützel
Graduate Student, Computer Science
University of Hamburg
Department of Computer Science
Cognitive Systems Group
Vogt-Kölln-Straße 30
D-22527 Hamburg, Germany
email: nuetzel@kogs.informatik.uni-hamburg.de
@home: Dieter.Nuetzel@hamburg.de
* Re: 2.4.7-ac4 disk thrashing
2001-08-08 6:38 2.4.7-ac4 disk thrashing Dieter Nützel
@ 2001-08-08 10:57 ` Alan Cox
2001-08-08 15:41 ` Daniel Phillips
0 siblings, 1 reply; 8+ messages in thread
From: Alan Cox @ 2001-08-08 10:57 UTC (permalink / raw)
To: Dieter Nützel
Cc: Linux Kernel List, ReiserFS List, Chris Mason, Nikita Danilov,
Daniel Phillips, Tom Vier
> Could it be that the ReiserFS cleanups in ac4 do harm?
> http://marc.theaimsgroup.com/?l=reiserfs&m=99683332027428&w=2
I suspect the use once patch is the more relevant one.
* Re: 2.4.7-ac4 disk thrashing
2001-08-08 10:57 ` Alan Cox
@ 2001-08-08 15:41 ` Daniel Phillips
2001-08-08 16:03 ` Dieter Nützel
2001-08-13 6:24 ` 2.4.7-ac4 disk thrashing (SOLVED?) Dieter Nützel
0 siblings, 2 replies; 8+ messages in thread
From: Daniel Phillips @ 2001-08-08 15:41 UTC (permalink / raw)
To: Alan Cox, Dieter Nützel
Cc: Linux Kernel List, ReiserFS List, Chris Mason, Nikita Danilov,
Daniel Phillips, Tom Vier
On Wednesday 08 August 2001 12:57, Alan Cox wrote:
> > Could it be that the ReiserFS cleanups in ac4 do harm?
> > http://marc.theaimsgroup.com/?l=reiserfs&m=99683332027428&w=2
>
> I suspect the use once patch is the more relevant one.
Two things to check:
- Linus found a bug in balance_dirty_state yesterday. Is the
fix applied?
- The original use-once patch tends to leave referenced pages
on the inactive_dirty queue longer, which is not in itself a
problem but can expose other problems. The previously posted
patch below fixes that; is it applied?
To apply (with use-once already applied):
cd /usr/src/your.2.4.7.source.tree
patch -p0 <this.patch
--- ../2.4.7.clean/mm/filemap.c Sat Aug 4 14:27:16 2001
+++ ./mm/filemap.c Sat Aug 4 23:41:00 2001
@@ -979,9 +979,13 @@
static inline void check_used_once (struct page *page)
{
- if (!page->age) {
- page->age = PAGE_AGE_START;
- ClearPageReferenced(page);
+ if (!PageActive(page)) {
+ if (page->age)
+ activate_page(page);
+ else {
+ page->age = PAGE_AGE_START;
+ ClearPageReferenced(page);
+ }
}
}
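For readability, here is check_used_once() as it reads with the patch
applied (a reconstruction from the diff above; activate_page(),
PAGE_AGE_START and the page flag helpers are those of the surrounding
2.4.7 mm code):

/* Reconstructed mm/filemap.c after the patch: a page touched a second
 * time while still inactive is promoted to the active list, instead of
 * merely having its age restarted. */
static inline void check_used_once (struct page *page)
{
	if (!PageActive(page)) {
		if (page->age)
			activate_page(page);		/* second use: promote */
		else {
			page->age = PAGE_AGE_START;	/* first use: start aging */
			ClearPageReferenced(page);
		}
	}
}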
* Re: 2.4.7-ac4 disk thrashing
2001-08-08 15:41 ` Daniel Phillips
@ 2001-08-08 16:03 ` Dieter Nützel
2001-08-13 6:24 ` 2.4.7-ac4 disk thrashing (SOLVED?) Dieter Nützel
1 sibling, 0 replies; 8+ messages in thread
From: Dieter Nützel @ 2001-08-08 16:03 UTC (permalink / raw)
To: Daniel Phillips, Alan Cox
Cc: Linux Kernel List, ReiserFS List, Chris Mason, Nikita Danilov, Tom Vier
On Wednesday, 8 August 2001 17:41, Daniel Phillips wrote:
> On Wednesday 08 August 2001 12:57, Alan Cox wrote:
> > > Could it be that the ReiserFS cleanups in ac4 do harm?
> > > http://marc.theaimsgroup.com/?l=reiserfs&m=99683332027428&w=2
> >
> > I suspect the use once patch is the more relevant one.
>
> Two things to check:
>
> - Linus found a bug in balance_dirty_state yesterday. Is the
> fix applied?
No, I'll try.
> - The original use-once patch tends to leave referenced pages
> on the inactive_dirty queue longer, which is not in itself a
> problem but can expose other problems. The previously posted
> patch below fixes that; is it applied?
>
> To apply (with use-once already applied):
Yes, it was applied with -ac9.
But that wasn't much different from ac6/7/8 without it; all were nearly
equally bad. The disk seeks like mad compared with 2.4.7-ac1 and -ac3. I can
"hear" it, and the whole system "feels" slow.
2.4.7-ac1 + transaction-tracking-2 (Chris) + use-once-pages
(Daniel) + 2.4.7-unlink-truncate-rename-rmdir.dif (Nikita) is the best Linux
I've ever run.
I did several dbench-1.1 runs (~10 times; should I retry with dbench-1.2?)
and all gave nearly the same results.
ac1, ac3 + fixes: GREAT
ac5, ac6, ac7, ac8, ac9 + fixes: BAD
Thanks,
Dieter
> cd /usr/src/your.2.4.7.source.tree
> patch -p0 <this.patch
>
> --- ../2.4.7.clean/mm/filemap.c Sat Aug 4 14:27:16 2001
> +++ ./mm/filemap.c Sat Aug 4 23:41:00 2001
> @@ -979,9 +979,13 @@
>
> static inline void check_used_once (struct page *page)
> {
> - if (!page->age) {
> - page->age = PAGE_AGE_START;
> - ClearPageReferenced(page);
> + if (!PageActive(page)) {
> + if (page->age)
> + activate_page(page);
> + else {
> + page->age = PAGE_AGE_START;
> + ClearPageReferenced(page);
> + }
> }
> }
* Re: 2.4.7-ac4 disk thrashing (SOLVED?)
2001-08-08 15:41 ` Daniel Phillips
2001-08-08 16:03 ` Dieter Nützel
@ 2001-08-13 6:24 ` Dieter Nützel
1 sibling, 0 replies; 8+ messages in thread
From: Dieter Nützel @ 2001-08-13 6:24 UTC (permalink / raw)
To: Daniel Phillips, Alan Cox
Cc: Linux Kernel List, ReiserFS List, Chris Mason, Nikita Danilov, Tom Vier
On Wednesday, 8 August 2001 18:03, Dieter Nützel wrote:
> On Wednesday, 8 August 2001 17:41, Daniel Phillips wrote:
> > On Wednesday 08 August 2001 12:57, Alan Cox wrote:
> > > > Could it be that the ReiserFS cleanups in ac4 do harm?
> > > > http://marc.theaimsgroup.com/?l=reiserfs&m=99683332027428&w=2
> > >
> > > I suspect the use once patch is the more relevant one.
> >
> > Two things to check:
> >
> > - Linus found a bug in balance_dirty_state yesterday. Is the
> > fix applied?
>
> No, I'll try.
>
> > - The original use-once patch tends to leave a referenced pages
> > on the inactive_dirty queue longer, not in itself a problem,
> > but can expose other problems. The previously posted patch
> > below fixes that, is it applied?
> >
> > To apply (with use-once already applied):
>
> Yes, it was with -ac9.
Here are my latest results.
Kupdated seems to be the disk I/O performance killer for ReiserFS (with
2.4.7-ac4 and beyond; the latest versions I've tested were 2.4.8 and
2.4.8-ac1).
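For context: kupdated in 2.4 is the kernel thread that wakes every
"interval" jiffies and writes back dirty buffers older than "age_buffer".
A condensed sketch of its loop (abridged from 2.4-era fs/buffer.c; thread
setup and signal plumbing omitted):

/* Condensed sketch of the 2.4-era kupdate thread (fs/buffer.c);
 * setup and signal handling omitted. */
int kupdate(void *startup)
{
	for (;;) {
		/* sleep for bdf_prm.b_un.interval jiffies
		 * (default 500, i.e. 5 seconds at HZ=100) */
		current->state = TASK_INTERRUPTIBLE;
		schedule_timeout(bdf_prm.b_un.interval);
		/* flush buffers dirty for longer than age_buffer jiffies */
		sync_old_buffers();
	}
}

The thread unblocks SIGSTOP/SIGCONT precisely so it can be parked, which
is why "killall -STOP kupdated" works; writeback then happens only when
the bdflush dirty-ratio thresholds are crossed.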
2.4.7-ac1 +
use-once-pages Daniel Phillips
use-once-pages-2 Daniel Phillips
transaction-tracking-2.diff Chris Mason
2.4.7-unlink-truncate-rename-rmdir.dif Vladimir V.Saveliev
2.4.7-plug-hole-and-pap-5660-pathrelse-fixes.dif Vladimir V.Saveliev
2.4.8 +
use-once-pages-2 Daniel Phillips
transaction-tracking-2.diff Chris Mason
2.4.7-unlink-truncate-rename-rmdir.dif Vladimir V.Saveliev
2.4.7-plug-hole-and-pap-5660-pathrelse-fixes.dif Vladimir V.Saveliev
The first two lines per dbench run show the results for "normal"
kernel settings; the second two lines show the results with "killall
-STOP kupdated" and "echo 80 64 64 256 500 6000 90 >
/proc/sys/vm/bdflush", as Linus suggested.
dbench-1.1: 48
2.4.8
Throughput 21.467 MB/sec (NB=26.8337 MB/sec 214.67 MBit/sec)
39.270u 125.690s 4:55.17 55.8% 0+0k 0+0io 1310pf+0w
Throughput 28.1733 MB/sec (NB=35.2166 MB/sec 281.733 MBit/sec)
37.920u 123.120s 3:44.92 71.5% 0+0k 0+0io 1310pf+0w
2.4.7-ac1
Throughput 29.4888 MB/sec (NB=36.861 MB/sec 294.888 MBit/sec)
38.380u 114.120s 3:34.90 70.9% 0+0k 0+0io 1310pf+0w
Throughput 30.7171 MB/sec (NB=38.3964 MB/sec 307.171 MBit/sec)
38.160u 127.720s 3:26.31 80.4% 0+0k 0+0io 1310pf+0w
dbench-1.1: 32
2.4.8
Throughput 21.5015 MB/sec (NB=26.8769 MB/sec 215.015 MBit/sec)
25.470u 80.170s 3:17.46 53.4% 0+0k 0+0io 911pf+0w
Throughput 33.5427 MB/sec (NB=41.9284 MB/sec 335.427 MBit/sec)
25.380u 83.350s 2:06.94 85.6% 0+0k 0+0io 911pf+0w
2.4.7-ac1
Throughput 33.5394 MB/sec (NB=41.9243 MB/sec 335.394 MBit/sec)
25.460u 74.220s 2:06.95 78.5% 0+0k 0+0io 911pf+0w
Throughput 34.4063 MB/sec (NB=43.0078 MB/sec 344.063 MBit/sec)
25.270u 84.510s 2:03.78 88.6% 0+0k 0+0io 911pf+0w
dbench-1.1: 16
2.4.8
Throughput 25.373 MB/sec (NB=31.7163 MB/sec 253.73 MBit/sec)
12.610u 36.380s 1:23.24 58.8% 0+0k 0+0io 510pf+0w
Throughput 42.1528 MB/sec (NB=52.691 MB/sec 421.528 MBit/sec)
12.770u 35.870s 0:50.11 97.0% 0+0k 0+0io 510pf+0w
2.4.7-ac1
Throughput 36.8195 MB/sec (NB=46.0244 MB/sec 368.195 MBit/sec)
12.750u 36.280s 0:57.36 85.4% 0+0k 0+0io 510pf+0w
Throughput 41.0814 MB/sec (NB=51.3518 MB/sec 410.814 MBit/sec)
13.520u 36.370s 0:51.41 97.0% 0+0k 0+0io 510pf+0w
dbench-1.1: 8
2.4.8
Throughput 23.9236 MB/sec (NB=29.9045 MB/sec 239.236 MBit/sec)
6.090u 18.510s 0:45.15 54.4% 0+0k 0+0io 311pf+0w
Throughput 43.1126 MB/sec (NB=53.8908 MB/sec 431.126 MBit/sec)
6.580u 16.760s 0:25.50 91.5% 0+0k 0+0io 311pf+0w
2.4.7-ac1
Throughput 41.315 MB/sec (NB=51.6437 MB/sec 413.15 MBit/sec)
6.590u 17.000s 0:26.56 88.8% 0+0k 0+0io 311pf+0w
Throughput 42.9713 MB/sec (NB=53.7142 MB/sec 429.713 MBit/sec)
6.180u 17.210s 0:25.58 91.4% 0+0k 0+0io 311pf+0w
dbench-1.1: 4
2.4.8
Throughput 23.2973 MB/sec (NB=29.1216 MB/sec 232.973 MBit/sec)
3.430u 8.510s 0:23.67 50.4% 0+0k 0+0io 211pf+0w
Throughput 42.6651 MB/sec (NB=53.3313 MB/sec 426.651 MBit/sec)
3.010u 8.310s 0:12.38 91.4% 0+0k 0+0io 210pf+0w
2.4.7-ac1
Throughput 41.1515 MB/sec (NB=51.4394 MB/sec 411.515 MBit/sec)
3.040u 8.500s 0:12.83 89.9% 0+0k 0+0io 210pf+0w
Throughput 41.7318 MB/sec (NB=52.1647 MB/sec 417.318 MBit/sec)
3.140u 8.430s 0:13.67 84.6% 0+0k 0+0io 211pf+0w
dbench-1.1: 2
2.4.8
Throughput 28.002 MB/sec (NB=35.0025 MB/sec 280.02 MBit/sec)
1.630u 4.240s 0:10.44 56.2% 0+0k 0+0io 161pf+0w
Throughput 40.2392 MB/sec (NB=50.2991 MB/sec 402.392 MBit/sec)
1.640u 3.880s 0:07.56 73.0% 0+0k 0+0io 161pf+0w
2.4.7-ac1
Throughput 37.7007 MB/sec (NB=47.1259 MB/sec 377.007 MBit/sec)
1.540u 4.250s 0:08.01 72.2% 0+0k 0+0io 161pf+0w
Throughput 37.7846 MB/sec (NB=47.2308 MB/sec 377.846 MBit/sec)
1.560u 4.240s 0:07.99 72.5% 0+0k 0+0io 161pf+0w
dbench-1.1: 1
2.4.8
Throughput 40.6674 MB/sec (NB=50.8342 MB/sec 406.674 MBit/sec)
0.720u 2.150s 0:04.25 67.5% 0+0k 0+0io 136pf+0w
Throughput 35.7121 MB/sec (NB=44.6401 MB/sec 357.121 MBit/sec)
0.720u 1.970s 0:04.70 57.2% 0+0k 0+0io 136pf+0w
2.4.7-ac1
Throughput 32.982 MB/sec (NB=41.2275 MB/sec 329.82 MBit/sec)
0.620u 2.240s 0:05.01 57.0% 0+0k 0+0io 136pf+0w
Throughput 33.6056 MB/sec (NB=42.007 MB/sec 336.056 MBit/sec)
0.870u 2.000s 0:04.94 58.0% 0+0k 0+0io 136pf+0w
Next, I tested only 2.4.8 (patched as above) with different dirty-balancing
numbers and with/without kupdated stopped.
Please compare these results with the numbers above.
dbench-1.1: 32
echo 30 64 64 256 500 3000 60 > /proc/sys/vm/bdflush (normal mode)
Throughput 21.2009 MB/sec (NB=26.5011 MB/sec 212.009 MBit/sec)
25.930u 81.180s 3:20.25 53.4% 0+0k 0+0io 911pf+0w
echo 50 64 64 256 500 3000 60 > /proc/sys/vm/bdflush
Throughput 21.6827 MB/sec (NB=27.1034 MB/sec 216.827 MBit/sec)
25.620u 82.950s 3:15.83 55.4% 0+0k 0+0io 911pf+0w
echo 50 64 64 256 500 6000 60 > /proc/sys/vm/bdflush
Throughput 20.8374 MB/sec (NB=26.0468 MB/sec 208.374 MBit/sec)
26.010u 82.830s 3:23.72 53.4% 0+0k 0+0io 911pf+0w
killall -STOP kupdated
echo 30 64 64 256 500 3000 60 > /proc/sys/vm/bdflush (normal mode)
Throughput 29.1071 MB/sec (NB=36.3838 MB/sec 291.071 MBit/sec)
25.950u 81.530s 2:26.13 73.5% 0+0k 0+0io 911pf+0w
echo 30 64 64 256 500 6000 60 > /proc/sys/vm/bdflush
Throughput 29.4383 MB/sec (NB=36.7979 MB/sec 294.383 MBit/sec)
25.450u 83.530s 2:24.50 75.4% 0+0k 0+0io 911pf+0w
echo 50 64 64 256 500 3000 60 > /proc/sys/vm/bdflush
Throughput 31.3666 MB/sec (NB=39.2083 MB/sec 313.666 MBit/sec)
25.660u 88.830s 2:15.67 84.3% 0+0k 0+0io 911pf+0w
echo 50 64 64 256 500 6000 60 > /proc/sys/vm/bdflush
Throughput 32.1513 MB/sec (NB=40.1891 MB/sec 321.513 MBit/sec)
25.260u 83.190s 2:12.39 81.9% 0+0k 0+0io 911pf+0w
echo 50 64 64 256 500 3000 70 > /proc/sys/vm/bdflush
Throughput 31.9838 MB/sec (NB=39.9797 MB/sec 319.838 MBit/sec)
25.050u 83.990s 2:13.08 81.9% 0+0k 0+0io 911pf+0w
echo 50 64 64 256 500 6000 70 > /proc/sys/vm/bdflush
Throughput 32.5191 MB/sec (NB=40.6489 MB/sec 325.191 MBit/sec)
25.990u 81.690s 2:10.91 82.2% 0+0k 0+0io 911pf+0w
echo 50 64 64 256 500 3000 80 > /proc/sys/vm/bdflush
Throughput 31.7866 MB/sec (NB=39.7332 MB/sec 317.866 MBit/sec)
25.510u 85.010s 2:13.90 82.5% 0+0k 0+0io 911pf+0w
echo 50 64 64 256 500 6000 80 > /proc/sys/vm/bdflush
Throughput 31.2084 MB/sec (NB=39.0105 MB/sec 312.084 MBit/sec)
26.010u 87.040s 2:16.36 82.9% 0+0k 0+0io 911pf+0w
echo 85 64 64 256 500 3000 95 > /proc/sys/vm/bdflush
Throughput 31.6878 MB/sec (NB=39.6098 MB/sec 316.878 MBit/sec)
25.640u 86.710s 2:14.31 83.6% 0+0k 0+0io 911pf+0w
echo 85 64 64 256 500 6000 95 > /proc/sys/vm/bdflush
Throughput 32.4279 MB/sec (NB=40.5349 MB/sec 324.279 MBit/sec)
25.860u 86.710s 2:11.27 85.7% 0+0k 0+0io 911pf+0w
Conclusion:
I found that stopping kupdated is a big win together with ReiserFS.
As everyone knows, ReiserFS needs more CPU cycles than the "other"
filesystems; could this be the answer to the observed disk thrashing?
With kupdated running I saw several percent more idle CPU.
Are there any drawbacks?
Thanks and good night.
-Dieter
* 2.4.7-ac4 disk thrashing
@ 2001-08-04 15:38 Tom Vier
2001-08-04 23:04 ` Matthew Gardiner
0 siblings, 1 reply; 8+ messages in thread
From: Tom Vier @ 2001-08-04 15:38 UTC (permalink / raw)
To: linux-kernel
switching from 2.4.7-ac3 to -ac4, disk access seems to be much more
synchronous. running a ./configure script causes all kinds of thrashing, as
does installing .debs. i'm using reiserfs on top of software raid 0 on an
alpha.
--
Tom Vier <tmv5@home.com>
DSA Key id 0x27371A2C
* Re: 2.4.7-ac4 disk thrashing
2001-08-04 15:38 2.4.7-ac4 disk thrashing Tom Vier
@ 2001-08-04 23:04 ` Matthew Gardiner
2001-08-05 0:55 ` Tom Vier
0 siblings, 1 reply; 8+ messages in thread
From: Matthew Gardiner @ 2001-08-04 23:04 UTC (permalink / raw)
To: Tom Vier; +Cc: linux-kernel
Tom Vier wrote:
>switching from 2.4.7-ac3 to -ac4, disk access seems to be much more
>synchronous. running a ./configure script causes all kinds of thrashing, as
>does installing .debs. i'm using reiserfs on top of software raid 0 on an
>alpha.
>
Apparently, in ac5 (which I am running), there was a bug affecting
reiserfs on non-x86 CPUs. Download and install the new patch and try it.
Matthew Gardiner
* Re: 2.4.7-ac4 disk thrashing
2001-08-04 23:04 ` Matthew Gardiner
@ 2001-08-05 0:55 ` Tom Vier
0 siblings, 0 replies; 8+ messages in thread
From: Tom Vier @ 2001-08-05 0:55 UTC (permalink / raw)
To: Matthew Gardiner; +Cc: linux-kernel
On Sun, Aug 05, 2001 at 11:04:59AM +1200, Matthew Gardiner wrote:
> Tom Vier wrote:
> >switching from 2.4.7-ac3 to -ac4, disk access seems to be much more
> >synchronous. running a ./configure script causes all kinds of thrashing, as
> >does installing .debs. i'm using reiserfs on top of software raid 0 on an
> >alpha.
> Apparently, in ac5 (which I am running), there was a bug affecting
> reiserfs on non-x86 CPUs. Download and install the new patch and try it.
that's just a signedness fix. i've tried ac5 and it has the same problem as
ac4.
--
Tom Vier <tmv5@home.com>
DSA Key id 0x27371A2C