* PROBLEM: copying/creating large (>500MB) files results in sluggish behaviour.
@ 2001-07-23 18:10 Jochen Siebert
2001-08-06 8:40 ` Wojtek Pilorz
From: Jochen Siebert @ 2001-07-23 18:10 UTC (permalink / raw)
To: linux-kernel
Hi (system data is at the end of this email),
I've got a problem with my Linux 2.4.7 box. If I download a
large (>500MB) file from a computer connected via 100Mbit
LAN (i.e. at more than 2MB/s), *or* if I create such a large
file with the "dd" command (dd if=/dev/zero of=sloow),
after some time my computer becomes very sluggish and
reacts very slowly. I watched the memory usage while
creating such a big file and noticed that all memory fills
up, and swapping even starts, *before* the disk begins
writing the file. Swapping and writing the file at the same
time does not seem like a good idea for the kernel, even
though the IBM IC35L040 IDE disk is a fast one. Feel free
to ask me via email.
Adieu,
Jochen
PS: Please CC me.
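A scaled-down sketch of the reproduction (sizes reduced so it
finishes quickly; the original report wrote >500MB, and the file
name is only illustrative):

```shell
# Scaled-down reproduction sketch (sizes and names are illustrative;
# the original report wrote >500MB). The dd write lands in the page
# cache first; sync then forces the dirty pages out to disk.
dd if=/dev/zero of=sloow bs=1M count=64 2>/dev/null
wc -c < sloow        # 67108864 bytes written
sync                 # flush the cached data to disk
rm -f sloow
```

While the dd runs, watching free memory (e.g. with free or vmstat)
shows the buffer/cache figure growing by roughly the file size
before writeback catches up.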
Now the facts:
=============================
ver_linux:
Gnu C 2.96
Gnu make 3.79.1
binutils 2.10.1.0.2
util-linux 2.10s
mount 2.11b
modutils 2.4.3
e2fsprogs 1.19
reiserfsprogs 3.x.0i
PPP 2.4.0
Linux C Library 2.2.2
Dynamic linker (ldd) 2.2.2
Procps 2.0.7
Net-tools 1.59
Console-tools 0.2.3
Sh-utils 2.0
Modules Loaded serial parport_pc lp parport mga
agpgart es1371 ac97_codec 3c59x nls_iso8859-1 nls_cp850
vfat fat
===============================
kernel 2.4.7,
ASUS A7V board,
MemTotal: 384880 kB +128MB Swap,
reiserfs
====================
cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 6
model : 3
model name : AMD Duron(tm) Processor
stepping : 0
cpu MHz : 908.119
cache size : 64 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 1
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 sep
mtrr pge mca cmov pat pse36 mmx fxsr syscall mmxext
3dnowext 3dnow
bogomips : 1808.79
====================
modules:
serial 43472 0 (autoclean)
parport_pc 23632 1 (autoclean)
lp 5520 0 (autoclean)
parport 26144 1 (autoclean) [parport_pc lp]
mga 91312 1
agpgart 13024 3
es1371 25872 0
ac97_codec 8464 0 [es1371]
3c59x 25088 1 (autoclean)
nls_iso8859-1 2864 2 (autoclean)
nls_cp850 3616 2 (autoclean)
vfat 9328 2 (autoclean)
fat 31776 0 (autoclean) [vfat]
====================
UDMA 100 controller:
cat /proc/ide/pdc202xx
PDC20265 Chipset.
------------------------- General Status -------------------------
Burst Mode                           : enabled
Host Mode                            : Normal
Bus Clocking                         : 33 PCI Internal
IO pad select                        : 10 mA
Status Polling Period                : 8
Interrupt Check Status Polling Delay : 2
----------- Primary Channel ----------- Secondary Channel --------
             enabled                     enabled
66 Clocking  enabled                     disabled
Mode         PCI                         PCI
FIFO         Empty                       Empty
----------- drive0 ------ drive1 ------ drive0 ------ drive1 -----
DMA enabled: yes           no            no            no
DMA Mode:    UDMA 4        NOTSET        NOTSET        NOTSET
PIO Mode:    PIO 4         NOTSET        NOTSET        NOTSET
========================================
--
__________________________________________________________
jochen siebert fon +49/241/89449581 mobil +49/178/5400301
siebert@physik.rwth-aachen.de icq #6257096
browse to http://www.pgp.net to get my public pgp key
fprint: BAD0 5B0D E645 BF63 D2E9 8BCD 9516 BAB5 D67E 6F2B
* Re: PROBLEM: copying/creating large (>500MB) files results in sluggish behaviour.
2001-07-23 18:10 PROBLEM: copying/creating large (>500MB) files results in sluggish behaviour Jochen Siebert
@ 2001-08-06 8:40 ` Wojtek Pilorz
From: Wojtek Pilorz @ 2001-08-06 8:40 UTC (permalink / raw)
To: Jochen Siebert; +Cc: linux-kernel
On Mon, 23 Jul 2001, Jochen Siebert wrote:
> Date: Mon, 23 Jul 2001 18:10:34 +0000
> From: Jochen Siebert <siebert@kawo2.rwth-aachen.de>
> Reply-To: siebert@physik.rwth-aachen.de
> To: linux-kernel@vger.kernel.org
> Subject: PROBLEM: copying/creating large (>500MB) files results in
> sluggish behaviour.
>
> Hi (system data is at the end of this email),
>
> I've got a problem with my Linux 2.4.7 box. If I download a
> large (>500MB) file from a computer connected via 100Mbit
> LAN (i.e. at more than 2MB/s), *or* if I create such a large
> file with the "dd" command (dd if=/dev/zero of=sloow),
> after some time my computer becomes very sluggish and
> reacts very slowly. I watched the memory usage while
> creating such a big file and noticed that all memory fills
> up, and swapping even starts, *before* the disk begins
> writing the file. Swapping and writing the file at the same
> time does not seem like a good idea for the kernel, even
> though the IBM IC35L040 IDE disk is a fast one. Feel free
> to ask me via email.
>
> Adieu,
> Jochen
>
> PS: Please CC me.
>
> Now the facts:
> =============================
[...]
> MemTotal: 384880 kB +128MB Swap,
I can see a similar problem with 2.2.x kernels when writing a
large amount of data (relative to the size of RAM, or to the
size of free RAM) to slow media (a PD drive, with a write
speed of about 250 kBytes/s).
It seems all the data is cached before being written, and this
can make the system practically unusable for the duration of
the copy - logging in from the console can take several
minutes.
My understanding is that Linux is too aggressive in caching
data to be written and does not distinguish between fast and
slow media.
The only workaround I am aware of is to reduce the rate at
which data is written to the filesystem (e.g. by using rsync
with the --bwlimit option).
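rsync's --bwlimit takes a rate in kilobytes per second, so
something like rsync --bwlimit=250 would roughly match the PD
drive's speed. For a plain local write, the same idea - keeping
the amount of dirty data bounded - can be sketched by writing in
small chunks and syncing after each one (chunk size, loop count,
and file name below are illustrative):

```shell
# Write a file in 1MB chunks, syncing after each chunk so dirty
# data never piles up in the cache. (rsync --bwlimit=250 would
# instead throttle the transfer itself to ~250 kBytes/s.)
out=sloow
for i in $(seq 1 16); do
    dd if=/dev/zero of=$out bs=1M count=1 seek=$((i - 1)) \
       conv=notrunc 2>/dev/null
    sync    # flush this chunk before queueing the next
done
wc -c < $out         # 16777216 bytes, written in bounded steps
rm -f $out
```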
Best regards,
Wojtek