linux-kernel.vger.kernel.org archive mirror
* Re: Very High Load, kernel 2.4.18, apache/mysql
       [not found] <3D90FD7B.9080209@wanadoo.fr>
@ 2002-09-25  1:12 ` Adam Goldstein
  0 siblings, 0 replies; 25+ messages in thread
From: Adam Goldstein @ 2002-09-25  1:12 UTC (permalink / raw)
  To: FD Cami; +Cc: linux-kernel

We haven't run memtest, but, the same segfaults occurred on the old  
server, the middle server and the new server (granted the new and old  
are almost the same and used most of the same memory)

Being ECC it should report errors, as well as fix them. Using a  
different file system is on the top of our test list... reiser notail,  
noatime in particular. Never really thought about xfs.
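Trying those mount options doesn't require a rebuild; a remount is enough. A minimal sketch, assuming /var is the partition under test (notail applies only to reiserfs):

```shell
# Show the options each filesystem is currently mounted with.
grep -E ' (/|/var) ' /proc/mounts || cat /proc/mounts

# Remount with atime updates off (run as root; shown commented out here):
# mount -o remount,noatime,nodiratime /var
# mount -o remount,noatime,nodiratime,notail /var   # reiserfs only
```

On ext2/ext3, drop notail; noatime and nodiratime work on any filesystem.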

While I agree that the segfaults are very odd, I can't find a reason
for them.... I will run memtest
(ftp://rpmfind.net/linux/Mandrake-devel/cooker/i586/Mandrake/RPMS/memtest86-3.0-2mdk.i586.rpm
- it adds a lilo boot option .. how neat!)

Some interesting tests I found regarding ext2, ext3 and reiser.
http://www.net.oregonstate.edu/~kveton/fs/


On Tuesday, September 24, 2002, at 08:04 PM, FD Cami wrote:

> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> Adam Goldstein wrote:
> | I have been trying to find an answer to this for the past couple weeks,
> | but I have finally
> | broken down and must post this to this list. ;)
> |
> | I am running a high user load site (>20 million hits/month stamp auction
> | site) which runs entirely on apache/php with mysql. It was running
> | smoothly (for the most part) as a virtual server on a relatively nice
> | box (see Moya below), but started needing more and more disk space (from
> | uploads, logs, etc) and kept running out of space on the root partition
> | (including /var... which has mysql & weblogs)
> |
> | I decided to build a new box for it, which we were shipping to a
> | highspeed colo facility.
> |
> | While this unit was slightly less powerful, it was a clean install with
> | a larger root partition (see Anubis below).  This unit started acting
> | pathetic, and had fewer other loads on it (the old box also has lots of
> | samba sharing and other low-traffic websites). The system load kept at
> | high rates constantly, 5-8 during non-peak hours, an average of 10-20
> | during most times, and spikes >100... needless to say, any load over
> | 5-10 made the unit a pile of dung.
> |
> | My partner is running a similar site, under debian, on similar hardware
> | (almost identical, actually) and is having -very- similar problems.
> |
> | I stripped the old server, packed it into a new 4U case (-packed!-) and
> | moved just the one site (29G including pictures and sql data) to it, and
> | the results are no better. This unit has even more ram and more hard
> | drive space. (See Nosferatu below)
> |
> | We are at the end of our ropes, and are clearing our chalkboards to
> | start testing pieces of our systems... problem is, testing these systems
> | is difficult due to needing to put live loads on them. We need to narrow
> | down the search, and need your help... please...
> |
> | We also see high amounts of apache children segfaulting under load... as
> | high as 2-10/minute at times. I have tried turning off atimes, and
> | reducing tcp timeouts, etc. The big users of CPU are typically apache
> | and mysql. About 110+ instances of apache and mysqld each run in top at
> | high load. CPU use bounces wildly, with most in user space.
>
> A few hints (and I'm heading to bed):
> try running 2.4.19 + the O(1) scheduler from Ingo Molnar instead of the
> stock scheduler; it improves efficiency, especially if you're SMP (which
> is the case) - I've been running that on several production, medium load
> machines:
> http://people.redhat.com/~mingo/O(1)-scheduler/
> NB: the patch is for 2.4.19rc but applies cleanly to 2.4.19 and runs
> perfectly.
>
> Also offloading Apache with the TUX kernel-space webserver available
> from
> http://people.redhat.com/~mingo/TUX-patches/
> (yeah, same guy) could help _a lot_.
>
> A simple remark from a squid user : try using squid as a server-side
> proxy, also called httpd-accelerator :
> http://www.squid-cache.org/
> http://squid.visolve.com/squid24s1/httpd_accelerator.htm
> Well worth a try.
>
> I have _yet_ to do a TUX+SQUID+APACHE config; if only I had time
> to spare...
>
> Also, using a different filesystem may help.
> Maybe trying ReiserFS with the noatime AND notail options could be
> very handy. Maybe tuning XFS would do the trick.
> What are you running on now ?
>
> Also, be sure to test your machines' RAM using  
> http://www.memtest86.com/
> you shouldn't see processes segfaulting like that... It's weird.
>
> - --
>
> F. CAMI
> - ----------------------------------------------------------
> ~ "To disable the Internet to save EMI and Disney is the
> moral equivalent of burning down the library of Alexandria
> to ensure the livelihood of monastic scribes."
> ~              - John Ippolito (Guggenheim)
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v1.0.7 (GNU/Linux)
> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org
>
> iD8DBQE9kP17uBGY13rZQM8RAt1RAJ9wbvKi7kkhwYUqgd7zmzi4+OmPMgCbBOjL
> v2Upj2t4LL3hhRkmfO7du/8=
> =+0BK
> -----END PGP SIGNATURE-----
>
-- 
Adam Goldstein
White Wolf Networks


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Very High Load, kernel 2.4.18, apache/mysql
  2002-09-25 20:16 ` Adam Goldstein
  2002-09-25 21:26   ` Roger Larsson
  2002-09-26  3:03   ` Ernst Herzberg
@ 2002-10-01  5:36   ` David Rees
  2 siblings, 0 replies; 25+ messages in thread
From: David Rees @ 2002-10-01  5:36 UTC (permalink / raw)
  To: linux-kernel

I can second the PHPA recommendation.  Since you appear to be CPU bound
doing a lot of processing in httpd, anything you can do to speed them up
will help.  PHPA showed significant performance increases in my tests.
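For anyone wanting to try it: PHPA loads as a Zend extension in php.ini and needs an apache restart afterwards. A rough sketch; the php.ini location and the .so path are assumptions that vary by distro and PHPA release:

```shell
PHP_INI=/etc/php.ini                       # assumption; location varies by distro
PHPA_SO=/usr/local/lib/php_accelerator.so  # hypothetical path; use your PHPA build

if [ -f "$PHPA_SO" ] && [ -w "$PHP_INI" ]; then
    # PHPA is enabled by loading it as a zend_extension, then restarting apache.
    echo "zend_extension=\"$PHPA_SO\"" >> "$PHP_INI"
    apachectl graceful
else
    echo "adjust PHP_INI and PHPA_SO for your system first"
fi
```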

-Dave

On Wed, Sep 25, 2002 at 04:16:47PM -0400, Adam Goldstein wrote:
> During my investigation of php accelerator (which we put off before 
> thinking it would be better to stabilize the server first) I came 
> across a small blurb about php 4.1.2 (which we use) and mysql.
> 
> http://www.php-accelerator.co.uk/faq.php#segv2
> 
> Apparently this is how the site is written in some places, and it 
> causes instability in the php portion of the apache process. We are 
> fixing this now. Also, with the nodiratime, noatime, ext2 combination, 
> the load has decreased a little, but, not very much. It has still 
> reached >25 load when apache processes reached 120 (112 active 
> according to server-status) and page loads come to near dead stop... 
> segfaults still exist, even with fixed mysql connection calls. :(      
> 1-4/min under present  25+ load.
> 
> As for the syslog, unfort. almost every entry was marked async. I 
> changed an auth log entry but messages was already async. I left 
> kernel.errors sync, as it never really logs.
> 
> On Wednesday, September 25, 2002, at 04:55 AM, Randal, Phil wrote:
> 
> > Have you tried using PHP Accelerator?
> >
> > It's the only free PHP Cache which has survived my testing,
> > and should certainly reduce your CPU load.


* Re: Very High Load, kernel 2.4.18, apache/mysql
  2002-09-26 18:36     ` Marco Colombo
  2002-09-26 19:27       ` Rik van Riel
@ 2002-09-27  8:52       ` Martin Brulisauer
  1 sibling, 0 replies; 25+ messages in thread
From: Martin Brulisauer @ 2002-09-27  8:52 UTC (permalink / raw)
  To: Ernst Herzberg, Marco Colombo; +Cc: Adam Goldstein, linux-kernel


Turn on extended status in your apache configuration file:
-> ExtendedStatus On
so you can see more information on what the server is
doing. The information looks like:

Current Time: Friday, 27-Sep-2002 10:48:59 CEST
Restart Time: Tuesday, 27-Aug-2002 17:42:47 CEST
Parent Server Generation: 32
Server uptime: 30 days 17 hours 6 minutes 12 seconds
Total accesses: 256966 - Total Traffic: 1.7 GB
CPU Usage: u11.3027 s6.76172 cu14.4033 cs2.29785 - .00131% CPU load
.0968 requests/sec - 702 B/second - 7.1 kB/request
1 requests currently being processed, 5 idle servers
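With ExtendedStatus on, the same counters can also be scraped in machine-readable form. A sketch, assuming mod_status is mapped at /server-status on the local box (the key names differ slightly between apache 1.3 and 2.x):

```shell
# "?auto" makes mod_status emit a plain key/value format.
URL="http://localhost/server-status?auto"

if wget -q -O /tmp/status.txt "$URL" 2>/dev/null; then
    # Busy/idle counts plus the scoreboard summarize what every child is doing.
    grep -E '^(BusyServers|IdleServers|BusyWorkers|IdleWorkers|Scoreboard)' /tmp/status.txt
else
    echo "mod_status not reachable at $URL; enable it and run this on the server"
fi
```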

Martin


On 26 Sep 2002, at 20:36, Marco Colombo wrote:

> On Thu, 26 Sep 2002, Ernst Herzberg wrote:
> 
> > First reconfigure your apache, with
> > 
> > MaxClients 256  # absolute minimum, maybe you have to recompile apache
> > MinSpareServers 100  # better 150 to 200
> > MaxSpareServers 200 # bring it near MaxClients
> 
> KeepAlive		On
> MaxKeepAliveRequests	1000
> 
> .TM.
> 
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
> 




* Re: Very High Load, kernel 2.4.18, apache/mysql
  2002-09-26 19:27       ` Rik van Riel
  2002-09-26 20:02         ` Marco Colombo
@ 2002-09-26 20:25         ` Ernst Herzberg
  1 sibling, 0 replies; 25+ messages in thread
From: Ernst Herzberg @ 2002-09-26 20:25 UTC (permalink / raw)
  To: Rik van Riel, Marco Colombo; +Cc: Adam Goldstein, linux-kernel


KeepAlive		On  # that is ok
MaxKeepAliveRequests	1000  # that is too high. No client will request that many. Check your pages:
how many images/frames/etc do you need per request? It should not reach 100 ;-)
KeepAliveTimeout 15 # that is the key, but it is dangerous. Too high, and you will run out of MaxClients.
But you can see that in the server-status scoreboard ('K')
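A quick way to see what the server is actually configured with is to pull those directives straight out of httpd.conf; a sketch where the conf path is an assumption (adjust for your layout):

```shell
CONF=/etc/httpd/conf/httpd.conf   # assumption; e.g. /usr/local/apache/conf/httpd.conf

if [ -r "$CONF" ]; then
    # The keepalive and child-process limits being debated in this thread:
    grep -E '^(KeepAlive|MaxKeepAliveRequests|KeepAliveTimeout|MaxClients|MinSpareServers|MaxSpareServers)' "$CONF"
else
    echo "httpd.conf not found at $CONF; set CONF to your apache config path"
fi
```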

On Thursday, 26 September 2002 21:27, Rik van Riel wrote:
> On Thu, 26 Sep 2002, Marco Colombo wrote:
> > On Thu, 26 Sep 2002, Ernst Herzberg wrote:
> > > MaxClients 256  # absolute minimum, maybe you have to recompile apache
> > > MinSpareServers 100  # better 150 to 200
> > > MaxSpareServers 200 # bring it near MaxClients
> >
> > KeepAlive		On
> > MaxKeepAliveRequests	1000
>
> That sounds like an extraordinarily bad idea.  You really
> don't want to have ALL your apache daemons tied up with
> keepalive requests.
>
> Personally I never have MaxKeepAliveRequests set to more
> than 2/3 of MaxClients.
>


<Earny>



* Re: Very High Load, kernel 2.4.18, apache/mysql
  2002-09-26 20:02         ` Marco Colombo
@ 2002-09-26 20:09           ` Rik van Riel
  0 siblings, 0 replies; 25+ messages in thread
From: Rik van Riel @ 2002-09-26 20:09 UTC (permalink / raw)
  To: Marco Colombo; +Cc: Ernst Herzberg, Adam Goldstein, linux-kernel

On Thu, 26 Sep 2002, Marco Colombo wrote:

> Say we set MaxKeepAliveRequests to 190 (~2/3 of 256) instead of 1000.
>
> How many requests does a client perform before it hits the 15 sec idle
> timer?  Is it 189? The apache process is stuck in the timeout phase
> anyway. Is it 191? Then the first apache process drops the keepalive
> connection, the client reconnects to a second server process, which
> is stuck again in the timeout phase. Or am I missing something?

As I read it, MaxKeepAliveRequests is the maximum number of simultaneous
keepalive requests that are tying up apache processes.

regards,

Rik
-- 
A: No.
Q: Should I include quotations after my reply?

http://www.surriel.com/		http://distro.conectiva.com/



* Re: Very High Load, kernel 2.4.18, apache/mysql
  2002-09-26 19:27       ` Rik van Riel
@ 2002-09-26 20:02         ` Marco Colombo
  2002-09-26 20:09           ` Rik van Riel
  2002-09-26 20:25         ` Ernst Herzberg
  1 sibling, 1 reply; 25+ messages in thread
From: Marco Colombo @ 2002-09-26 20:02 UTC (permalink / raw)
  To: Rik van Riel; +Cc: Marco Colombo, Ernst Herzberg, Adam Goldstein, linux-kernel

On Thu, 26 Sep 2002, Rik van Riel wrote:

> On Thu, 26 Sep 2002, Marco Colombo wrote:
> > On Thu, 26 Sep 2002, Ernst Herzberg wrote:
> >
> > > MaxClients 256  # absolute minimum, maybe you have to recompile apache
> > > MinSpareServers 100  # better 150 to 200
> > > MaxSpareServers 200 # bring it near MaxClients
> >
> > KeepAlive		On
> > MaxKeepAliveRequests	1000
> 
> That sounds like an extraordinarily bad idea.  You really
> don't want to have ALL your apache daemons tied up with
> keepalive requests.

[this is sliding OT]

# MaxKeepAliveRequests: The maximum number of requests to allow
# during a persistent connection. Set to -1 to allow an unlimited amount.
# We recommend you leave this number high, for maximum performance.
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

(what "high" means is the question here, I believe)

> Personally I never have MaxKeepAliveRequests set to more
> than 2/3 of MaxClients.

There's a timeout (15 sec, by default), which kicks idle clients away.

I guess it depends on the kind of load. If you're serving just static pages,
I agree. If you're serving dynamic pages via SQL queries (especially with
authenticated connections), "session" setup cost may dominate.


Anyway, it's the "extraordinarily bad" part that I don't get.

Say we set MaxKeepAliveRequests to 190 (~2/3 of 256) instead of 1000.

How many requests does a client perform before it hits the 15 sec idle
timer?  Is it 189? The apache process is stuck in the timeout phase
anyway. Is it 191? Then the first apache process drops the keepalive
connection, the client reconnects to a second server process, which
is stuck again in the timeout phase. Or am I missing something?
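One way to settle this empirically is to benchmark the same page with and without keepalive. A sketch using ab (ApacheBench, shipped with apache); the URL is a placeholder for a representative page on the box in question:

```shell
URL="http://localhost/"   # placeholder; point at a real page on the server

if command -v ab >/dev/null 2>&1; then
    # Same request count and concurrency; -k turns HTTP keepalive on.
    ab -n 500 -c 20 "$URL"    2>/dev/null | grep 'Requests per second' || true
    ab -n 500 -c 20 -k "$URL" 2>/dev/null | grep 'Requests per second' || true
else
    echo "ab (ApacheBench) not installed; it ships with the apache distribution"
fi
```

If the -k run is not clearly faster, keepalive may mostly be pinning processes rather than saving setup cost.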

> 
> Rik
> 

.TM.



* Re: Very High Load, kernel 2.4.18, apache/mysql
  2002-09-26 18:36     ` Marco Colombo
@ 2002-09-26 19:27       ` Rik van Riel
  2002-09-26 20:02         ` Marco Colombo
  2002-09-26 20:25         ` Ernst Herzberg
  2002-09-27  8:52       ` Martin Brulisauer
  1 sibling, 2 replies; 25+ messages in thread
From: Rik van Riel @ 2002-09-26 19:27 UTC (permalink / raw)
  To: Marco Colombo; +Cc: Ernst Herzberg, Adam Goldstein, linux-kernel

On Thu, 26 Sep 2002, Marco Colombo wrote:
> On Thu, 26 Sep 2002, Ernst Herzberg wrote:
>
> > MaxClients 256  # absolute minimum, maybe you have to recompile apache
> > MinSpareServers 100  # better 150 to 200
> > MaxSpareServers 200 # bring it near MaxClients
>
> KeepAlive		On
> MaxKeepAliveRequests	1000

That sounds like an extraordinarily bad idea.  You really
don't want to have ALL your apache daemons tied up with
keepalive requests.

Personally I never have MaxKeepAliveRequests set to more
than 2/3 of MaxClients.

Rik
-- 
A: No.
Q: Should I include quotations after my reply?

http://www.surriel.com/		http://distro.conectiva.com/



* Re: Very High Load, kernel 2.4.18, apache/mysql
  2002-09-26  3:03   ` Ernst Herzberg
@ 2002-09-26 18:36     ` Marco Colombo
  2002-09-26 19:27       ` Rik van Riel
  2002-09-27  8:52       ` Martin Brulisauer
  0 siblings, 2 replies; 25+ messages in thread
From: Marco Colombo @ 2002-09-26 18:36 UTC (permalink / raw)
  To: Ernst Herzberg; +Cc: Adam Goldstein, linux-kernel

On Thu, 26 Sep 2002, Ernst Herzberg wrote:

> First reconfigure your apache, with
> 
> MaxClients 256  # absolute minimum, maybe you have to recompile apache
> MinSpareServers 100  # better 150 to 200
> MaxSpareServers 200 # bring it near MaxClients

KeepAlive		On
MaxKeepAliveRequests	1000

.TM.



* Re: Very High Load, kernel 2.4.18, apache/mysql
  2002-09-26 17:09       ` Joachim Breuer
@ 2002-09-26 17:16         ` Rik van Riel
  0 siblings, 0 replies; 25+ messages in thread
From: Rik van Riel @ 2002-09-26 17:16 UTC (permalink / raw)
  To: Joachim Breuer; +Cc: Adam Goldstein, linux-kernel

On Thu, 26 Sep 2002, Joachim Breuer wrote:

> In the olden days (at least I learnt that definition for a system
> based on 3.x BSD), the "load average" is the number of runnable
> processes (i.e. those that could do work if they got a slice of CPU
> time) averaged over some period of time (1, 5, 15 minutes).

> I don't know the concise definition in Linux's case either.

Extending your definition, the load average in Linux would be:

"the number of processes that could do work if they got a slice
 of CPU time or had their data in RAM instead of being blocked
 on disk"

Rik
-- 
A: No.
Q: Should I include quotations after my reply?

http://www.surriel.com/		http://distro.conectiva.com/



* Re: Very High Load, kernel 2.4.18, apache/mysql
  2002-09-25  2:38     ` Adam Goldstein
                         ` (2 preceding siblings ...)
  2002-09-25 22:54       ` Jose Luis Domingo Lopez
@ 2002-09-26 17:09       ` Joachim Breuer
  2002-09-26 17:16         ` Rik van Riel
  3 siblings, 1 reply; 25+ messages in thread
From: Joachim Breuer @ 2002-09-26 17:09 UTC (permalink / raw)
  To: Adam Goldstein; +Cc: linux-kernel

Adam Goldstein <Whitewlf@Whitewlf.net> writes:

> [...]
> cooperative data? Personally, I can't make heads or tails of the
> vmstat  output, and, I still have as of yet to get a -real- answer for
> what   "load" is.. besides the knee-jerk answer of "its the avg load
> over X  minutes".  :)

In the olden days (at least I learnt that definition for a system
based on 3.x BSD), the "load average" is the number of runnable
processes (i.e. those that could do work if they got a slice of CPU
time) averaged over some period of time (1, 5, 15 minutes).

So, naively speaking, upgrading the box to the number of CPUs indicated
by the average load will keep it well busy while getting the maximum
amount of work done. [Yes, of course this rule of thumb does not account
for the considerable overhead were one to really implement that scheme -
we used this measure when scaling hardware well before SMP x86 became
competitively available.]

For Linux the load average also seems to include some notion of the
fraction of time spent waiting for disk accesses; possibly Linux
counts the number of processes which are either Runnable or Waiting
for Disk.

I don't know the concise definition in Linux's case either.
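That reading can be checked directly on a Linux box: compare the kernel's own figures with an instantaneous count of R (runnable) and D (uninterruptible disk wait) processes. A rough sketch; the instantaneous count only approximates the decayed average:

```shell
# The kernel's 1, 5 and 15 minute load averages.
cut -d' ' -f1-3 /proc/loadavg

# Instantaneous count of processes in R or D state.
if command -v ps >/dev/null 2>&1; then
    ps -eo stat= | grep -c '^[RD]' || true
else
    echo "ps not available"
fi
```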


So long,
   Joe

-- 
"I use emacs, which might be thought of as a thermonuclear
 word processor."
-- Neal Stephenson, "In the beginning... was the command line"


* Re: Very High Load, kernel 2.4.18, apache/mysql
  2002-09-25 20:16 ` Adam Goldstein
  2002-09-25 21:26   ` Roger Larsson
@ 2002-09-26  3:03   ` Ernst Herzberg
  2002-09-26 18:36     ` Marco Colombo
  2002-10-01  5:36   ` David Rees
  2 siblings, 1 reply; 25+ messages in thread
From: Ernst Herzberg @ 2002-09-26  3:03 UTC (permalink / raw)
  To: Adam Goldstein; +Cc: linux-kernel

On Wednesday, 25 September 2002 22:16, Adam Goldstein wrote:

> [.....]  It has still
> reached >25 load when apache processes reached 120 (112 active
> according to server-status) and page loads come to near dead stop...
> segfaults still exist, even with fixed mysql connection calls. :(
> 1-4/min under present  25+ load.
>
> [.....]

> Server uptime: 2 hours 10 minutes 6 seconds
> 43 requests currently being processed, 13 idle servers

> KK_WW_WW_K_KWLWWWKW_KKKK.__K_WWW_WWW_K_WWWWK_WKWW_WKK.W...W....W...W..

>>   PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
>> 16800 apache    20   0  4732 4260  2988 R    37.7  0.2   0:35 httpd
>> 21171 apache    16   0  4976 4548  3268 R    36.6  0.2   2:02 httpd
>>  6949 apache    17   0  4604 4132  2936 R    36.5  0.2   0:53 httpd
>> 29183 apache    17   0  4900 4468  3192 R    36.0  0.2   6:18 httpd

--------------------------------------------------------

Looks very bad. Not the '>25 load' itself; don't panic if that reaches 50
or more, as long as the processors don't reach 100% at the same time.

First reconfigure your apache, with

MaxClients 256  # absolute minimum, maybe you have to recompile apache
MinSpareServers 100  # better 150 to 200
MaxSpareServers 200 # bring it near MaxClients

Make sure you have enough resources available: su to the apache user and
check that ulimit -a shows
data seg size (kbytes)      unlimited
file size (blocks)          unlimited
max locked memory (kbytes)  unlimited
max memory size (kbytes)    unlimited
open files                  65536 (!!)
pipe size (512 bytes)       8
stack size (kbytes)         unlimited
cpu time (seconds)          unlimited
max user processes          4095 (!!)
virtual memory (kbytes)     unlimited

cat /proc/sys/fs/file-max
131072

Your machine should handle that. 
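Those checks are easy to script so they can be run as the same user apache runs as; a minimal sketch (the limit values are the suggestions above, not universal requirements):

```shell
# Limits for the current shell; on the real box, run this after "su - apache".
ulimit -a

# The system-wide file handle ceiling; raise it via proc as root if needed.
cat /proc/sys/fs/file-max
# echo 131072 > /proc/sys/fs/file-max   # as root (shown commented out)
```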

Reason: bring the number of apache child forks to a minimum. But you have to be
careful: you need sufficient resources everywhere, for example enough max
client connections to mysql.

And increase the apache servers in several steps. If you have a bug or a bad
implementation in your php scripts, you can run out of CPU resources.

If the cpu usage is still around 100%, redesign your software or buy a bigger
machine ;-)

<Earny>


* Re: Very High Load, kernel 2.4.18, apache/mysql
  2002-09-25  2:38     ` Adam Goldstein
  2002-09-25  5:24       ` Simon Kirby
  2002-09-25 13:13       ` Rik van Riel
@ 2002-09-25 22:54       ` Jose Luis Domingo Lopez
  2002-09-26 17:09       ` Joachim Breuer
  3 siblings, 0 replies; 25+ messages in thread
From: Jose Luis Domingo Lopez @ 2002-09-25 22:54 UTC (permalink / raw)
  To: linux-kernel

On Tuesday, 24 September 2002, at 22:38:56 -0400,
Adam Goldstein wrote:

> Can anyone recommend any long term cumulative monitors for vmstat,  
> and/or other processes that could run behind the scenes and gather  
> cooperative data? Personally, I can't make heads or tails of the vmstat  
> output, and, I still have as of yet to get a -real- answer for what   
> "load" is.. besides the knee-jerk answer of "its the avg load over X  
> minutes".  :)
> 
apt-cache show sysstat
...
Description: sar, iostat and mpstat - system performance tools for Linux

The above are very well known performance monitoring tools used in the
UNIX world, that can gather periodic measures of many of your system's 
usage parameters. Check the man pages for details :-)
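For the interactive forms, interval-and-count arguments give short samples that are easy to eyeball; a sketch assuming the sysstat package is installed:

```shell
if command -v iostat >/dev/null 2>&1; then
    iostat 1 3                                              # CPU + per-disk throughput
    command -v sar >/dev/null 2>&1 && sar -u 1 3 || true    # CPU utilization samples
    command -v mpstat >/dev/null 2>&1 && mpstat 1 3 || true # per-processor breakdown
else
    echo "sysstat not installed (package name is usually 'sysstat')"
fi
```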

Hope this helps.

-- 
Jose Luis Domingo Lopez
Linux Registered User #189436     Debian Linux Woody (Linux 2.4.19-pre6aa1)


* Re: Very High Load, kernel 2.4.18, apache/mysql
  2002-09-25 20:16 ` Adam Goldstein
@ 2002-09-25 21:26   ` Roger Larsson
  2002-09-26  3:03   ` Ernst Herzberg
  2002-10-01  5:36   ` David Rees
  2 siblings, 0 replies; 25+ messages in thread
From: Roger Larsson @ 2002-09-25 21:26 UTC (permalink / raw)
  To: Adam Goldstein
  Cc: Paweł Krawczyk, Simon Kirby, Adam Taylor, linux-kernel,
	Sebastian Benoit

The big question is - why that much CPU usage?

Possible answers:
* PHP, mySQL, Apache - need that amount of CPU to perform the requested
function.
(you have got suggestions from others)

* The implementation of either has bugs that cause the CPU usage. Garbage
collection? Inefficient algorithms?
- Not much to do other than collecting execution profiles, which is quite
advanced - recompiling the tools will probably be needed... and probably
help from the tools' developers...

* The implementation of the user code has bugs that cause the CPU usage.
One example:
an SQL SELECT over unindexed data - this would usually show up as buffer
load in vmstat, but since all the data fits in memory it instead causes
scans in memory, with lots of RAM cache misses... and it would work well
only as long as the scanned data was smaller than the CPU cache.
- Suggestion: review your index keys and select statements to make sure
that they match!
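MySQL's EXPLAIN makes that review concrete: "type: ALL" in its output means a full scan with no usable index. A sketch where the database, table and column names are hypothetical stand-ins for the site's real schema:

```shell
# Hypothetical query; substitute one of the site's real SELECTs.
QUERY="EXPLAIN SELECT * FROM auctions WHERE closing_date > NOW()"

if command -v mysql >/dev/null 2>&1; then
    mysql -e "$QUERY" mydb 2>/dev/null || echo "adjust db name/credentials; query was: $QUERY"
else
    echo "mysql client not installed; query to try: $QUERY"
fi
```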

/RogerL


* Re: Very High Load, kernel 2.4.18, apache/mysql
       [not found] <0EBC45FCABFC95428EBFC3A51B368C9501AF4F@jessica.herefordshire.gov.uk>
@ 2002-09-25 20:16 ` Adam Goldstein
  2002-09-25 21:26   ` Roger Larsson
                     ` (2 more replies)
  0 siblings, 3 replies; 25+ messages in thread
From: Adam Goldstein @ 2002-09-25 20:16 UTC (permalink / raw)
  To: Randal, Phil
  Cc: Paweł Krawczyk, Simon Kirby, Adam Taylor, linux-kernel,
	Sebastian Benoit

During my investigation of php accelerator (which we put off before 
thinking it would be better to stabilize the server first) I came 
across a small blurb about php 4.1.2 (which we use) and mysql.

http://www.php-accelerator.co.uk/faq.php#segv2

Apparently this is how the site is written in some places, and it 
causes instability in the php portion of the apache process. We are 
fixing this now. Also, with the nodiratime, noatime, ext2 combination, 
the load has decreased a little, but, not very much. It has still 
reached >25 load when apache processes reached 120 (112 active 
according to server-status) and page loads come to near dead stop... 
segfaults still exist, even with fixed mysql connection calls. :(      
1-4/min under present  25+ load.

As for the syslog, unfort. almost every entry was marked async. I 
changed an auth log entry but messages was already async. I left 
kernel.errors sync, as it never really logs.

On Wednesday, September 25, 2002, at 04:55 AM, Randal, Phil wrote:

> Have you tried using PHP Accelerator?
>
> It's the only free PHP Cache which has survived my testing,
> and should certainly reduce your CPU load.
>
> Phil
>
> ---------------------------------------------
> Phil Randal
> Network Engineer
> Herefordshire Council
> Hereford, UK
>
>> -----Original Message-----
>> From: Adam Goldstein [mailto:Whitewlf@Whitewlf.net]
>> Sent: 25 September 2002 07:56
>> To: Simon Kirby
>> Cc: linux-kernel@vger.kernel.org; Adam Bernau; Adam Taylor
>> Subject: Re: Very High Load, kernel 2.4.18, apache/mysql
>>
>>
>> Have added nodiratime, missed that one, and switched to ext2 for
>> testing... ;)
>> It is still running high load, but seems only slightly better, but I
>> will know more later.
>> It is currently at 12-23 load, with 76 httpd processes running (75
>> mysql)
>> 0% idle, 89% user per cpu avg. about 8-12 httpd processes active
>> simultaneously.
>>
>> Using postfix on new server, not sure how to disable locking?
>> Same with mysql.. can locking be disabled? how? safe?
>>
>> No, no slappers ;) We did get that on the old server(almost
>> the moment
>> it was around, before it was widely known), one of the reasons I
>> scrapped it so fast.
>>
>> All patches are applied to the new servers; even though openssl reports
>> an old 'c' version, it is patched (many distros have done this, I do
>> not know why they simply don't use the new 'e'+ versions.)
>>
>> The site uses php heavily; every page has php includes and mysql lookups
>> (multiple languages, banner rotation, article rotation, etc...)
>>
>> The customer/developer is really quite good with php/html... just not
>> very adept at linux..yet ;)
>>
>> You can take a look at the site (ok netiquette?) http://delcampe.com
>> ... please excuse the intense lag, however, as we "are experiencing
>> technical difficulties" ... <har har>
>>
>> I will assume the combination of diratime, journaling, software raid,
>> mail locking and logging is a bad combination.... however, I have been
>> finding many instances online about software raid performing as well,
>> or in some cases better, than hardware raid setups (their tests, not
>> mine.. I would have assumed the reverse), but that reiser-fs performs
>> far better than ext3 under load (with notail, noatime enabled... tho
>> ext3 can outdo it under light load.)
>>
>> Thanks for all the info... I am going to run tests on it tomorrow
>> morning during 'rush hour'. (This server's users are mostly european,
>> so my peak times differ from our other sites... which is good for most
>> of the day...)
>>
>>
>> On Wednesday, September 25, 2002, at 01:24 AM, Simon Kirby wrote:
>>
>>> On Tue, Sep 24, 2002 at 10:38:56PM -0400, Adam Goldstein wrote:
>>>
>>>> [root@nosferatu whitewlf]# vmstat -n 1
>>>>    procs                      memory    swap          io     system         cpu
>>>>  r  b  w   swpd   free   buff  cache  si  so  bi   bo   in   cs  us sy id
>>>>  5  5  2  94076 1181592 61740 219676   0   0  10   16  125  111  69 12 19
>>>>  7  2  4  94076 1186024 61752 219664   0   0   0  948  454 1421  95  5  0
>>>> 10  2  2  94076 1172288 61764 219672   0   0   0 1024  468 1425  88 12  0
>>>>  7  2  3  94076 1175220 61772 219660   0   0   0 1236  509 1513  93  7  0
>>>>  5  2  2  94076 1187824 61784 219664   0   0   0  864  419 1524  87 13  0
>>>>  8  1  2  94076 1170140 61792 219656   0   0   0  656  362  945  88 12  0
>>>>  5  7  3  94076 1182448 61800 219712   0   0  36  696  580 1616  93  7  0
>>>>  5  4  3  94076 1186500 61808 219740   0   0  12 1252  595 1766  90 10  0
>>>>  8  1  3  94076 1177424 61812 219744   0   0   0 1124  497 1588  96  4  0
>>>>  8  3  3  94076 1167564 61824 219748   0   0   0 1136  485 1476  88 12  0
>>>>  5  4  2  94076 1187024 61836 219740   0   0   0 1204  473 1659  93  7  0
>>>> 10  6  3  94076 1180816 61840 219832   0   0  52 1124  668 3079  73 27  0
>>>>  6  6  2  94076 1184404 61840 219932   0   0  88 1356 1110 1886  94  6  0
>>>>  8  4  2  94076 1176276 61852 219948   0   0   0 1324  683 1819  89 11  0
>>>>  6  4  3  94076 1183948 61860 219932   0   0   0  984  441 1296  92  8  0
>>>> 11  1  2  94076 1177320 61872 219940   0   0   0  948  448 1351  88 12  0
>>>> 12  2  2  94076 1150268 61880 219952   0   0   0  952  438 1206  88 12  0
>>>
>>> (Yes, I reformatted your vmstat.)
>>>
>>> It's mostly CPU bound (see first column), but there is some disk
>>> waiting
>>> going on too (next two).  Most of the disk activity shows writing
>>> ("bo"),
>>> not reading ("bi").  There is some swap use, but no swap occurred
>>> during
>>> your dump ("si", "so"), so it's probably fine.
>>>
>>> Free memory is huge, which indicates either the box hasn't been up
>>> long, some huge process just exited and cleared a lot of memory with
>>> it, or your site really is small and doesn't need anywhere near that
>>> much memory.  Judging by the rate of disk reads ("bi"), it looks like
>>> it probably has more than enough memory.
>>>
>>> A lot of writeouts are happening, and they're happening all the time
>>> (not in five second bursts which would indicate regular asynchronous
>>> write out).  Are applications sync()ing, fsync()ing, fdatasync()ing,
>>> or using O_SYNC?  Are you using a journalling FS and are doing a lot
>>> of metadata (directory) changes?  We saw huge problems on our mail
>>> servers when we switched to ext3 from ext2 when with ext2 they were
>>> almost always idle (load went from 0.2, 0.4 to 20, 30) because we're
>>> using dotlocking which seems to annoy ext3.
>>>
>>> If you're using a database, try disabling fsync() mode.  Data integrity
>>> after crashes might be more interesting (insert fsync() flamewar here),
>>> but it might help a lot.  At least try it temporarily to see if this is
>>> what is causing the load.
>>>
>>> Always mount your filesystems with "noatime" and "nodiratime".  I mount
>>> hundreds of servers this way and nobody ever notices (except that disks
>>> last a lot longer and there are a lot fewer writeouts on servers that
>>> do a lot of reading, such as web servers).  If you don't do this,
>>> _every_ file read will result in a scheduled writeback to disk to
>>> update the atime (last accessed time).  Writing atime to disk is
>>> usually a dumb idea, because almost nothing uses it.  I think the only
>>> program in the wild I've ever seen that uses the atime field is the
>>> finger daemon (wow).
>>>
>>>> CPU0 states: 87.5% user, 12.0% system,  0.0% nice,  0.0% idle
>>>> CPU1 states: 90.2% user,  9.4% system,  0.0% nice,  0.0% idle
>>>
>>> Looks like mostly user CPU.
>>>
>>>>   PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM
>> TIME COMMAND
>>>> 16800 apache    20   0  4732 4260  2988 R    37.7  0.2   0:35 httpd
>>>> 21171 apache    16   0  4976 4548  3268 R    36.6  0.2   2:02 httpd
>>>>  6949 apache    17   0  4604 4132  2936 R    36.5  0.2   0:53 httpd
>>>> 29183 apache    17   0  4900 4468  3192 R    36.0  0.2   6:18 httpd
>>>
>>> First, check /tmp for .bugtraq.c, etc., and make sure this isn't the
>>> Slapper worm. :)
>>>
>>> Next, figure out why these processes have taken _minutes_
>> of CPU time
>>> and
>>> are still running!  If these aren't the worm, you're likely using
>>> mod_perl or mod_php or something which can make the httpd
>> proess take
>>> that much CPU.  Check which scripts and what conditions are creating
>>> those processes.  Play around in /proc/16800/fd, look at
>>> /proc/16800/cwd,
>>> etc., if you can't determine what is happening by the logs.
>>  If you're
>>> still stuck, try tracing them (see below).  If it's hard to
>> catch them
>>> (though it appears they are slugs), switching mod_perl/mod_php to
>>> standalone CGIs may help.
>>>
>>> To summarize, it looks like the box is both CPU bound (above Apache
>>> processes) and blocking on disk writes.  The processes
>> using the CPU
>>> are
>>> not responsible for the writing out because they are in 'R' state
>>> (running); if they were writing, they would be in mostly 'D' state.
>>>
>>> If you want to see which processes are writing out, try:
>>>
>>> 	ps auxw | grep ' D '
>>>
>>> 	(Might give false positives -- just looking for 'D' state.)
>>>
>>> If you want to see whether the journalling code is doing
>> the writing,
>>> try:
>>>
>>> 	ps -eo pid,stat,args,wchan | grep ' D '
>>>
>>> ...and see which functions the 'D' state processes are blocking in
>>> (requires your System.map file to be up-to-date).  If you
>> see something
>>> about do_get_write_access (a function in fs/jbd/transaction.c), it's
>>> likely the ext3 journalling causing all of the writing.
>> This is what I
>>> saw in our case with the mail servers.
>>>
>>> This "ps" command is also useful for figuring out what other
>>> non-running
>>> processes are doing, too.  However, the wchan field often shows just
>>> "down", which isn't very helpful.
>>>
>>> If you are getting a lot of processes sleeping in "down" and want to
>>> figure out where they are actually stuck, try heading over to the
>>> console
>>> and hit right_control-scroll_lock.  Modern kernels will
>> print a stack
>>> backtrace for each process, and you can manually translate
>> the the EIP
>>> locations in /System.map or /boot/System.map (whatever matches your
>>> kernel) to the function names to find functions the kernel
>> is/was in.
>>>
>>> To find the function in System.map, first make make sure it
>> is sorted.
>>> Next, incrementally search for the first EIP, number by
>> number.  The
>>> EIP
>>> provided in the process list dump will always be higher
>> than the actual
>>> function offset, because it will be somewhere in the middle of the
>>> function (System.map lists the beginning of each function).  If you
>>> don't
>>> have incremental search, this might be tedious.  Some versions of
>>> "klogd"
>>> will do this translation for you; you might want to check your
>>> kern.log.
>>> You may also be able to coax "ksymoops" into doing the
>> translation for
>>> you.
>>>
>>> If you cannot find a match in System.map, the EIP may be in a module
>>> (requires loading modules with a symbol dump to trace).
>> Try the next
>>> EIP
>>> first, you can often get a good idea of what is happening by just
>>> tracing
>>> further back.  Once you've done this a few times, you'll get used to
>>> seeing the module offsets being quite different from
>> built-in offsets.
>>>
>>> If you want to figure out what a running ('R') process is
>> doing, first
>>> try "strace -p <pid>".  If it's not making many or any system calls
>>> (eg:
>>> an endless loop or very user-CPU-intensive loop), try
>> ltrace.  If that
>>> provides nothing useful the only other option is to try
>> attaching to it
>>> with gdb and do a backtrace:
>>>
>>> 	gdb /proc/<pid>/exe
>>> 	attach <pid>
>>> 	bt
>>>
>>> ...but you may need to compile with debugging symbols for this to
>>> provide
>>> useful output.  Chances are you won't need to do this, and "strace"
>>> will give you a pretty good idea about what is happening.
>>>
>>> There should be enough information you can gather from
>> these tools to
>>> figure out what is happening.  "vmstat 1" is usually the
>> quickest way
>>> to
>>> get a general idea of what is happening, and "ps auxwr" and
>> "ps aux |
>>> grep ' D '" are useful for starting to narrow it down.
>>>
>>> Hope this helps. :)
>>>
>>> Simon-
>>>
>>> [  Stormix Technologies Inc.  ][  NetNation Communications Inc. ]
>>> [       sim@stormix.com       ][       sim@netnation.com        ]
>>> [ Opinions expressed are not necessarily those of my employers. ]
>>>
>> -- 
>> Adam Goldstein
>> White Wolf Networks
>>
-- 
Adam Goldstein
White Wolf Networks


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Very High Load, kernel 2.4.18, apache/mysql
  2002-09-25  2:38     ` Adam Goldstein
  2002-09-25  5:24       ` Simon Kirby
@ 2002-09-25 13:13       ` Rik van Riel
  2002-09-25 22:54       ` Jose Luis Domingo Lopez
  2002-09-26 17:09       ` Joachim Breuer
  3 siblings, 0 replies; 25+ messages in thread
From: Rik van Riel @ 2002-09-25 13:13 UTC (permalink / raw)
  To: Adam Goldstein; +Cc: Roger Larsson, linux-kernel, Adam Taylor

On Tue, 24 Sep 2002, Adam Goldstein wrote:

> These are under current load, i will run a full snap of tests tomorrow
> during peak load.

> 235 processes: 229 sleeping, 6 running, 0 zombie, 0 stopped
> CPU0 states: 87.5% user, 12.0% system,  0.0% nice,  0.0% idle
> CPU1 states: 90.2% user,  9.4% system,  0.0% nice,  0.0% idle

OK, this looks like you're just running out of CPU power.

Rik
-- 
Bravely reimplemented by the knights who say "NIH".

http://www.surriel.com/		http://distro.conectiva.com/

Spamtraps of the month:  september@surriel.com trac@trac.org


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Very High Load, kernel 2.4.18, apache/mysql
  2002-09-25  7:20           ` Simon Kirby
@ 2002-09-25  7:51             ` Paweł Krawczyk
  0 siblings, 0 replies; 25+ messages in thread
From: Paweł Krawczyk @ 2002-09-25  7:51 UTC (permalink / raw)
  To: Simon Kirby; +Cc: Adam Goldstein, linux-kernel

On Wed, Sep 25, 2002 at 12:20:26AM -0700, Simon Kirby wrote:

> Again, not locking, but fsync().  It's safe provided your machine never
> crashes. :)  Of course, there's still a chance it can be corrupted
> _with_ fsync() anyway, but the difference is the clients will get a
> result before it guarantees the data will be on disk.

Many Linux distributions configure syslog to use synchronous writes
for each logged line, which has caused very high load on busy systems
I've seen.

Go through your /etc/syslog.conf and change every "/var/log/messages"
to "-/var/log/messages", the minus enables asynchronous writes.

Also try disabling Apache logging entirely for a while (set ErrorLog,
AccessLog or CustomLog to /dev/null) and see what happens.
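
As a concrete sketch, the change looks like this in a stock
/etc/syslog.conf (the selectors here are only examples):

```
# synchronous: syslogd fsync()s after every logged line
mail.*                          /var/log/maillog

# asynchronous: the leading "-" skips the fsync()
mail.*                          -/var/log/maillog
```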

-- 
Paweł Krawczyk, Kraków, Poland  http://echelon.pl/kravietz/
horses: http://kabardians.com/
crypto: http://ipsec.pl/

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Very High Load, kernel 2.4.18, apache/mysql
  2002-09-25  6:56         ` Adam Goldstein
@ 2002-09-25  7:20           ` Simon Kirby
  2002-09-25  7:51             ` Paweł Krawczyk
  0 siblings, 1 reply; 25+ messages in thread
From: Simon Kirby @ 2002-09-25  7:20 UTC (permalink / raw)
  To: Adam Goldstein; +Cc: linux-kernel, Adam Bernau, Adam Taylor

On Wed, Sep 25, 2002 at 02:56:18AM -0400, Adam Goldstein wrote:

> Have added nodiratime, missed that one, and switched to ext2 for 
> testing... ;)
> It is still running high load and seems only slightly better; I will
> know more later.

Yes, nodiratime will only make a tiny difference.

> Using postfix on new server, not sure how to disable locking?

It's not locking you'd want to disable.  If anything, it's the
synchronous writes to disk of data which may or may not even need to go
to disk (eg: an email that gets delivered almost instantly and
subsequently removed from disk just after it was written).  The idea with
a journal, however, is that it can keep track of such emails sequentially
on disk rather than seeking all over the place, and write the ones that
will stick around later.  Your output rate is too low to be bounded by a
sequential write limit alone, especially on software RAID, so it's most
likely doing a lot of seeking while writing.

> Same with mysql.. can locking be disabled? how? safe?

Again, not locking, but fsync().  It's safe provided your machine never
crashes. :)  Of course, there's still a chance it can be corrupted
_with_ fsync() anyway, but the difference is the clients will get a
result before it guarantees the data will be on disk.

First narrow down what is causing most of the writing activity.

> The site uses php heavily, every page has php includes and mysql lookups
> (multiple languages, banner rotation, article rotation, etc...)

I see.  The CPU side of your load appears to come mostly from the PHP under
mod_php (unless something else is running).  Those processes you showed
in top were running for so long that they were probably never going to
output anything (or at least the client wouldn't be there anymore), so it
looks like a code bug.  You should debug this.

> You can take a look at the site (ok netiquette?) http://delcampe.com 

It definitely seems slow. :)

> I will assume the combination of diratime, journaling, software raid, 
> mail locking and logging are a
> bad combination.... however, I have been finding many instances online 

Software RAID won't slow it down.  diratime won't make any noticeable
difference.  Logging is usually sequential.  Journalling _with_ mail
locking might be a concern, but more than likely you're just seeing the
result of fsync().  What sort of mail load do you have?  What about the
MySQL write load?

Simon-

[        Simon Kirby        ][        Network Operations        ]
[     sim@netnation.com     ][     NetNation Communications     ]
[  Opinions expressed are not necessarily those of my employer. ]

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Very High Load, kernel 2.4.18, apache/mysql
  2002-09-25  5:24       ` Simon Kirby
@ 2002-09-25  6:56         ` Adam Goldstein
  2002-09-25  7:20           ` Simon Kirby
  0 siblings, 1 reply; 25+ messages in thread
From: Adam Goldstein @ 2002-09-25  6:56 UTC (permalink / raw)
  To: Simon Kirby; +Cc: linux-kernel, Adam Bernau, Adam Taylor

Have added nodiratime, missed that one, and switched to ext2 for 
testing... ;)
It is still running high load and seems only slightly better; I will
know more later.
It is currently at 12-23 load, with 76 httpd processes running (75
mysql),
0% idle, 89% user per CPU avg., and about 8-12 httpd processes active
simultaneously.

Using postfix on new server, not sure how to disable locking?
Same with mysql.. can locking be disabled? how? safe?

No, no slappers ;) We did get that on the old server (almost the moment
it was around, before it was widely known), which is one of the reasons I
scrapped it so fast.

All patches are applied to the new servers; even though openssl reports
the old 'c' version, it is patched (many distros have done this, and I do
not know why they simply don't use the new 'e'+ versions).

The site uses php heavily, every page has php includes and mysql lookups
(multiple languages, banner rotation, article rotation, etc...)

The customer/developer is really quite good with php/html... just not 
very adept at linux..yet ;)

You can take a look at the site (ok netiquette?) http://delcampe.com
... please excuse the intense lag, however, as we "are experiencing
technical difficulties" ... <har har>

I will assume the combination of diratime, journaling, software raid,
mail locking and logging is a bad combination.... however, I have been
finding many instances online about software raid performing as well as,
or in some cases better than, hardware raid setups (their tests, not
mine.. I would have assumed the reverse), but also that reiser-fs
performs far better than ext3 under load (with notail, noatime
enabled... though ext3 can outdo it under light load.)

Thanks for all the info... I am going to run tests on it tomorrow 
morning during 'rush hour'. (This server's users are mostly european, 
so my peak times differ from our other sites... which is good for most 
of the day...)


On Wednesday, September 25, 2002, at 01:24 AM, Simon Kirby wrote:

> On Tue, Sep 24, 2002 at 10:38:56PM -0400, Adam Goldstein wrote:
>
>> [root@nosferatu whitewlf]# vmstat -n 1
>>    procs                      memory  swap       io     system       
>> cpu
>>  r  b  w   swpd   free   buff  cache si so  bi   bo   in   cs   us sy 
>> id
>>  5  5  2  94076 1181592 61740 219676  0  0  10   16  125   111  69 12 
>> 19
>>  7  2  4  94076 1186024 61752 219664  0  0   0  948  454  1421  95  5 
>>  0
>> 10  2  2  94076 1172288 61764 219672  0  0   0 1024  468  1425  88 12 
>>  0
>>  7  2  3  94076 1175220 61772 219660  0  0   0 1236  509  1513  93  7 
>>  0
>>  5  2  2  94076 1187824 61784 219664  0  0   0  864  419  1524  87 13 
>>  0
>>  8  1  2  94076 1170140 61792 219656  0  0   0  656  362   945  88 12 
>>  0
>>  5  7  3  94076 1182448 61800 219712  0  0  36  696  580  1616  93  7 
>>  0
>>  5  4  3  94076 1186500 61808 219740  0  0  12 1252  595  1766  90 10 
>>  0
>>  8  1  3  94076 1177424 61812 219744  0  0   0 1124  497  1588  96  4 
>>  0
>>  8  3  3  94076 1167564 61824 219748  0  0   0 1136  485  1476  88 12 
>>  0
>>  5  4  2  94076 1187024 61836 219740  0  0   0 1204  473  1659  93  7 
>>  0
>> 10  6  3  94076 1180816 61840 219832  0  0  52 1124  668  3079  73 27 
>>  0
>>  6  6  2  94076 1184404 61840 219932  0  0  88 1356 1110  1886  94  6 
>>  0
>>  8  4  2  94076 1176276 61852 219948  0  0   0 1324  683  1819  89 11 
>>  0
>>  6  4  3  94076 1183948 61860 219932  0  0   0  984  441  1296  92  8 
>>  0
>> 11  1  2  94076 1177320 61872 219940  0  0   0  948  448  1351  88 12 
>>  0
>> 12  2  2  94076 1150268 61880 219952  0  0   0  952  438  1206  88 12 
>>  0
>
> (Yes, I reformatted your vmstat.)
>
> It's mostly CPU bound (see first column), but there is some disk 
> waiting
> going on too (next two).  Most of the disk activity shows writing 
> ("bo"),
> not reading ("bi").  There is some swap use, but no swap occurred 
> during
> your dump ("si", "so"), so it's probably fine.
>
> Free memory is huge, which indicates either the box hasn't been up 
> long,
> some huge process just exited and cleared a lot of memory with it, or
> your site really is small and doesn't need anywhere near that much
> memory.  Judging by the rate of disk reads ("bi"), it looks like it
> probably has more than enough memory.
>
> A lot of writeouts are happening, and they're happening all the time 
> (not
> in five second bursts which would indicate regular asynchronous write
> out).  Are applications sync()ing, fsync()ing, fdatasync()ing, or using
> O_SYNC?  Are you using a journalling FS and are doing a lot of metadata
> (directory) changes?  We saw huge problems on our mail servers when we
> switched to ext3 from ext2; with ext2 they were almost always idle
> (load went from 0.2, 0.4 to 20, 30) because we're using dotlocking 
> which
> seems to annoy ext3.
>
> If you're using a database, try disabling fsync() mode.  Data integrity
> after crashes might be more interesting (insert fsync() flamewar here),
> but it might help a lot.  At least try it temporarily to see if this is
> what is causing the load.
>
> Always mount your filesystems with "noatime" and "nodiratime".  I mount
> hundreds of servers this way and nobody ever notices (except that disks
> last a lot longer and there are a lot fewer writeouts on servers that 
> do a
> lot of reading, such as web servers).  If you don't do this, _every_ 
> file
> read will result in a scheduled writeback to disk to update the atime
> (last accessed time).  Writing atime to disk is usually a dumb idea,
> because almost nothing uses it.  I think the only program in the wild
> I've ever seen that uses the atime field is the finger daemon (wow).
>
>> CPU0 states: 87.5% user, 12.0% system,  0.0% nice,  0.0% idle
>> CPU1 states: 90.2% user,  9.4% system,  0.0% nice,  0.0% idle
>
> Looks like mostly user CPU.
>
>>   PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
>> 16800 apache    20   0  4732 4260  2988 R    37.7  0.2   0:35 httpd
>> 21171 apache    16   0  4976 4548  3268 R    36.6  0.2   2:02 httpd
>>  6949 apache    17   0  4604 4132  2936 R    36.5  0.2   0:53 httpd
>> 29183 apache    17   0  4900 4468  3192 R    36.0  0.2   6:18 httpd
>
> First, check /tmp for .bugtraq.c, etc., and make sure this isn't the
> Slapper worm. :)
>
> Next, figure out why these processes have taken _minutes_ of CPU time 
> and
> are still running!  If these aren't the worm, you're likely using
> mod_perl or mod_php or something which can make the httpd process take
> that much CPU.  Check which scripts and what conditions are creating
> those processes.  Play around in /proc/16800/fd, look at 
> /proc/16800/cwd,
> etc., if you can't determine what is happening by the logs.  If you're
> still stuck, try tracing them (see below).  If it's hard to catch them
> (though it appears they are slugs), switching mod_perl/mod_php to
> standalone CGIs may help.
>
> To summarize, it looks like the box is both CPU bound (above Apache
> processes) and blocking on disk writes.  The processes using the CPU 
> are
> not responsible for the writing out because they are in 'R' state
> (running); if they were writing, they would be in mostly 'D' state.
>
> If you want to see which processes are writing out, try:
>
> 	ps auxw | grep ' D '
>
> 	(Might give false positives -- just looking for 'D' state.)
>
> If you want to see whether the journalling code is doing the writing,
> try:
>
> 	ps -eo pid,stat,args,wchan | grep ' D '
>
> ...and see which functions the 'D' state processes are blocking in
> (requires your System.map file to be up-to-date).  If you see something
> about do_get_write_access (a function in fs/jbd/transaction.c), it's
> likely the ext3 journalling causing all of the writing.  This is what I
> saw in our case with the mail servers.
>
> This "ps" command is also useful for figuring out what other 
> non-running
> processes are doing, too.  However, the wchan field often shows just
> "down", which isn't very helpful.
>
> If you are getting a lot of processes sleeping in "down" and want to
> figure out where they are actually stuck, try heading over to the 
> console
> and hit right_control-scroll_lock.  Modern kernels will print a stack
> backtrace for each process, and you can manually translate the EIP
> locations in /System.map or /boot/System.map (whatever matches your
> kernel) to the function names to find functions the kernel is/was in.
>
> To find the function in System.map, first make sure it is sorted.
> Next, incrementally search for the first EIP, number by number.  The 
> EIP
> provided in the process list dump will always be higher than the actual
> function offset, because it will be somewhere in the middle of the
> function (System.map lists the beginning of each function).  If you 
> don't
> have incremental search, this might be tedious.  Some versions of 
> "klogd"
> will do this translation for you; you might want to check your 
> kern.log.
> You may also be able to coax "ksymoops" into doing the translation for
> you.
>
> If you cannot find a match in System.map, the EIP may be in a module
> (requires loading modules with a symbol dump to trace).  Try the next 
> EIP
> first; you can often get a good idea of what is happening by just 
> tracing
> further back.  Once you've done this a few times, you'll get used to
> seeing the module offsets being quite different from built-in offsets.
>
> If you want to figure out what a running ('R') process is doing, first
> try "strace -p <pid>".  If it's not making many or any system calls 
> (eg:
> an endless loop or very user-CPU-intensive loop), try ltrace.  If that
> provides nothing useful, the only other option is to try attaching to it
> with gdb and do a backtrace:
>
> 	gdb /proc/<pid>/exe
> 	attach <pid>
> 	bt
>
> ...but you may need to compile with debugging symbols for this to 
> provide
> useful output.  Chances are you won't need to do this, and "strace"
> will give you a pretty good idea about what is happening.
>
> There should be enough information you can gather from these tools to
> figure out what is happening.  "vmstat 1" is usually the quickest way 
> to
> get a general idea of what is happening, and "ps auxwr" and "ps aux |
> grep ' D '" are useful for starting to narrow it down.
>
> Hope this helps. :)
>
> Simon-
>
> [  Stormix Technologies Inc.  ][  NetNation Communications Inc. ]
> [       sim@stormix.com       ][       sim@netnation.com        ]
> [ Opinions expressed are not necessarily those of my employers. ]
>
-- 
Adam Goldstein
White Wolf Networks


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Very High Load, kernel 2.4.18, apache/mysql
  2002-09-25  2:38     ` Adam Goldstein
@ 2002-09-25  5:24       ` Simon Kirby
  2002-09-25  6:56         ` Adam Goldstein
  2002-09-25 13:13       ` Rik van Riel
                         ` (2 subsequent siblings)
  3 siblings, 1 reply; 25+ messages in thread
From: Simon Kirby @ 2002-09-25  5:24 UTC (permalink / raw)
  To: Adam Goldstein; +Cc: linux-kernel

On Tue, Sep 24, 2002 at 10:38:56PM -0400, Adam Goldstein wrote:

> [root@nosferatu whitewlf]# vmstat -n 1
>    procs                      memory  swap       io     system       cpu
>  r  b  w   swpd   free   buff  cache si so  bi   bo   in   cs   us sy id
>  5  5  2  94076 1181592 61740 219676  0  0  10   16  125   111  69 12 19
>  7  2  4  94076 1186024 61752 219664  0  0   0  948  454  1421  95  5  0
> 10  2  2  94076 1172288 61764 219672  0  0   0 1024  468  1425  88 12  0
>  7  2  3  94076 1175220 61772 219660  0  0   0 1236  509  1513  93  7  0
>  5  2  2  94076 1187824 61784 219664  0  0   0  864  419  1524  87 13  0
>  8  1  2  94076 1170140 61792 219656  0  0   0  656  362   945  88 12  0
>  5  7  3  94076 1182448 61800 219712  0  0  36  696  580  1616  93  7  0
>  5  4  3  94076 1186500 61808 219740  0  0  12 1252  595  1766  90 10  0
>  8  1  3  94076 1177424 61812 219744  0  0   0 1124  497  1588  96  4  0
>  8  3  3  94076 1167564 61824 219748  0  0   0 1136  485  1476  88 12  0
>  5  4  2  94076 1187024 61836 219740  0  0   0 1204  473  1659  93  7  0
> 10  6  3  94076 1180816 61840 219832  0  0  52 1124  668  3079  73 27  0
>  6  6  2  94076 1184404 61840 219932  0  0  88 1356 1110  1886  94  6  0
>  8  4  2  94076 1176276 61852 219948  0  0   0 1324  683  1819  89 11  0
>  6  4  3  94076 1183948 61860 219932  0  0   0  984  441  1296  92  8  0
> 11  1  2  94076 1177320 61872 219940  0  0   0  948  448  1351  88 12  0
> 12  2  2  94076 1150268 61880 219952  0  0   0  952  438  1206  88 12  0

(Yes, I reformatted your vmstat.)

It's mostly CPU bound (see first column), but there is some disk waiting
going on too (next two).  Most of the disk activity shows writing ("bo"),
not reading ("bi").  There is some swap use, but no swap occurred during
your dump ("si", "so"), so it's probably fine.

Free memory is huge, which indicates either the box hasn't been up long,
some huge process just exited and cleared a lot of memory with it, or
your site really is small and doesn't need anywhere near that much
memory.  Judging by the rate of disk reads ("bi"), it looks like it
probably has more than enough memory.

A lot of writeouts are happening, and they're happening all the time (not
in five second bursts which would indicate regular asynchronous write
out).  Are applications sync()ing, fsync()ing, fdatasync()ing, or using
O_SYNC?  Are you using a journalling FS and are doing a lot of metadata
(directory) changes?  We saw huge problems on our mail servers when we
switched to ext3 from ext2; with ext2 they were almost always idle
(load went from 0.2, 0.4 to 20, 30) because we're using dotlocking which
seems to annoy ext3.

If you're using a database, try disabling fsync() mode.  Data integrity
after crashes might be more interesting (insert fsync() flamewar here),
but it might help a lot.  At least try it temporarily to see if this is
what is causing the load.
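
If the database is MySQL and the tables are InnoDB, the usual knob for
this is innodb_flush_log_at_trx_commit; a sketch (do this only as a
temporary experiment, since it trades crash safety for speed):

```
# /etc/my.cnf
[mysqld]
# 0 = write and flush the InnoDB log about once per second
#     instead of fsync()ing on every commit
innodb_flush_log_at_trx_commit = 0
```

(MyISAM tables don't fsync() on every query anyway, so if the site is
MyISAM-only the writeouts are probably coming from somewhere else.)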

Always mount your filesystems with "noatime" and "nodiratime".  I mount
hundreds of servers this way and nobody ever notices (except that disks
last a lot longer and there are a lot fewer writeouts on servers that do a
lot of reading, such as web servers).  If you don't do this, _every_ file
read will result in a scheduled writeback to disk to update the atime
(last accessed time).  Writing atime to disk is usually a dumb idea,
because almost nothing uses it.  I think the only program in the wild
I've ever seen that uses the atime field is the finger daemon (wow).
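
In /etc/fstab the options go in the fourth field; the device and mount
point below are only examples, and a remount applies the change without
a reboot:

```
# /etc/fstab
/dev/md0   /home   ext3   defaults,noatime,nodiratime   1 2

# apply immediately:
#   mount -o remount,noatime,nodiratime /home
```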

> CPU0 states: 87.5% user, 12.0% system,  0.0% nice,  0.0% idle
> CPU1 states: 90.2% user,  9.4% system,  0.0% nice,  0.0% idle

Looks like mostly user CPU.

>   PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
> 16800 apache    20   0  4732 4260  2988 R    37.7  0.2   0:35 httpd
> 21171 apache    16   0  4976 4548  3268 R    36.6  0.2   2:02 httpd
>  6949 apache    17   0  4604 4132  2936 R    36.5  0.2   0:53 httpd
> 29183 apache    17   0  4900 4468  3192 R    36.0  0.2   6:18 httpd

First, check /tmp for .bugtraq.c, etc., and make sure this isn't the
Slapper worm. :)

Next, figure out why these processes have taken _minutes_ of CPU time and
are still running!  If these aren't the worm, you're likely using
mod_perl or mod_php or something which can make the httpd process take
that much CPU.  Check which scripts and what conditions are creating
those processes.  Play around in /proc/16800/fd, look at /proc/16800/cwd,
etc., if you can't determine what is happening by the logs.  If you're
still stuck, try tracing them (see below).  If it's hard to catch them
(though it appears they are slugs), switching mod_perl/mod_php to
standalone CGIs may help.

To summarize, it looks like the box is both CPU bound (above Apache
processes) and blocking on disk writes.  The processes using the CPU are
not responsible for the writing out because they are in 'R' state
(running); if they were writing, they would be in mostly 'D' state.

If you want to see which processes are writing out, try:

	ps auxw | grep ' D '

	(Might give false positives -- just looking for 'D' state.)

If you want to see whether the journalling code is doing the writing,
try:

	ps -eo pid,stat,args,wchan | grep ' D '

...and see which functions the 'D' state processes are blocking in
(requires your System.map file to be up-to-date).  If you see something
about do_get_write_access (a function in fs/jbd/transaction.c), it's
likely the ext3 journalling causing all of the writing.  This is what I
saw in our case with the mail servers.

This "ps" command is also useful for figuring out what other non-running
processes are doing, too.  However, the wchan field often shows just
"down", which isn't very helpful.

If you are getting a lot of processes sleeping in "down" and want to
figure out where they are actually stuck, try heading over to the console
and hit right_control-scroll_lock.  Modern kernels will print a stack
backtrace for each process, and you can manually translate the EIP
locations in /System.map or /boot/System.map (whatever matches your
kernel) to the function names to find functions the kernel is/was in.

To find the function in System.map, first make sure it is sorted.
Next, incrementally search for the first EIP, number by number.  The EIP
provided in the process list dump will always be higher than the actual
function offset, because it will be somewhere in the middle of the
function (System.map lists the beginning of each function).  If you don't
have incremental search, this might be tedious.  Some versions of "klogd"
will do this translation for you; you might want to check your kern.log. 
You may also be able to coax "ksymoops" into doing the translation for
you.
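
The incremental search can also be scripted.  A small sketch (eip2sym is
a made-up helper name; it assumes the EIP is given in the same
fixed-width lowercase hex used by the sorted System.map, so plain string
comparison orders the addresses correctly):

```shell
# eip2sym EIP SYSTEM_MAP -- print the symbol whose body contains EIP.
# The map must be sorted by address ("sort System.map" if unsure).
eip2sym() {
    awk -v eip="$1" '
        $1 <= eip { sym = $3 }  # remember the last symbol at or below EIP
        $1 >  eip { exit }      # map is sorted, so nothing later matches
        END { print (sym ? sym : "not found (module?)") }
    ' "$2"
}
```

For example, "eip2sym c0105123 /boot/System.map" prints the name of the
function that starts at the highest address not above c0105123.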

If you cannot find a match in System.map, the EIP may be in a module
(requires loading modules with a symbol dump to trace).  Try the next EIP
first; you can often get a good idea of what is happening by just tracing
further back.  Once you've done this a few times, you'll get used to
seeing the module offsets being quite different from built-in offsets.

If you want to figure out what a running ('R') process is doing, first
try "strace -p <pid>".  If it's not making many or any system calls (eg:
an endless loop or very user-CPU-intensive loop), try ltrace.  If that
provides nothing useful, the only other option is to try attaching to it
with gdb and do a backtrace:

	gdb /proc/<pid>/exe
	attach <pid>
	bt

...but you may need to compile with debugging symbols for this to provide
useful output.  Chances are you won't need to do this, and "strace"
will give you a pretty good idea about what is happening.

There should be enough information you can gather from these tools to
figure out what is happening.  "vmstat 1" is usually the quickest way to
get a general idea of what is happening, and "ps auxwr" and "ps aux |
grep ' D '" are useful for starting to narrow it down.

Hope this helps. :)

Simon-

[  Stormix Technologies Inc.  ][  NetNation Communications Inc. ]
[       sim@stormix.com       ][       sim@netnation.com        ]
[ Opinions expressed are not necessarily those of my employers. ]

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Very High Load, kernel 2.4.18, apache/mysql
  2002-09-25  1:28   ` Rik van Riel
  2002-09-25  2:38     ` Adam Goldstein
@ 2002-09-25  3:50     ` Bernd Eckenfels
  1 sibling, 0 replies; 25+ messages in thread
From: Bernd Eckenfels @ 2002-09-25  3:50 UTC (permalink / raw)
  To: linux-kernel

In article <Pine.LNX.4.44L.0209242223040.22735-100000@imladris.surriel.com> you wrote:
> If it's IO bound, it's quite possible the problem is the disk
> elevator and Andrew Morton's read-latency2 patch might help
> somewhat (if the system is heavy on both reads and writes).

hmm.. it does not look so from the posted stats, or do you think so? 10%
system time and 0% idle.

> It would make sense to study the output of top and vmstat for
> a few hours to identify exactly what the problem is

yes, I think so.

Greetings
Bernd

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Very High Load, kernel 2.4.18, apache/mysql
  2002-09-24 23:27 Adam Goldstein
  2002-09-25  0:59 ` Roger Larsson
@ 2002-09-25  3:48 ` Bernd Eckenfels
  1 sibling, 0 replies; 25+ messages in thread
From: Bernd Eckenfels @ 2002-09-25  3:48 UTC (permalink / raw)
  To: linux-kernel

In article <37EF12D6-D015-11D6-AD2E-000502C90EA3@Whitewlf.net> you wrote:
> I have been trying to find an answer to this for the past couple weeks, 
> but I have finally broken down and must post this to this list. ;)

Ok, but I wonder a bit about your question. You have a big workload and
therefore a high load on the system; there is not much you can do about it
from a kernel perspective. I guess the PHP is one of the problems here;
perhaps you should start to benchmark your most-used web pages for
database access patterns.

> We also see high amounts of apache children segfaulting under load... 
> as high as 2-10/minute at times.

This is strange. Is this a resource limit problem or an Apache bug? Which
Apache version are you using?

> reducing tcp timeouts, etc. The big users of CPU are typically apache 
> and mysql. About 110+ instances of apache and mysqld each run in top at 
> high load. CPU use bounces wildly, with most in user space.

What are your Apache parameters for min/max spare servers, MaxClients, etc.?
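(The parameters in question are Apache 1.3's process-pool directives. A
hypothetical httpd.conf fragment with illustrative values only -- not taken
from any of the machines described in this thread:)

```apache
# Apache 1.3 process-pool settings (illustrative values, not the poster's config)
StartServers         20    # children forked at startup
MinSpareServers      10    # fork more children if the idle pool drops below this
MaxSpareServers      30    # kill children if the idle pool exceeds this
MaxClients          150    # hard cap on concurrent children
MaxRequestsPerChild 500    # recycle children; also limits damage from leaks/segfaults
```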

> Machines:
> Moya (Which ran OK):

does that mean it ran OK and had a high load, or does that mean it did not
have such a high load?

> Dual Thunder K7 1900+MPs, 1.5Gddr/ecc/reg ram
> dual u160 scsi,   3x18G soft raid 5 /home(ext3), 9G / (ext3) & /boot 
> (ext2) & 512Mb swap

this is a bit short on RAM, and you are probably better off using RAID-1 here.

> Anubis:
> Dual PIII  440lx, 1.0Gpc100/ecc ram

this is much slower; no wonder the system is sluggish if you have CPU-bound
tasks. Can you post some vmstat results, to see whether you are IO, CPU, or
RAM bound?

>   7:04pm  up 1 day, 15:56,  2 users,  load average: 18.95, 17.81, 16.21
> 236 processes: 223 sleeping, 13 running, 0 zombie, 0 stopped
> CPU0 states: 89.1% user, 10.5% system,  0.0% nice,  0.0% idle
> CPU1 states: 84.2% user, 15.4% system,  0.0% nice,  0.0% idle
> Mem:  2061772K av, 1949428K used,  112344K free,       0K shrd,  290984K buff
> Swap: 1493808K av,   48420K used, 1445388K free                  882200K cached

to me this looks good; it is a very balanced workload. No idle time. Your
system seems to be mostly CPU bound here.

Greetings
Bernd


* Re: Very High Load, kernel 2.4.18, apache/mysql
  2002-09-25  1:28   ` Rik van Riel
@ 2002-09-25  2:38     ` Adam Goldstein
  2002-09-25  5:24       ` Simon Kirby
                         ` (3 more replies)
  2002-09-25  3:50     ` Bernd Eckenfels
  1 sibling, 4 replies; 25+ messages in thread
From: Adam Goldstein @ 2002-09-25  2:38 UTC (permalink / raw)
  To: Rik van Riel; +Cc: Roger Larsson, linux-kernel, Adam Taylor

Moya used ext3 as well (I listed the file systems & partitions for each
machine near the bottom of the post).

It hasn't run out of RAM, even though I have set mysql to use -a lot-.
2 gigs of RAM should be more than enough for this... I would hope. 1.5G
was OK before.

These are under current load; I will run a full snap of tests tomorrow
during peak load.

Can anyone recommend any long-term cumulative monitors for vmstat,
and/or other processes that could run behind the scenes and gather
cooperative data? Personally, I can't make heads or tails of the vmstat
output, and I have yet to get a -real- answer for what
"load" is... besides the knee-jerk answer of "it's the avg load over X
minutes".  :)
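(For reference: the three loadavg numbers are the run-queue length --
runnable plus uninterruptible-sleep processes -- averaged over 1, 5, and 15
minutes, readable from /proc/loadavg. As for a cumulative monitor, one
minimal approach is sketched below as a portable shell function; the function
name and usage are made up here, not an existing tool. From cron, the sampler
command would be something like `vmstat 1 2 | tail -1`, which drops the
since-boot summary line and keeps one real one-second sample.)

```shell
#!/bin/sh
# log_samples N LOGFILE CMD...: append N timestamped runs of CMD to LOGFILE.
# A sketch of a long-term vmstat logger; run periodically, the resulting
# file lets load spikes be correlated with io/swap/cpu columns after the fact.
log_samples() {
    n=$1; log=$2; shift 2
    i=0
    while [ "$i" -lt "$n" ]; do
        # One timestamped line per sample, e.g.:
        #   2002-09-25 02:38:00  5  5  2  94076 ...
        echo "$(date '+%Y-%m-%d %H:%M:%S') $("$@")" >> "$log"
        i=$((i + 1))
    done
}
```

Invoked from cron as, say, `log_samples 1 /var/log/vmstat.log sh -c 'vmstat 1 2 | tail -1'`
(path illustrative), this accumulates one sample per cron run.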

[root@nosferatu whitewlf]# vmstat -n 1
procs                      memory      swap          io     system         cpu
 r  b  w   swpd    free   buff   cache  si  so    bi    bo   in    cs  us sy id
 5  5  2  94076 1181592  61740  219676   0   0    10    16  125   111  69 12 19
 7  2  4  94076 1186024  61752  219664   0   0     0   948  454  1421  95  5  0
10  2  2  94076 1172288  61764  219672   0   0     0  1024  468  1425  88 12  0
 7  2  3  94076 1175220  61772  219660   0   0     0  1236  509  1513  93  7  0
 5  2  2  94076 1187824  61784  219664   0   0     0   864  419  1524  87 13  0
 8  1  2  94076 1170140  61792  219656   0   0     0   656  362   945  88 12  0
 5  7  3  94076 1182448  61800  219712   0   0    36   696  580  1616  93  7  0
 5  4  3  94076 1186500  61808  219740   0   0    12  1252  595  1766  90 10  0
 8  1  3  94076 1177424  61812  219744   0   0     0  1124  497  1588  96  4  0
 8  3  3  94076 1167564  61824  219748   0   0     0  1136  485  1476  88 12  0
 5  4  2  94076 1187024  61836  219740   0   0     0  1204  473  1659  93  7  0
10  6  3  94076 1180816  61840  219832   0   0    52  1124  668  3079  73 27  0
 6  6  2  94076 1184404  61840  219932   0   0    88  1356 1110  1886  94  6  0
 8  4  2  94076 1176276  61852  219948   0   0     0  1324  683  1819  89 11  0
 6  4  3  94076 1183948  61860  219932   0   0     0   984  441  1296  92  8  0
11  1  2  94076 1177320  61872  219940   0   0     0   948  448  1351  88 12  0
12  2  2  94076 1150268  61880  219952   0   0     0   952  438  1206  88 12  0

here is a snap of top (idles off):
  10:21pm  up 1 day, 19:13,  2 users,  load average: 12.53, 12.30, 11.85
235 processes: 229 sleeping, 6 running, 0 zombie, 0 stopped
CPU0 states: 87.5% user, 12.0% system,  0.0% nice,  0.0% idle
CPU1 states: 90.2% user,  9.4% system,  0.0% nice,  0.0% idle
Mem:  2061772K av,  867640K used, 1194132K free,       0K shrd,    57560K buff
Swap: 1493808K av,   94080K used, 1399728K free                   198052K cached

   PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
16800 apache    20   0  4732 4260  2988 R    37.7  0.2   0:35 httpd
21171 apache    16   0  4976 4548  3268 R    36.6  0.2   2:02 httpd
  6949 apache    17   0  4604 4132  2936 R    36.5  0.2   0:53 httpd
29183 apache    17   0  4900 4468  3192 R    36.0  0.2   6:18 httpd
21179 root      19   0  1200 1200   812 R     9.3  0.0   0:07 top
21584 amavis     9   0  6840 6840   632 D     3.8  0.3   0:00 sweep
21585 amavis     9   0  6836 6836   632 D     3.8  0.3   0:00 sweep
    21 root      10   0     0    0     0 DW    1.2  0.0  16:52 kjournald
25742 postfix    9   0  1864 1864  1288 D     0.7  0.0   3:46 qmgr
17272 apache     9   0  4412 3924  2928 D     0.6  0.1   0:00 httpd
     4 root      19  19     0    0     0 RWN   0.0  0.0   0:01 ksoftirqd_CPU1
20854 postfix    9   0  1540 1540  1188 D     0.0  0.0   0:00 cleanup
21362 postfix    9   0  1376 1376  1080 D     0.0  0.0   0:00 smtp
21365 postfix    9   0  1356 1356  1060 D     0.0  0.0   0:00 smtp
21399 apache     9   0  4344 4244  4072 D     0.0  0.2   0:00 httpd
21401 apache     9   0  5212 5112  4176 D     0.0  0.2   0:00 httpd

Also, I ran a bonnie++ test this eve during the nightly lull. It pushed  
the load from about
7 to 16, then settled in at about 12 during the test.

Version 1.02a       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
nosferatu        4G 14665  88 53478  46 21632  21  5415  27 39694  18 216.2   2

                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
   64:20000:16/512    876  11  4980  20  3596  14  2497  29   891   4   578   3
nosferatu,4G,14665,88,53478,46,21632,21,5415,27,39694,18,216.2,2,64:20000:16/512,876,11,4980,20,3596,14,2497,29,891,4,578,3

On Tuesday, September 24, 2002, at 09:28 PM, Rik van Riel wrote:

> On Wed, 25 Sep 2002, Roger Larsson wrote:
>
>> Have you been able to determine if it is I/O bound or CPU bound?
>> Or maybe using too much CPU to do I/O?
>>
>> Does anyone know which virtual memory system Mandrake uses?
>
> If it's IO bound, it's quite possible the problem is the disk
> elevator and Andrew Morton's read-latency2 patch might help
> somewhat (if the system is heavy on both reads and writes).
>
> If the system is short on RAM and/or swapping, that might be
> a VM thing or just a shortage of RAM...
>
> It would make sense to study the output of top and vmstat for
> a few hours to identify exactly what the problem is, instead
> of trying to fix all kinds of random things that aren't the
> core problem.
>
> regards,
>
> Rik
> -- 
> Bravely reimplemented by the knights who say "NIH".
>
> http://www.surriel.com/		http://distro.conectiva.com/
>
> Spamtraps of the month:  september@surriel.com trac@trac.org
>
-- 
Adam Goldstein
White Wolf Networks



* Re: Very High Load, kernel 2.4.18, apache/mysql
  2002-09-25  0:59 ` Roger Larsson
@ 2002-09-25  1:28   ` Rik van Riel
  2002-09-25  2:38     ` Adam Goldstein
  2002-09-25  3:50     ` Bernd Eckenfels
  0 siblings, 2 replies; 25+ messages in thread
From: Rik van Riel @ 2002-09-25  1:28 UTC (permalink / raw)
  To: Roger Larsson; +Cc: Adam Goldstein, linux-kernel, Adam Taylor

On Wed, 25 Sep 2002, Roger Larsson wrote:

> Have you been able to determine if it is I/O bound or CPU bound?
> Or maybe using too much CPU to do I/O?
>
> Does anyone know which virtual memory system Mandrake uses?

If it's IO bound, it's quite possible the problem is the disk
elevator and Andrew Morton's read-latency2 patch might help
somewhat (if the system is heavy on both reads and writes).

If the system is short on RAM and/or swapping, that might be
a VM thing or just a shortage of RAM...

It would make sense to study the output of top and vmstat for
a few hours to identify exactly what the problem is, instead
of trying to fix all kinds of random things that aren't the
core problem.

regards,

Rik
-- 
Bravely reimplemented by the knights who say "NIH".

http://www.surriel.com/		http://distro.conectiva.com/

Spamtraps of the month:  september@surriel.com trac@trac.org



* Re: Very High Load, kernel 2.4.18, apache/mysql
  2002-09-24 23:27 Adam Goldstein
@ 2002-09-25  0:59 ` Roger Larsson
  2002-09-25  1:28   ` Rik van Riel
  2002-09-25  3:48 ` Bernd Eckenfels
  1 sibling, 1 reply; 25+ messages in thread
From: Roger Larsson @ 2002-09-25  0:59 UTC (permalink / raw)
  To: Adam Goldstein, linux-kernel; +Cc: Adam Taylor

Asking some of the things I guess others will ask later, but I won't
look into this any more tonight.

Have you been able to determine if it is I/O bound or CPU bound?
Or maybe using too much CPU to do I/O?

Does anyone know which virtual memory system Mandrake uses?
 Linus', Andrea's, or Riel's?
 Have you tried Mandrake's support?

vmstat	over some time would be nice, to get a hint on what it is doing.
ext3		do you use the same journaling mode as on Moya?
top		how much CPU time do the kernel processes use?

/RogerL

-- 
Roger Larsson
Skellefteå
Sweden



* Very High Load, kernel 2.4.18, apache/mysql
@ 2002-09-24 23:27 Adam Goldstein
  2002-09-25  0:59 ` Roger Larsson
  2002-09-25  3:48 ` Bernd Eckenfels
  0 siblings, 2 replies; 25+ messages in thread
From: Adam Goldstein @ 2002-09-24 23:27 UTC (permalink / raw)
  To: linux-kernel; +Cc: Adam Taylor

I have been trying to find an answer to this for the past couple weeks, 
but I have finally
broken down and must post this to this list. ;)

I am running a high user load site (>20 million hits/month stamp auction
site) which runs entirely on apache/php with mysql. It was running
smoothly (for the most part) as a virtual server on a relatively
nice box (see Moya below), but started needing more and more disk space
(from uploads, logs, etc) and kept running out of space on the root
partition (including /var... which has mysql & weblogs).

I decided to build a new box for it, which we were shipping to a 
highspeed colo facility.

While this unit was slightly less powerful, it was a clean install with
a larger root partition (see Anubis below). This unit started acting
pathetic, and had fewer other loads on it (the old box also has lots of
samba sharing and other low-traffic websites). The system load stayed
high constantly: 5-8 during non-peak hours, an average of 10-20
during most times, and spikes >100... needless to say, any load over
5-10 made the unit a pile of dung.

My partner is running a similar site, under debian, on similar hardware 
(almost identical, actually) and is having -very- similar problems.

I stripped the old server, packed it into a new 4U case (-packed!-) and 
moved just the one site (29G including pictures and sql data) to it, 
and the results are no better. This unit has even more ram, and, more 
hard drive space. (See Nosferatu below)

We are at the end of our ropes, and are clearing our chalkboards to
start testing pieces of our systems... problem is, testing these systems
is difficult due to needing to put live loads on them. We need to
narrow down the search, and need your help... please...

We also see large numbers of apache children segfaulting under load...
as high as 2-10/minute at times. I have tried turning off atimes,
reducing tcp timeouts, etc. The big users of CPU are typically apache
and mysql. About 110+ instances each of apache and mysqld run in top at
high load. CPU use bounces wildly, with most of it in user space.
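(One way to quantify that segfault rate over time is to count the notices
Apache 1.3 writes to its error_log. A sketch -- the function name is made up,
and the message format is assumed from stock Apache 1.3, which logs
"child pid N exit signal Segmentation fault (11)"; adjust the pattern and
log path to the actual setup:)

```shell
#!/bin/sh
# segfault_count LOG: count Apache child segfault reports in LOG.
# Apache 1.3 logs them as e.g.:
#   [notice] child pid 12345 exit signal Segmentation fault (11)
segfault_count() {
    grep -c 'exit signal Segmentation fault' "$1"
}
```

Run periodically (e.g. `segfault_count /var/log/httpd/error_log`, path
illustrative) and diffed against the previous count, this gives a
segfaults-per-interval figure to correlate with load.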

Also, the number of open files/handles on the machines is staggering.

[root@nosferatu whitewlf]# lsof | wc -l
   42068
[root@nosferatu whitewlf]# cat /proc/sys/fs/inode-nr
84976   36563
[root@nosferatu whitewlf]# cat /proc/sys/fs/fil
file-max  file-nr
[root@nosferatu whitewlf]# cat /proc/sys/fs/file-max
8192
[root@nosferatu whitewlf]# cat /proc/sys/fs/file-
file-max  file-nr
[root@nosferatu whitewlf]# cat /proc/sys/fs/file-nr
4198    1052    8192
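(On 2.4, file-nr's three fields are allocated handles, allocated-but-unused
handles, and the file-max ceiling, so the in-use count above is
4198 - 1052 = 3146 against a ceiling of 8192. A small sketch of that
arithmetic -- the function name is made up:)

```shell
#!/bin/sh
# file_headroom "ALLOC FREE MAX": given a 2.4 /proc/sys/fs/file-nr line
# (allocated, allocated-but-unused, system-wide ceiling), print how many
# more handles the kernel can still hand out: MAX - (ALLOC - FREE).
file_headroom() {
    set -- $1            # split the three whitespace-separated fields
    echo $(( $3 - ($1 - $2) ))
}
```

With the numbers above, `file_headroom "4198 1052 8192"` prints 5046. If the
in-use count approaches file-max under peak load, raising the ceiling
(e.g. `echo 32768 > /proc/sys/fs/file-max`; the value is only an
illustrative choice) would be one thing to try.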

Machines:
Moya (Which ran OK):
Dual Thunder K7 1900+MPs, 1.5Gddr/ecc/reg ram
dual u160 scsi,   3x18G soft raid 5 /home(ext3), 9G / (ext3) & /boot 
(ext2) & 512Mb swap
Mandrake 8.1, kernel 2.4.8,  40G ide backup
Apache 1.3.23, mod_ssl/2.8.7 OpenSSL/0.9.6c PHP/4.1.2
Mysql 3.23.47 (large & huge my.conf files)

Anubis:
Dual PIII  440lx, 1.0Gpc100/ecc ram
dual u2 scsi,   3x18G soft raid 5 /home(ext3), 18G / (ext3) & /boot 
(ext2)
Mandrake 8.2, kernel 2.4.18,   80G ide backup
Apache 1.3.23, mod_ssl/2.8.7 OpenSSL/0.9.6c PHP/4.1.2
Mysql 3.23.47 and 3.23.52 (large & huge my.conf files)

Nosferatu:
Dual Thunder K7 1900+MPs, 2.0Gddr/ecc/reg ram
dual u160 scsi,   7x18G soft raid 5 /(ext3) & (250 MB/boot sda & 250MB 
swap on others)
Mandrake 8.2, kernel 2.4.18,   80G ide backup
Apache 1.3.23, mod_ssl/2.8.7 OpenSSL/0.9.6c PHP/4.1.2
Mysql 3.23.47 (large & huge my.conf files)

Current usage snapshots (low usage at this hour):
(Threads: 55  Questions: 13038219  Slow queries: 12879  Opens: 620
Flush tables: 1  Open tables: 512  Queries per second avg: 90.952)

7:01pm  up 1 day, 15:54,  2 users,  load average: 20.51, 17.21, 15.78

   7:04pm  up 1 day, 15:56,  2 users,  load average: 18.95, 17.81, 16.21
236 processes: 223 sleeping, 13 running, 0 zombie, 0 stopped
CPU0 states: 89.1% user, 10.5% system,  0.0% nice,  0.0% idle
CPU1 states: 84.2% user, 15.4% system,  0.0% nice,  0.0% idle
Mem:  2061772K av, 1949428K used,  112344K free,       0K shrd,  290984K buff
Swap: 1493808K av,   48420K used, 1445388K free                  882200K cached

Server uptime: 2 hours 10 minutes 6 seconds
43 requests currently being processed, 13 idle servers

KK_WW_WW_K_KWLWWWKW_KKKK.__K_WWW_WWW_K_WWWWK_WKWW_WKK.W...W....W...W..

(My partner's boxes have been similar to the above, except the first was a
TigerMPX, 1G ram, ide drives (the first ran ok... it ran out of space as
well); the second is nearly identical to Anubis except Debian Woody, no PHP
used, mostly CGIs, 35million+ hits. Same system loads, sometimes
spiraling out of control, needing apache shutdown.)
-- 
Adam Goldstein
White Wolf Networks



end of thread, other threads:[~2002-10-01  5:31 UTC | newest]

Thread overview: 25+ messages
     [not found] <3D90FD7B.9080209@wanadoo.fr>
2002-09-25  1:12 ` Very High Load, kernel 2.4.18, apache/mysql Adam Goldstein
     [not found] <0EBC45FCABFC95428EBFC3A51B368C9501AF4F@jessica.herefordshire.gov.uk>
2002-09-25 20:16 ` Adam Goldstein
2002-09-25 21:26   ` Roger Larsson
2002-09-26  3:03   ` Ernst Herzberg
2002-09-26 18:36     ` Marco Colombo
2002-09-26 19:27       ` Rik van Riel
2002-09-26 20:02         ` Marco Colombo
2002-09-26 20:09           ` Rik van Riel
2002-09-26 20:25         ` Ernst Herzberg
2002-09-27  8:52       ` Martin Brulisauer
2002-10-01  5:36   ` David Rees
2002-09-24 23:27 Adam Goldstein
2002-09-25  0:59 ` Roger Larsson
2002-09-25  1:28   ` Rik van Riel
2002-09-25  2:38     ` Adam Goldstein
2002-09-25  5:24       ` Simon Kirby
2002-09-25  6:56         ` Adam Goldstein
2002-09-25  7:20           ` Simon Kirby
2002-09-25  7:51             ` Paweł Krawczyk
2002-09-25 13:13       ` Rik van Riel
2002-09-25 22:54       ` Jose Luis Domingo Lopez
2002-09-26 17:09       ` Joachim Breuer
2002-09-26 17:16         ` Rik van Riel
2002-09-25  3:50     ` Bernd Eckenfels
2002-09-25  3:48 ` Bernd Eckenfels
