Subject: Number of open files scalability?
From: Andrey Borzenkov @ 2009-06-20  9:32 UTC
To: linux-kernel

Hi,

we have a customer that requires a large number of open files. Basically, 
it is SAP with a large Oracle database and a relatively large number of 
concurrent connections from worker processes. Right now the number of 
permanently open files is above 128000; with current trends in database 
and load growth it could easily rocket to 1000000 and beyond.

So the questions are:

- is there any per-process or per-user limit on the number of open files 
imposed by the kernel (other than, of course, the ones set by rlimits)? 
The sketch below this list shows how we check the relevant limits.

- is there any fs/file-max limit other than the one imposed by its data 
type (int)?

- finally, how scalable is the implementation? Will having one million 
open files impose any noticeable slowdown? If yes, which operations are 
affected? I.e. opening new files or creating new processes is not that 
important, but having to search through 1000000 files on every operation 
would be fatal. A rough benchmark sketch follows after the platform note 
below.
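For reference, a minimal sketch of how we would check both limits in 
question on a test box (assuming glibc and a mounted /proc; this is only 
illustrative, not our production tooling):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
	struct rlimit rl;
	FILE *f;
	char buf[64];

	/* Per-process limit on open files (what "ulimit -n" reports). */
	if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
		printf("RLIMIT_NOFILE: soft=%llu hard=%llu\n",
		       (unsigned long long)rl.rlim_cur,
		       (unsigned long long)rl.rlim_max);

	/* System-wide ceiling on open file handles. */
	f = fopen("/proc/sys/fs/file-max", "r");
	if (f != NULL) {
		if (fgets(buf, sizeof(buf), f) != NULL)
			printf("fs/file-max: %s", buf);
		fclose(f);
	}
	return 0;
}

Compiling this with gcc and running it as the database user shows the 
limits that user actually inherits, which is what matters here.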

The platform is x86_64, SLES 9 with a likely update to SLES 10.
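For the scalability question, a rough, hypothetical micro-benchmark 
sketch: it opens N descriptors on /dev/null and reports the average 
open(2) cost as the fd table grows. The soft rlimit would have to be 
raised above N beforehand, and the numbers are only a crude indicator:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <sys/time.h>

int main(int argc, char **argv)
{
	long n = (argc > 1) ? atol(argv[1]) : 100000;
	struct timeval t0, t1;
	long i;

	gettimeofday(&t0, NULL);
	for (i = 0; i < n; i++) {
		/* Each open() adds one entry to the process fd table. */
		if (open("/dev/null", O_RDONLY) < 0) {
			perror("open");
			break;
		}
	}
	gettimeofday(&t1, NULL);

	if (i > 0)
		printf("%ld opens, %.2f us/open on average\n", i,
		       ((t1.tv_sec - t0.tv_sec) * 1e6 +
			(t1.tv_usec - t0.tv_usec)) / i);
	return 0;
}

Running it with increasing N would at least show whether per-open cost 
grows with the number of already-open descriptors.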

Thank you!

-andrey
