linux-kernel.vger.kernel.org archive mirror
* Re: Linux 2.4: Allocation of >1GB in one chunk
@ 2003-08-12  2:01 Anthony Truong
  0 siblings, 0 replies; 3+ messages in thread
From: Anthony Truong @ 2003-08-12  2:01 UTC (permalink / raw)
  To: Geert Uytterhoeven; +Cc: Christian Mautner, Linux Kernel Development

On Tue, 2003-08-12 at 18:00, Geert Uytterhoeven wrote:

On Mon, 11 Aug 2003, Christian Mautner wrote:
> please forgive me for asking this (perhaps newbie?) question here on
> l-k, but I'm desperate. This is my problem:
> 
> I am running various kinds of EDA software on 32-bit Linux, and they
> need substantial amounts of memory. I am running 2.4.21 with
> PAGE_OFFSET at 0xc0000000, so I can run processes just over 3GB. The
> machine (a dual Xeon) has 4GB memory and 4GB swap.
> 
> But there is this one program now that dies because it's out of
> memory. No surprise, as this happens frequently with tasks that would
> need 4GB or more.
> 
> But this one needs less than 3GB. What it does need (I strace'ed
> this) is 1.3GB in one contiguous chunk.
> 
> I wrote a test program to mimic this:
> 
> The attached program allocates argv[1] MB in 1MB chunks and argv[2] MB
> in one big chunk. (The original version also touched every page, but
> this makes no difference here.)
> 
> [chm@trex7:~/C] ./foo 2500 500
> Will allocate 2621440000 bytes in 1MB chunks...
> Will allocate 524288000 bytes in one chunk...
> Succeeded.
> 
> [chm@trex7:~/C] ./foo 1500 1000
> Will allocate 1572864000 bytes in 1MB chunks...
> Will allocate 1048576000 bytes in one chunk...
> malloc: Cannot allocate memory
> Out of memory.
> 
> The first call allocates 3GB and succeeds; the second allocates only
> 2.5GB but fails!
> 
> The thing that comes to my mind is memory fragmentation, but how could
> that be, with virtual memory? 

Virtual memory fixes physical memory fragmentation only, i.e. you can
`glue' multiple physical chunks together into one large virtual chunk.

But you're still limited to a 32-bit virtual address space (3 GB in
user space). If this virtual 3 GB gets fragmented, you're still out of
luck.

To check this, print all allocated virtual addresses, or look at
/proc/<pid>/maps, and see why it fails.

Gr{oetje,eeting}s,

                                                Geert


Hello,
There is indeed a fragmentation problem even with a virtual address
space.  I think that in the second foo call, Christian ran into exactly
this: the 3GB of user address space was too fragmented to leave a
single contiguous 1000MB hole.
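
One quick way to separate address-space fragmentation from genuine
memory exhaustion is to reverse the order of Christian's allocations:
grab the big chunk while the address space is still empty, then fill in
the 1MB pieces.  A minimal sketch, not from the thread; the sizes
mirror the failing ./foo 1500 1000 run, and the outcome depends on the
malloc and kernel mmap policies in use:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	unsigned long mb;
	char *big;

	/* reserve the single 1000MB chunk first, while the virtual
	 * address space is still unfragmented */
	big = malloc(1000UL * 1024 * 1024);
	if (big == NULL) {
		perror("malloc (big chunk)");
		return 1;
	}

	/* then allocate the 1500 small 1MB chunks */
	for (mb = 0; mb < 1500; mb++) {
		if (malloc(1024 * 1024) == NULL) {
			fprintf(stderr, "out of memory after %lu small chunks\n", mb);
			return 1;
		}
	}

	fprintf(stderr, "both the big chunk and the 1MB chunks fit\n");
	return 0;
}

If this order succeeds where ./foo 1500 1000 fails, total memory is
clearly sufficient and the limit is where the chunks end up in the 3GB
user space.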

Regards,
Anthony Dominic Truong.

* Re: Linux 2.4: Allocation of >1GB in one chunk
  2003-08-11 17:49 Christian Mautner
@ 2003-08-12 10:00 ` Geert Uytterhoeven
  0 siblings, 0 replies; 3+ messages in thread
From: Geert Uytterhoeven @ 2003-08-12 10:00 UTC (permalink / raw)
  To: Christian Mautner; +Cc: Linux Kernel Development

On Mon, 11 Aug 2003, Christian Mautner wrote:
> please forgive me for asking this (perhaps newbie?) question here on
> l-k, but I'm desperate. This is my problem:
> 
> I am running various kinds of EDA software on 32-bit Linux, and they
> need substantial amounts of memory. I am running 2.4.21 with
> PAGE_OFFSET at 0xc0000000, so I can run processes just over 3GB. The
> machine (a dual Xeon) has 4GB memory and 4GB swap.
> 
> But there is this one program now that dies because it's out of
> memory. No surprise, as this happens frequently with tasks that would
> need 4GB or more.
> 
> But this one needs less than 3GB. What it does need (I strace'ed
> this) is 1.3GB in one contiguous chunk.
> 
> I wrote a test program to mimic this:
> 
> The attached program allocates argv[1] MB in 1MB chunks and argv[2] MB
> in one big chunk. (The original version also touched every page, but
> this makes no difference here.)
> 
> [chm@trex7:~/C] ./foo 2500 500
> Will allocate 2621440000 bytes in 1MB chunks...
> Will allocate 524288000 bytes in one chunk...
> Succeeded.
> 
> [chm@trex7:~/C] ./foo 1500 1000
> Will allocate 1572864000 bytes in 1MB chunks...
> Will allocate 1048576000 bytes in one chunk...
> malloc: Cannot allocate memory
> Out of memory.
> 
> The first call allocates 3GB and succeeds; the second allocates only
> 2.5GB but fails!
> 
> The thing that comes to my mind is memory fragmentation, but how could
> that be, with virtual memory? 

Virtual memory fixes physical memory fragmentation only. I.e. you can `glue'
multiple physical chunks together into one large virtual chunk.

But you're still limited to a 32-bit virtual address space (3 GB in user
space). If this virtual 3 GB gets fragmented, you're still out of luck.

To check this, print all allocated virtual addresses, or look at
/proc/<pid>/maps, and see why it fails.
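
For reference, the /proc/<pid>/maps check can also be automated.  Below
is a minimal sketch (not from the thread) that walks /proc/self/maps
and reports the largest unmapped gap; the 0xC0000000 user-space ceiling
is an assumption matching the PAGE_OFFSET Christian mentions.

#include <stdio.h>
#include <stdlib.h>

#define USER_TOP 0xC0000000UL	/* assumed 3GB/1GB split (PAGE_OFFSET) */

int main(void)
{
	FILE *f = fopen("/proc/self/maps", "r");
	char line[256];
	unsigned long start, end, prev_end = 0, largest = 0;

	if (f == NULL) {
		perror("fopen /proc/self/maps");
		return 1;
	}

	/* each line starts with "start-end" in hex,
	 * e.g. "08048000-0804c000 r-xp ..." */
	while (fgets(line, sizeof(line), f) != NULL) {
		if (sscanf(line, "%lx-%lx", &start, &end) != 2)
			continue;
		if (start - prev_end > largest)
			largest = start - prev_end;
		prev_end = end;
	}
	fclose(f);

	/* the hole between the last mapping and the top of user space
	 * counts as well */
	if (prev_end < USER_TOP && USER_TOP - prev_end > largest)
		largest = USER_TOP - prev_end;

	fprintf(stderr, "largest free virtual gap: %lu MB\n",
		largest / (1024 * 1024));
	return 0;
}

Christian's sleep(60) already leaves a window to run
cat /proc/<pid>/maps by hand; pasting a scan like this just before the
big malloc would show directly whether a 1000MB hole is still
available.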

Gr{oetje,eeting}s,

						Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
							    -- Linus Torvalds



* Linux 2.4: Allocation of >1GB in one chunk
@ 2003-08-11 17:49 Christian Mautner
  2003-08-12 10:00 ` Geert Uytterhoeven
  0 siblings, 1 reply; 3+ messages in thread
From: Christian Mautner @ 2003-08-11 17:49 UTC (permalink / raw)
  To: linux-kernel

Hello,

please forgive me for asking this (perhaps newbie?) question here on
l-k, but I'm desperate. This is my problem:

I am running various kinds of EDA software on 32-bit Linux, and they
need substantial amounts of memory. I am running 2.4.21 with
PAGE_OFFSET at 0xc0000000, so I can run processes just over 3GB. The
machine (a dual Xeon) has 4GB memory and 4GB swap.

But there is this one program now that dies because it's out of
memory. No surprise, as this happens frequently with tasks that would
need 4GB or more.

But this one needs less than 3GB. What it does need (I strace'ed
this) is 1.3GB in one contiguous chunk.

I wrote a test program to mimic this:

The attached program allocates argv[1] MB in 1MB chunks and argv[2] MB
in one big chunk. (The original version also touched every page, but
this makes no difference here.)

[chm@trex7:~/C] ./foo 2500 500
Will allocate 2621440000 bytes in 1MB chunks...
Will allocate 524288000 bytes in one chunk...
Succeeded.

[chm@trex7:~/C] ./foo 1500 1000
Will allocate 1572864000 bytes in 1MB chunks...
Will allocate 1048576000 bytes in one chunk...
malloc: Cannot allocate memory
Out of memory.

The first call allocates 3GB and succeeds; the second allocates only
2.5GB but fails!

The thing that comes to my mind is memory fragmentation, but how could
that be, with virtual memory? 

rlimit is also unlimited (and it happens for root as well).

Skimming through the kernel sources shows too many places where memory
allocation could fail; unfortunately I don't know _where_ it fails. The
machine is used for production, so I cannot simply take it down and run
a debugging kernel on it.

I have played with /proc/sys/vm/overcommit_memory, to no avail.

I have watched /proc/slabinfo, and /proc/sys/vm/* while
allocating. Still no idea.

Is there anything I can do to make this work?

Grateful for any help or pointers,
chm.

PS: Will the behaviour be different in 2.6?

----------------------------------------------------------------------
This is my test program (the sleep(60) is there to be able to peek
around in /proc after the memory has been allocated):

#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
  unsigned int i;
  unsigned long n1=0;
  unsigned long n2=0;
  char * p;

  /* argv[1]: MB to allocate in 1MB chunks; argv[2]: MB to allocate in
     one big chunk.  strtoul() and the UL constants avoid signed
     overflow of the byte counts on 32-bit. */
  if ( argc >= 2 )
    {
      n1=strtoul(argv[1], 0, 10)*1024UL*1024UL;
    }

  if ( argc >= 3 )
    {
      n2=strtoul(argv[2], 0, 10)*1024UL*1024UL;
    }

  fprintf(stderr, "Will allocate %lu bytes in 1MB chunks...\n", n1);

  for(i=0; i<n1; i+=1024*1024)
    {
      p=(char*)malloc(1024*1024);
      if ( p == 0 )
        {
          perror("malloc");
          fprintf(stderr, "Out of memory (%d).\n", i);
          sleep(60);
          exit(1);
        }
    }      

  fprintf(stderr, "Will allocate %lu bytes in one chunk...\n", n2);

  p=(char*)malloc(n2);
    
  if ( p == 0 )
    {
        perror("malloc");
        fprintf(stderr, "Out of memory.\n");
        sleep(60);
        exit(1);
    }
  
  fprintf(stderr, "Succeeded.\n");

  sleep(60);
        
  return 0;
} 
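
A possible companion to the test program, following Geert's "print all
allocated virtual addresses" suggestion (an illustration only, not part
of the original post): allocate 1MB chunks until malloc fails and log
where each one lands, so the layout of the user address space can be
read off directly.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	unsigned long mb;
	char *p;

	for (mb = 0; ; mb++) {
		p = malloc(1024 * 1024);
		if (p == NULL)
			break;
		/* log every 100th chunk to keep the output readable */
		if (mb % 100 == 0)
			fprintf(stderr, "chunk %4lu at %p\n", mb, (void *)p);
	}
	fprintf(stderr, "malloc failed after %lu MB\n", mb);
	return 0;
}

Jumps in the printed addresses mark regions that are already occupied
(program image, libraries, brk heap, stack), which is exactly the kind
of fragmentation Geert describes.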


-- 
christian mautner -- chm bei istop punkt com -- ottawa, canada

