* question about memsw of memory cgroup-subsystem
@ 2012-04-13 10:00 gaoqiang
  2012-04-13 14:49   ` Michal Hocko
  0 siblings, 1 reply; 11+ messages in thread
From: gaoqiang @ 2012-04-13 10:00 UTC (permalink / raw)
  To: cgroups



I put a single process into a cgroup and set memory.limit_in_bytes to
100M and memory.memsw.limit_in_bytes to 1G.

However, the process was oom-killed before mem+swap hit 1G. I tried many
times, and it was killed at a random point where memory+swap exceeded
100M but was less than 1G. What is the matter?

-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: question about memsw of memory cgroup-subsystem
@ 2012-04-13 14:49   ` Michal Hocko
  0 siblings, 0 replies; 11+ messages in thread
From: Michal Hocko @ 2012-04-13 14:49 UTC (permalink / raw)
  To: gaoqiang; +Cc: cgroups, linux-mm

[CC linux-mm]

Hi,

On Fri 13-04-12 18:00:10, gaoqiang wrote:
> 
> 
> I put a single process into a cgroup and set memory.limit_in_bytes
> to 100M and memory.memsw.limit_in_bytes to 1G.
> 
> However, the process was oom-killed before mem+swap hit 1G. I tried
> many times, and it was killed at a random point where memory+swap
> exceeded 100M but was less than 1G. What is the matter?

could you be more specific about your kernel version, workload and could
you provide us with GROUP/memory.stat snapshots taken during your test?

One reason for oom might be that you are hitting the hard limit (you
cannot get over even if memsw limit says more) and you cannot swap out
any pages (e.g. they are mlocked or under writeback).
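
A minimal sketch of one way to collect such snapshots while the test is
running (assuming the group directory is /cgroup/memory/GROUP; adjust the
path to your mount point and group name):

while sleep 1; do
    date
    cat /cgroup/memory/GROUP/memory.stat
done > /tmp/memory.stat.log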

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: question about memsw of memory cgroup-subsystem
@ 2012-04-16  3:43     ` gaoqiang
  0 siblings, 0 replies; 11+ messages in thread
From: gaoqiang @ 2012-04-16  3:43 UTC (permalink / raw)
  To: Michal Hocko; +Cc: cgroups, linux-mm

[-- Attachment #1: Type: text/plain, Size: 1550 bytes --]

On Fri, 13 Apr 2012 22:49:54 +0800, Michal Hocko <mhocko@suse.cz> wrote:

> [CC linux-mm]
>
> Hi,
>
> On Fri 13-04-12 18:00:10, gaoqiang wrote:
>>
>>
>> I put a single process into a cgroup and set memory.limit_in_bytes
>> to 100M and memory.memsw.limit_in_bytes to 1G.
>>
>> However, the process was oom-killed before mem+swap hit 1G. I tried
>> many times, and it was killed at a random point where memory+swap
>> exceeded 100M but was less than 1G. What is the matter?
>
> could you be more specific about your kernel version, workload and could
> you provide us with GROUP/memory.stat snapshots taken during your test?
>
> One reason for oom might be that you are hitting the hard limit (you
> cannot get over even if memsw limit says more) and you cannot swap out
> any pages (e.g. they are mlocked or under writeback).
>

many thanks.


The system is a VMware virtual machine running CentOS 6.2 with kernel
2.6.32-220.7.1.el6.x86_64.

The attachments are memory.stat, the test program, and the /var/log/messages
output of the OOM.

The workload is nearly zero, with several sshd and bash processes running.

I just ran the following commands when testing:

./t
# this program will pause at the getchar() line; in another terminal, run:

cgclear
service cgconfig restart
mkdir /cgroup/memory/test
cd /cgroup/memory/test
echo 100m > memory.limit_in_bytes
echo 1G > memory.memsw.limit_in_bytes
echo 'pid' > tasks

# then continue the t command
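
The t binary is built from the attached test.c with something like:

gcc -o t test.c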


-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/

[-- Attachment #2: memory.stat --]
[-- Type: application/octet-stream, Size: 460 bytes --]

cache 0
rss 52473856
mapped_file 0
pgpgin 39296
pgpgout 38749
swap 0
inactive_anon 52473856
active_anon 0
inactive_file 0
active_file 0
unevictable 0
hierarchical_memory_limit 104857600
hierarchical_memsw_limit 1073741824
total_cache 0
total_rss 52473856
total_mapped_file 0
total_pgpgin 39296
total_pgpgout 38749
total_swap 0
total_inactive_anon 52473856
total_active_anon 0
total_inactive_file 0
total_active_file 0
total_unevictable 0

[-- Attachment #3: oom_message.txt --]
[-- Type: text/plain, Size: 6019 bytes --]

Apr 16 11:34:50 localhost kernel: t invoked oom-killer: gfp_mask=0xd0, order=0, oom_adj=0, oom_score_adj=0
Apr 16 11:34:50 localhost kernel: t cpuset=/ mems_allowed=0
Apr 16 11:34:50 localhost kernel: Pid: 15462, comm: t Not tainted 2.6.32-220.7.1.el6.x86_64 #1
Apr 16 11:34:50 localhost kernel: Call Trace:
Apr 16 11:34:50 localhost kernel: [<ffffffff810c2c61>] ? cpuset_print_task_mems_allowed+0x91/0xb0
Apr 16 11:34:50 localhost kernel: [<ffffffff811139e0>] ? dump_header+0x90/0x1b0
Apr 16 11:34:50 localhost kernel: [<ffffffff811693b5>] ? task_in_mem_cgroup+0x35/0xb0
Apr 16 11:34:50 localhost kernel: [<ffffffff8120d7ac>] ? security_real_capable_noaudit+0x3c/0x70
Apr 16 11:34:50 localhost kernel: [<ffffffff81113e6a>] ? oom_kill_process+0x8a/0x2c0
Apr 16 11:34:50 localhost kernel: [<ffffffff81113d5e>] ? select_bad_process+0x9e/0x120
Apr 16 11:34:50 localhost kernel: [<ffffffff81114602>] ? mem_cgroup_out_of_memory+0x92/0xb0
Apr 16 11:34:50 localhost kernel: [<ffffffff81169357>] ? mem_cgroup_handle_oom+0x147/0x170
Apr 16 11:34:50 localhost kernel: [<ffffffff81090a90>] ? autoremove_wake_function+0x0/0x40
Apr 16 11:34:50 localhost kernel: [<ffffffff8116a61b>] ? __mem_cgroup_try_charge+0x3bb/0x420
Apr 16 11:34:50 localhost kernel: [<ffffffff81123851>] ? __alloc_pages_nodemask+0x111/0x940
Apr 16 11:34:50 localhost kernel: [<ffffffff8116b917>] ? mem_cgroup_charge_common+0x87/0xd0
Apr 16 11:34:50 localhost kernel: [<ffffffff8116bae8>] ? mem_cgroup_newpage_charge+0x48/0x50
Apr 16 11:34:50 localhost kernel: [<ffffffff8113beca>] ? handle_pte_fault+0x79a/0xb50
Apr 16 11:34:50 localhost kernel: [<ffffffff810471c7>] ? pte_alloc_one+0x37/0x50
Apr 16 11:34:50 localhost kernel: [<ffffffff81171ad9>] ? do_huge_pmd_anonymous_page+0xb9/0x370
Apr 16 11:34:50 localhost kernel: [<ffffffff8100bc0e>] ? apic_timer_interrupt+0xe/0x20
Apr 16 11:34:50 localhost kernel: [<ffffffff8113c464>] ? handle_mm_fault+0x1e4/0x2b0
Apr 16 11:34:50 localhost kernel: [<ffffffff81042b79>] ? __do_page_fault+0x139/0x480
Apr 16 11:34:50 localhost kernel: [<ffffffff811424ea>] ? do_mmap_pgoff+0x33a/0x380
Apr 16 11:34:50 localhost kernel: [<ffffffff814f253e>] ? do_page_fault+0x3e/0xa0
Apr 16 11:34:50 localhost kernel: [<ffffffff814ef8f5>] ? page_fault+0x25/0x30
Apr 16 11:34:50 localhost kernel: Task in /test killed as a result of limit of /test
Apr 16 11:34:50 localhost kernel: memory: usage 102400kB, limit 102400kB, failcnt 756
Apr 16 11:34:50 localhost kernel: memory+swap: usage 206240kB, limit 1048576kB, failcnt 0
Apr 16 11:34:50 localhost kernel: Mem-Info:
Apr 16 11:34:50 localhost kernel: Node 0 DMA per-cpu:
Apr 16 11:34:50 localhost kernel: CPU    0: hi:    0, btch:   1 usd:   0
Apr 16 11:34:50 localhost kernel: CPU    1: hi:    0, btch:   1 usd:   0
Apr 16 11:34:50 localhost kernel: CPU    2: hi:    0, btch:   1 usd:   0
Apr 16 11:34:50 localhost kernel: CPU    3: hi:    0, btch:   1 usd:   0
Apr 16 11:34:50 localhost kernel: Node 0 DMA32 per-cpu:
Apr 16 11:34:50 localhost kernel: CPU    0: hi:  186, btch:  31 usd:  88
Apr 16 11:34:50 localhost kernel: CPU    1: hi:  186, btch:  31 usd:   0
Apr 16 11:34:50 localhost kernel: CPU    2: hi:  186, btch:  31 usd:   0
Apr 16 11:34:50 localhost kernel: CPU    3: hi:  186, btch:  31 usd:  53
Apr 16 11:34:50 localhost kernel: active_anon:14198 inactive_anon:66406 isolated_anon:0
Apr 16 11:34:50 localhost kernel: active_file:62480 inactive_file:88538 isolated_file:0
Apr 16 11:34:50 localhost kernel: unevictable:0 dirty:0 writeback:12822 unstable:0
Apr 16 11:34:50 localhost kernel: free:27898 slab_reclaimable:8884 slab_unreclaimable:9427
Apr 16 11:34:50 localhost kernel: mapped:2723 shmem:68 pagetables:1747 bounce:0
Apr 16 11:34:50 localhost kernel: Node 0 DMA free:15704kB min:592kB low:740kB high:888kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15308kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Apr 16 11:34:50 localhost kernel: lowmem_reserve[]: 0 1120 1120 1120
Apr 16 11:34:50 localhost kernel: Node 0 DMA32 free:95888kB min:44460kB low:55572kB high:66688kB active_anon:56792kB inactive_anon:265624kB active_file:249920kB inactive_file:354152kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:1147232kB mlocked:0kB dirty:0kB writeback:51288kB mapped:10892kB shmem:272kB slab_reclaimable:35536kB slab_unreclaimable:37708kB kernel_stack:2216kB pagetables:6988kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Apr 16 11:34:50 localhost kernel: lowmem_reserve[]: 0 0 0 0
Apr 16 11:34:50 localhost kernel: Node 0 DMA: 2*4kB 4*8kB 3*16kB 2*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15704kB
Apr 16 11:34:50 localhost kernel: Node 0 DMA32: 124*4kB 882*8kB 579*16kB 337*32kB 171*64kB 42*128kB 3*256kB 32*512kB 32*1024kB 1*2048kB 0*4096kB = 95888kB
Apr 16 11:34:50 localhost kernel: 211205 total pagecache pages
Apr 16 11:34:50 localhost kernel: 60108 pages in swap cache
Apr 16 11:34:50 localhost kernel: Swap cache stats: add 1240384, delete 1180276, find 400/507
Apr 16 11:34:50 localhost kernel: Free swap  = 1720104kB
Apr 16 11:34:50 localhost kernel: Total swap = 2064376kB
Apr 16 11:34:50 localhost kernel: 294896 pages RAM
Apr 16 11:34:50 localhost kernel: 7632 pages reserved
Apr 16 11:34:50 localhost kernel: 100154 pages shared
Apr 16 11:34:50 localhost kernel: 171738 pages non-shared
Apr 16 11:34:50 localhost kernel: [ pid ]   uid  tgid total_vm      rss cpu oom_adj oom_score_adj name
Apr 16 11:34:50 localhost kernel: [15462]   500 15462    58346    12903   3       0             0 t
Apr 16 11:34:50 localhost kernel: Memory cgroup out of memory: Kill process 15462 (t) score 1000 or sacrifice child
Apr 16 11:34:50 localhost kernel: Killed process 15462, UID 500, (t) total-vm:233384kB, anon-rss:51228kB, file-rss:384kB

[-- Attachment #4: test.c --]
[-- Type: application/octet-stream, Size: 361 bytes --]

#include <stdio.h>
#include <stdlib.h>	/* malloc() */
#include <string.h>	/* memset() */
#include <unistd.h>	/* getpid() */

#define BUF_LEN (1024*1024*32)	/* allocate 32MB per iteration */

int main(void)
{
	/* print the pid so it can be echoed into the cgroup's tasks file */
	printf("pid= %d\n", getpid());
	/* wait here until the process has been moved into the cgroup */
	getchar();

	long cnt = 0;
	while (1)
	{
		char *p = malloc(BUF_LEN);
		if (p == NULL)
		{
			printf("p=NULL\n");
			return 0;
		}
		/* touch every page so the memory is actually charged */
		memset(p, 0, BUF_LEN);
		cnt += BUF_LEN;
		printf("usage: %ldk\n", cnt / 1024);
		//sleep(1);
	}
	return 0;
}

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: question about memsw of memory cgroup-subsystem
@ 2012-04-16 14:26       ` Michal Hocko
  0 siblings, 0 replies; 11+ messages in thread
From: Michal Hocko @ 2012-04-16 14:26 UTC (permalink / raw)
  To: gaoqiang; +Cc: cgroups, linux-mm

On Mon 16-04-12 11:43:56, gaoqiang wrote:
> On Fri, 13 Apr 2012 22:49:54 +0800, Michal Hocko <mhocko@suse.cz> wrote:
> 
> >[CC linux-mm]
> >
> >Hi,
> >
> >On Fri 13-04-12 18:00:10, gaoqiang wrote:
> >>
> >>
> >>I put a single process into a cgroup and set memory.limit_in_bytes
> >>to 100M and memory.memsw.limit_in_bytes to 1G.
> >>
> >>However, the process was oom-killed before mem+swap hit 1G. I tried
> >>many times, and it was killed at a random point where memory+swap
> >>exceeded 100M but was less than 1G. What is the matter?
> >
> >could you be more specific about your kernel version, workload and could
> >you provide us with GROUP/memory.stat snapshots taken during your test?
> >
> >One reason for oom might be that you are hitting the hard limit (you
> >cannot get over even if memsw limit says more) and you cannot swap out
> >any pages (e.g. they are mlocked or under writeback).
> >
> 
> many thanks.
> 
> 
> The system is a VMware virtual machine running CentOS 6.2 with kernel
> 2.6.32-220.7.1.el6.x86_64.

Are you able to reproduce with the vanilla (same version) or a newer
kernel?

> the attachments are memory.stat, 

When did you take this one? Before, during, or after the test?

> the test program and the /var/log/message of the oom.
> 
> The workload is nearly zero, with several sshd and bash processes running.
> 
> I just did the following command when testing:
> 
> ./t
> # this program will pause at the "getchar()" line and in another
> terminal,run :
> 
> cgclear
> service cgconfig restart
> mkdir /cgroup/memory/test
> cd /cgroup/memory/test
> echo 100m > memory.limit_in_bytes
> echo 1G > memory.memsw.limit_in_bytes
> echo 'pid' > tasks
> 
> # then continue the t command
> 
> 
> Apr 16 11:34:50 localhost kernel: t invoked oom-killer: gfp_mask=0xd0, order=0, oom_adj=0, oom_score_adj=0
> Apr 16 11:34:50 localhost kernel: t cpuset=/ mems_allowed=0
> Apr 16 11:34:50 localhost kernel: Pid: 15462, comm: t Not tainted 2.6.32-220.7.1.el6.x86_64 #1
> Apr 16 11:34:50 localhost kernel: Call Trace:
> Apr 16 11:34:50 localhost kernel: [<ffffffff810c2c61>] ? cpuset_print_task_mems_allowed+0x91/0xb0
> Apr 16 11:34:50 localhost kernel: [<ffffffff811139e0>] ? dump_header+0x90/0x1b0
> Apr 16 11:34:50 localhost kernel: [<ffffffff811693b5>] ? task_in_mem_cgroup+0x35/0xb0
> Apr 16 11:34:50 localhost kernel: [<ffffffff8120d7ac>] ? security_real_capable_noaudit+0x3c/0x70
> Apr 16 11:34:50 localhost kernel: [<ffffffff81113e6a>] ? oom_kill_process+0x8a/0x2c0
> Apr 16 11:34:50 localhost kernel: [<ffffffff81113d5e>] ? select_bad_process+0x9e/0x120
> Apr 16 11:34:50 localhost kernel: [<ffffffff81114602>] ? mem_cgroup_out_of_memory+0x92/0xb0
> Apr 16 11:34:50 localhost kernel: [<ffffffff81169357>] ? mem_cgroup_handle_oom+0x147/0x170
> Apr 16 11:34:50 localhost kernel: [<ffffffff81090a90>] ? autoremove_wake_function+0x0/0x40
> Apr 16 11:34:50 localhost kernel: [<ffffffff8116a61b>] ? __mem_cgroup_try_charge+0x3bb/0x420
> Apr 16 11:34:50 localhost kernel: [<ffffffff81123851>] ? __alloc_pages_nodemask+0x111/0x940
> Apr 16 11:34:50 localhost kernel: [<ffffffff8116b917>] ? mem_cgroup_charge_common+0x87/0xd0
> Apr 16 11:34:50 localhost kernel: [<ffffffff8116bae8>] ? mem_cgroup_newpage_charge+0x48/0x50
> Apr 16 11:34:50 localhost kernel: [<ffffffff8113beca>] ? handle_pte_fault+0x79a/0xb50
> Apr 16 11:34:50 localhost kernel: [<ffffffff810471c7>] ? pte_alloc_one+0x37/0x50
> Apr 16 11:34:50 localhost kernel: [<ffffffff81171ad9>] ? do_huge_pmd_anonymous_page+0xb9/0x370
> Apr 16 11:34:50 localhost kernel: [<ffffffff8100bc0e>] ? apic_timer_interrupt+0xe/0x20
> Apr 16 11:34:50 localhost kernel: [<ffffffff8113c464>] ? handle_mm_fault+0x1e4/0x2b0
> Apr 16 11:34:50 localhost kernel: [<ffffffff81042b79>] ? __do_page_fault+0x139/0x480
> Apr 16 11:34:50 localhost kernel: [<ffffffff811424ea>] ? do_mmap_pgoff+0x33a/0x380
> Apr 16 11:34:50 localhost kernel: [<ffffffff814f253e>] ? do_page_fault+0x3e/0xa0
> Apr 16 11:34:50 localhost kernel: [<ffffffff814ef8f5>] ? page_fault+0x25/0x30
> Apr 16 11:34:50 localhost kernel: Task in /test killed as a result of limit of /test
> Apr 16 11:34:50 localhost kernel: memory: usage 102400kB, limit 102400kB, failcnt 756
> Apr 16 11:34:50 localhost kernel: memory+swap: usage 206240kB, limit 1048576kB, failcnt 0
> Apr 16 11:34:50 localhost kernel: Mem-Info:
[...]
> Apr 16 11:34:50 localhost kernel: active_anon:14198 inactive_anon:66406 isolated_anon:0
> Apr 16 11:34:50 localhost kernel: active_file:62480 inactive_file:88538 isolated_file:0
> Apr 16 11:34:50 localhost kernel: unevictable:0 dirty:0 writeback:12822 unstable:0
> Apr 16 11:34:50 localhost kernel: free:27898 slab_reclaimable:8884 slab_unreclaimable:9427
> Apr 16 11:34:50 localhost kernel: mapped:2723 shmem:68 pagetables:1747 bounce:0

There still seems to be a lot of anon memory that could be reclaimed...
[...]
> Apr 16 11:34:50 localhost kernel: 211205 total pagecache pages
> Apr 16 11:34:50 localhost kernel: 60108 pages in swap cache
> Apr 16 11:34:50 localhost kernel: Swap cache stats: add 1240384, delete 1180276, find 400/507
> Apr 16 11:34:50 localhost kernel: Free swap  = 1720104kB

And a lot of swap space to put that memory into. I do not see any
reason why we should fail to swap out some memory and so get back under
the hard limit. Btw. the oom would come sooner or later with your test
case anyway.
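
A quick way to check whether the group is swapping at all (a sketch that
assumes the /cgroup/memory/test path from your setup and that swap accounting
is enabled) is to compare the two usage counters and the swap field of
memory.stat while the test runs:

cat /cgroup/memory/test/memory.usage_in_bytes
cat /cgroup/memory/test/memory.memsw.usage_in_bytes
grep -E '^(swap|unevictable)' /cgroup/memory/test/memory.stat
grep -i '^Swap' /proc/meminfo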

Anyway there were quite "some" fixes since 2.6.32...

> Apr 16 11:34:50 localhost kernel: Total swap = 2064376kB
> Apr 16 11:34:50 localhost kernel: 294896 pages RAM
> Apr 16 11:34:50 localhost kernel: 7632 pages reserved
> Apr 16 11:34:50 localhost kernel: 100154 pages shared
> Apr 16 11:34:50 localhost kernel: 171738 pages non-shared
> Apr 16 11:34:50 localhost kernel: [ pid ]   uid  tgid total_vm      rss cpu oom_adj oom_score_adj name
> Apr 16 11:34:50 localhost kernel: [15462]   500 15462    58346    12903   3       0             0 t
> Apr 16 11:34:50 localhost kernel: Memory cgroup out of memory: Kill process 15462 (t) score 1000 or sacrifice child
> Apr 16 11:34:50 localhost kernel: Killed process 15462, UID 500, (t) total-vm:233384kB, anon-rss:51228kB, file-rss:384kB
-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: question about memsw of memory cgroup-subsystem
@ 2012-04-17  3:25       ` Sha Zhengju
  0 siblings, 0 replies; 11+ messages in thread
From: Sha Zhengju @ 2012-04-17  3:25 UTC (permalink / raw)
  To: gaoqiang; +Cc: Michal Hocko, cgroups, linux-mm

[-- Attachment #1: Type: text/plain; charset=x-gbk; format=flowed, Size: 2190 bytes --]

On 04/16/2012 11:43 AM, gaoqiang wrote:
> On Fri, 13 Apr 2012 22:49:54 +0800, Michal Hocko <mhocko@suse.cz> wrote:
>
>> [CC linux-mm]
>>
>> Hi,
>>
>> On Fri 13-04-12 18:00:10, gaoqiang wrote:
>>>
>>>
>>> I put a single process into a cgroup and set memory.limit_in_bytes
>>> to 100M and memory.memsw.limit_in_bytes to 1G.
>>>
>>> However, the process was oom-killed before mem+swap hit 1G. I tried
>>> many times, and it was killed at a random point where memory+swap
>>> exceeded 100M but was less than 1G. What is the matter?
>>
>> could you be more specific about your kernel version, workload and could
>> you provide us with GROUP/memory.stat snapshots taken during your test?
>>
>> One reason for oom might be that you are hitting the hard limit (you
>> cannot get over even if memsw limit says more) and you cannot swap out
>> any pages (e.g. they are mlocked or under writeback).
>>
>
> many thanks.
>
>
> The system is a VMware virtual machine running CentOS 6.2 with kernel
> 2.6.32-220.7.1.el6.x86_64.
>
> the attachments are memory.stat, the test program and the 
> /var/log/message of the oom.
>
> The workload is nearly zero, with several sshd and bash processes running.
>
> I just did the following command when testing:
>
> ./t
> # this program will pause at the "getchar()" line and in another 
> terminal,run :
>
> cgclear
> service cgconfig restart
> mkdir /cgroup/memory/test
> cd /cgroup/memory/test
> echo 100m > memory.limit_in_bytes
> echo 1G > memory.memsw.limit_in_bytes
> echo 'pid' > tasks
>
> # then continue the t command
>
>
Hi,

I ran your test under RHEL 6.1 with 2.6.32-220.7.1.el6.x86_64 (an internal
version, but with no changes in mm/memcg) on a real server, and the process
was killed only once memsw reached 1G. Does your VMware virtual machine have
enough swap space? I have no idea whether the different behavior comes from
the physical/virtual environment.


Thanks,
Sha



^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: question about memsw of memory cgroup-subsystem
@ 2012-04-17  8:18         ` gaoqiang
  0 siblings, 0 replies; 11+ messages in thread
From: gaoqiang @ 2012-04-17  8:18 UTC (permalink / raw)
  To: Sha Zhengju; +Cc: Michal Hocko, cgroups, linux-mm

[-- Attachment #1: Type: text/plain, Size: 2507 bytes --]

On Tue, 17 Apr 2012 11:25:26 +0800, Sha Zhengju <handai.szj@gmail.com> wrote:

The VMware machine has about 2G of swap space, which should be quite enough.

A few days ago I could reproduce it on a physical machine, but not any more.
The /var/log/messages log was still there (see the attachment), so I think
it was not a mistake.

I tried it on my laptop with the same system, and it is easy to reproduce.


> On 04/16/2012 11:43 AM, gaoqiang wrote:
>> On Fri, 13 Apr 2012 22:49:54 +0800, Michal Hocko <mhocko@suse.cz> wrote:
>>
>>> [CC linux-mm]
>>>
>>> Hi,
>>>
>>> On Fri 13-04-12 18:00:10, gaoqiang wrote:
>>>>
>>>>
>>>> I put a single process into a cgroup and set memory.limit_in_bytes
>>>> to 100M and memory.memsw.limit_in_bytes to 1G.
>>>>
>>>> However, the process was oom-killed before mem+swap hit 1G. I tried
>>>> many times, and it was killed at a random point where memory+swap
>>>> exceeded 100M but was less than 1G. What is the matter?
>>>
>>> could you be more specific about your kernel version, workload and  
>>> could
>>> you provide us with GROUP/memory.stat snapshots taken during your test?
>>>
>>> One reason for oom might be that you are hitting the hard limit (you
>>> cannot get over even if memsw limit says more) and you cannot swap out
>>> any pages (e.g. they are mlocked or under writeback).
>>>
>>
>> many thanks.
>>
>>
>> The system is a VMware virtual machine running CentOS 6.2 with kernel
>> 2.6.32-220.7.1.el6.x86_64.
>>
>> the attachments are memory.stat, the test program and the  
>> /var/log/message of the oom.
>>
>> The workload is nearly zero, with several sshd and bash processes running.
>>
>> I just did the following command when testing:
>>
>> ./t
>> # this program will pause at the "getchar()" line and in another  
>> terminal,run :
>>
>> cgclear
>> service cgconfig restart
>> mkdir /cgroup/memory/test
>> cd /cgroup/memory/test
>> echo 100m > memory.limit_in_bytes
>> echo 1G > memory.memsw.limit_in_bytes
>> echo 'pid' > tasks
>>
>> # then continue the t command
>>
>>
> Hi,
>
> I run your test under RHEL6.1 with 2.6.32-220.7.1.el6.x86_64 (an  
> internal version but
> no changes in mm/memcg) in a real server and the process is killed with  
> memsw reaching
> 1G. Does your vmware virtual machine have enough swap space?.. I've no  
> idea whether
> the different behavior come from the physical/virtual environment.
>
>
> Thanks,
> Sha
>
>


-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/

[-- Attachment #2: messages --]
[-- Type: application/octet-stream, Size: 10753 bytes --]

Apr 12 19:43:25 c2 kernel: test invoked oom-killer: gfp_mask=0xd0, order=0, oom_adj=0, oom_score_adj=0
Apr 12 19:43:25 c2 kernel: test cpuset=/ mems_allowed=0-1
Apr 12 19:43:25 c2 kernel: Pid: 9867, comm: test Not tainted 2.6.32-220.7.1.el6.x86_64 #1
Apr 12 19:43:25 c2 kernel: Call Trace:
Apr 12 19:43:25 c2 kernel: [<ffffffff810c2c61>] ? cpuset_print_task_mems_allowed+0x91/0xb0
Apr 12 19:43:25 c2 kernel: [<ffffffff811139e0>] ? dump_header+0x90/0x1b0
Apr 12 19:43:25 c2 kernel: [<ffffffff8120d7ac>] ? security_real_capable_noaudit+0x3c/0x70
Apr 12 19:43:25 c2 kernel: [<ffffffff81113e6a>] ? oom_kill_process+0x8a/0x2c0
Apr 12 19:43:25 c2 kernel: [<ffffffff81113da1>] ? select_bad_process+0xe1/0x120
Apr 12 19:43:25 c2 kernel: [<ffffffff81114602>] ? mem_cgroup_out_of_memory+0x92/0xb0
Apr 12 19:43:25 c2 kernel: [<ffffffff81169357>] ? mem_cgroup_handle_oom+0x147/0x170
Apr 12 19:43:25 c2 kernel: [<ffffffff81090a90>] ? autoremove_wake_function+0x0/0x40
Apr 12 19:43:25 c2 kernel: [<ffffffff8116a61b>] ? __mem_cgroup_try_charge+0x3bb/0x420
Apr 12 19:43:25 c2 kernel: [<ffffffff81123851>] ? __alloc_pages_nodemask+0x111/0x940
Apr 12 19:43:25 c2 kernel: [<ffffffff8116b917>] ? mem_cgroup_charge_common+0x87/0xd0
Apr 12 19:43:25 c2 kernel: [<ffffffff8116bae8>] ? mem_cgroup_newpage_charge+0x48/0x50
Apr 12 19:43:25 c2 kernel: [<ffffffff8113beca>] ? handle_pte_fault+0x79a/0xb50
Apr 12 19:43:25 c2 kernel: [<ffffffff810471c7>] ? pte_alloc_one+0x37/0x50
Apr 12 19:43:25 c2 kernel: [<ffffffff81171ad9>] ? do_huge_pmd_anonymous_page+0xb9/0x370
Apr 12 19:43:25 c2 kernel: [<ffffffff8113c464>] ? handle_mm_fault+0x1e4/0x2b0
Apr 12 19:43:25 c2 kernel: [<ffffffff81042b79>] ? __do_page_fault+0x139/0x480
Apr 12 19:43:25 c2 kernel: [<ffffffff8100988e>] ? __switch_to+0x26e/0x320
Apr 12 19:43:25 c2 kernel: [<ffffffff814ecb0e>] ? thread_return+0x4e/0x760
Apr 12 19:43:25 c2 kernel: [<ffffffff814f253e>] ? do_page_fault+0x3e/0xa0
Apr 12 19:43:25 c2 kernel: [<ffffffff814ef8f5>] ? page_fault+0x25/0x30
Apr 12 19:43:25 c2 kernel: Task in /9866 killed as a result of limit of /9866
Apr 12 19:43:25 c2 kernel: memory: usage 102400kB, limit 102400kB, failcnt 756
Apr 12 19:43:25 c2 kernel: memory+swap: usage 199656kB, limit 1024000kB, failcnt 0
Apr 12 19:43:25 c2 kernel: Mem-Info:
Apr 12 19:43:25 c2 kernel: Node 0 DMA per-cpu:
Apr 12 19:43:25 c2 kernel: CPU    0: hi:    0, btch:   1 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    1: hi:    0, btch:   1 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    2: hi:    0, btch:   1 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    3: hi:    0, btch:   1 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    4: hi:    0, btch:   1 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    5: hi:    0, btch:   1 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    6: hi:    0, btch:   1 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    7: hi:    0, btch:   1 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    8: hi:    0, btch:   1 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    9: hi:    0, btch:   1 usd:   0
Apr 12 19:43:25 c2 kernel: CPU   10: hi:    0, btch:   1 usd:   0
Apr 12 19:43:25 c2 kernel: CPU   11: hi:    0, btch:   1 usd:   0
Apr 12 19:43:25 c2 kernel: CPU   12: hi:    0, btch:   1 usd:   0
Apr 12 19:43:25 c2 kernel: CPU   13: hi:    0, btch:   1 usd:   0
Apr 12 19:43:25 c2 kernel: CPU   14: hi:    0, btch:   1 usd:   0
Apr 12 19:43:25 c2 kernel: CPU   15: hi:    0, btch:   1 usd:   0
Apr 12 19:43:25 c2 kernel: Node 0 DMA32 per-cpu:
Apr 12 19:43:25 c2 kernel: CPU    0: hi:  186, btch:  31 usd:  74
Apr 12 19:43:25 c2 kernel: CPU    1: hi:  186, btch:  31 usd:  34
Apr 12 19:43:25 c2 kernel: CPU    2: hi:  186, btch:  31 usd: 177
Apr 12 19:43:25 c2 kernel: CPU    3: hi:  186, btch:  31 usd: 164
Apr 12 19:43:25 c2 kernel: CPU    4: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    5: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    6: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    7: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    8: hi:  186, btch:  31 usd:  18
Apr 12 19:43:25 c2 kernel: CPU    9: hi:  186, btch:  31 usd:  33
Apr 12 19:43:25 c2 kernel: CPU   10: hi:  186, btch:  31 usd:  32
Apr 12 19:43:25 c2 kernel: CPU   11: hi:  186, btch:  31 usd: 167
Apr 12 19:43:25 c2 kernel: CPU   12: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU   13: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU   14: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU   15: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: Node 0 Normal per-cpu:
Apr 12 19:43:25 c2 kernel: CPU    0: hi:  186, btch:  31 usd: 151
Apr 12 19:43:25 c2 kernel: CPU    1: hi:  186, btch:  31 usd: 163
Apr 12 19:43:25 c2 kernel: CPU    2: hi:  186, btch:  31 usd: 181
Apr 12 19:43:25 c2 kernel: CPU    3: hi:  186, btch:  31 usd: 180
Apr 12 19:43:25 c2 kernel: CPU    4: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    5: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    6: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    7: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    8: hi:  186, btch:  31 usd: 156
Apr 12 19:43:25 c2 kernel: CPU    9: hi:  186, btch:  31 usd: 114
Apr 12 19:43:25 c2 kernel: CPU   10: hi:  186, btch:  31 usd: 171
Apr 12 19:43:25 c2 kernel: CPU   11: hi:  186, btch:  31 usd: 160
Apr 12 19:43:25 c2 kernel: CPU   12: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU   13: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU   14: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU   15: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: Node 1 Normal per-cpu:
Apr 12 19:43:25 c2 kernel: CPU    0: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    1: hi:  186, btch:  31 usd:  11
Apr 12 19:43:25 c2 kernel: CPU    2: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    3: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    4: hi:  186, btch:  31 usd:  53
Apr 12 19:43:25 c2 kernel: CPU    5: hi:  186, btch:  31 usd: 153
Apr 12 19:43:25 c2 kernel: CPU    6: hi:  186, btch:  31 usd:  40
Apr 12 19:43:25 c2 kernel: CPU    7: hi:  186, btch:  31 usd: 128
Apr 12 19:43:25 c2 kernel: CPU    8: hi:  186, btch:  31 usd: 134
Apr 12 19:43:25 c2 kernel: CPU    9: hi:  186, btch:  31 usd:  40
Apr 12 19:43:25 c2 kernel: CPU   10: hi:  186, btch:  31 usd:   5
Apr 12 19:43:25 c2 kernel: CPU   11: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU   12: hi:  186, btch:  31 usd: 162
Apr 12 19:43:25 c2 kernel: CPU   13: hi:  186, btch:  31 usd:  26
Apr 12 19:43:25 c2 kernel: CPU   14: hi:  186, btch:  31 usd:  60
Apr 12 19:43:25 c2 kernel: CPU   15: hi:  186, btch:  31 usd: 165
Apr 12 19:43:25 c2 kernel: active_anon:23933 inactive_anon:12772 isolated_anon:0
Apr 12 19:43:25 c2 kernel: active_file:2735681 inactive_file:2912499 isolated_file:0
Apr 12 19:43:25 c2 kernel: unevictable:0 dirty:5 writeback:12826 unstable:0
Apr 12 19:43:25 c2 kernel: free:1850423 slab_reclaimable:563455 slab_unreclaimable:22302
Apr 12 19:43:25 c2 kernel: mapped:3156 shmem:58 pagetables:2532 bounce:0
Apr 12 19:43:25 c2 kernel: Node 0 DMA free:15660kB min:40kB low:48kB high:60kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15248kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Apr 12 19:43:25 c2 kernel: lowmem_reserve[]: 0 2991 16121 16121
Apr 12 19:43:25 c2 kernel: Node 0 DMA32 free:1819872kB min:8344kB low:10428kB high:12516kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3063392kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:841860kB slab_unreclaimable:11124kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Apr 12 19:43:25 c2 kernel: lowmem_reserve[]: 0 0 13130 13130
Apr 12 19:43:25 c2 kernel: Node 0 Normal free:45632kB min:36632kB low:45788kB high:54948kB active_anon:19128kB inactive_anon:28kB active_file:6287228kB inactive_file:5978212kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:13445120kB mlocked:0kB dirty:0kB writeback:0kB mapped:9252kB shmem:108kB slab_reclaimable:1065380kB slab_unreclaimable:31404kB kernel_stack:3504kB pagetables:3860kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Apr 12 19:43:25 c2 kernel: lowmem_reserve[]: 0 0 0 0
Apr 12 19:43:25 c2 kernel: Node 1 Normal free:5520528kB min:45088kB low:56360kB high:67632kB active_anon:76604kB inactive_anon:51060kB active_file:4655496kB inactive_file:5671784kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:16547840kB mlocked:0kB dirty:20kB writeback:51304kB mapped:3372kB shmem:124kB slab_reclaimable:346580kB slab_unreclaimable:46680kB kernel_stack:448kB pagetables:6268kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Apr 12 19:43:25 c2 kernel: lowmem_reserve[]: 0 0 0 0
Apr 12 19:43:25 c2 kernel: Node 0 DMA: 3*4kB 0*8kB 0*16kB 1*32kB 2*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15660kB
Apr 12 19:43:25 c2 kernel: Node 0 DMA32: 66*4kB 109*8kB 47*16kB 24*32kB 28*64kB 17*128kB 7*256kB 8*512kB 9*1024kB 4*2048kB 437*4096kB = 1819872kB
Apr 12 19:43:25 c2 kernel: Node 0 Normal: 134*4kB 221*8kB 48*16kB 28*32kB 11*64kB 4*128kB 2*256kB 10*512kB 10*1024kB 2*2048kB 5*4096kB = 45632kB
Apr 12 19:43:25 c2 kernel: Node 1 Normal: 10*4kB 6*8kB 7*16kB 4*32kB 71*64kB 171*128kB 127*256kB 67*512kB 52*1024kB 2*2048kB 1311*4096kB = 5520776kB
Apr 12 19:43:25 c2 kernel: 5661062 total pagecache pages
Apr 12 19:43:25 c2 kernel: 12823 pages in swap cache
Apr 12 19:43:25 c2 kernel: Swap cache stats: add 37243, delete 24420, find 57/66
Apr 12 19:43:25 c2 kernel: Free swap  = 34946028kB
Apr 12 19:43:25 c2 kernel: Total swap = 35094520kB
Apr 12 19:43:25 c2 kernel: 8388607 pages RAM
Apr 12 19:43:25 c2 kernel: 171092 pages reserved
Apr 12 19:43:25 c2 kernel: 5343598 pages shared
Apr 12 19:43:25 c2 kernel: 1039448 pages non-shared
Apr 12 19:43:25 c2 kernel: [ pid ]   uid  tgid total_vm      rss cpu oom_adj oom_score_adj name
Apr 12 19:43:25 c2 kernel: [ 9866]     0  9866    21650      455  13       0             0 cglimit
Apr 12 19:43:25 c2 kernel: [ 9867]     0  9867    58345    12846   4       0             0 test
Apr 12 19:43:25 c2 kernel: Memory cgroup out of memory: Kill process 9867 (test) score 1000 or sacrifice child
Apr 12 19:43:25 c2 kernel: Killed process 9867, UID 0, (test) total-vm:233380kB, anon-rss:51004kB, file-rss:380kB

^ permalink raw reply	[flat|nested] 11+ messages in thread

Apr 12 19:43:25 c2 kernel: CPU    2: hi:  186, btch:  31 usd: 181
Apr 12 19:43:25 c2 kernel: CPU    3: hi:  186, btch:  31 usd: 180
Apr 12 19:43:25 c2 kernel: CPU    4: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    5: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    6: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    7: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    8: hi:  186, btch:  31 usd: 156
Apr 12 19:43:25 c2 kernel: CPU    9: hi:  186, btch:  31 usd: 114
Apr 12 19:43:25 c2 kernel: CPU   10: hi:  186, btch:  31 usd: 171
Apr 12 19:43:25 c2 kernel: CPU   11: hi:  186, btch:  31 usd: 160
Apr 12 19:43:25 c2 kernel: CPU   12: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU   13: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU   14: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU   15: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: Node 1 Normal per-cpu:
Apr 12 19:43:25 c2 kernel: CPU    0: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    1: hi:  186, btch:  31 usd:  11
Apr 12 19:43:25 c2 kernel: CPU    2: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    3: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU    4: hi:  186, btch:  31 usd:  53
Apr 12 19:43:25 c2 kernel: CPU    5: hi:  186, btch:  31 usd: 153
Apr 12 19:43:25 c2 kernel: CPU    6: hi:  186, btch:  31 usd:  40
Apr 12 19:43:25 c2 kernel: CPU    7: hi:  186, btch:  31 usd: 128
Apr 12 19:43:25 c2 kernel: CPU    8: hi:  186, btch:  31 usd: 134
Apr 12 19:43:25 c2 kernel: CPU    9: hi:  186, btch:  31 usd:  40
Apr 12 19:43:25 c2 kernel: CPU   10: hi:  186, btch:  31 usd:   5
Apr 12 19:43:25 c2 kernel: CPU   11: hi:  186, btch:  31 usd:   0
Apr 12 19:43:25 c2 kernel: CPU   12: hi:  186, btch:  31 usd: 162
Apr 12 19:43:25 c2 kernel: CPU   13: hi:  186, btch:  31 usd:  26
Apr 12 19:43:25 c2 kernel: CPU   14: hi:  186, btch:  31 usd:  60
Apr 12 19:43:25 c2 kernel: CPU   15: hi:  186, btch:  31 usd: 165
Apr 12 19:43:25 c2 kernel: active_anon:23933 inactive_anon:12772 isolated_anon:0
Apr 12 19:43:25 c2 kernel: active_file:2735681 inactive_file:2912499 isolated_file:0
Apr 12 19:43:25 c2 kernel: unevictable:0 dirty:5 writeback:12826 unstable:0
Apr 12 19:43:25 c2 kernel: free:1850423 slab_reclaimable:563455 slab_unreclaimable:22302
Apr 12 19:43:25 c2 kernel: mapped:3156 shmem:58 pagetables:2532 bounce:0
Apr 12 19:43:25 c2 kernel: Node 0 DMA free:15660kB min:40kB low:48kB high:60kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15248kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Apr 12 19:43:25 c2 kernel: lowmem_reserve[]: 0 2991 16121 16121
Apr 12 19:43:25 c2 kernel: Node 0 DMA32 free:1819872kB min:8344kB low:10428kB high:12516kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3063392kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:841860kB slab_unreclaimable:11124kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Apr 12 19:43:25 c2 kernel: lowmem_reserve[]: 0 0 13130 13130
Apr 12 19:43:25 c2 kernel: Node 0 Normal free:45632kB min:36632kB low:45788kB high:54948kB active_anon:19128kB inactive_anon:28kB active_file:6287228kB inactive_file:5978212kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:13445120kB mlocked:0kB dirty:0kB writeback:0kB mapped:9252kB shmem:108kB slab_reclaimable:1065380kB slab_unreclaimable:31404kB kernel_stack:3504kB pagetables:3860kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Apr 12 19:43:25 c2 kernel: lowmem_reserve[]: 0 0 0 0
Apr 12 19:43:25 c2 kernel: Node 1 Normal free:5520528kB min:45088kB low:56360kB high:67632kB active_anon:76604kB inactive_anon:51060kB active_file:4655496kB inactive_file:5671784kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:16547840kB mlocked:0kB dirty:20kB writeback:51304kB mapped:3372kB shmem:124kB slab_reclaimable:346580kB slab_unreclaimable:46680kB kernel_stack:448kB pagetables:6268kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Apr 12 19:43:25 c2 kernel: lowmem_reserve[]: 0 0 0 0
Apr 12 19:43:25 c2 kernel: Node 0 DMA: 3*4kB 0*8kB 0*16kB 1*32kB 2*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15660kB
Apr 12 19:43:25 c2 kernel: Node 0 DMA32: 66*4kB 109*8kB 47*16kB 24*32kB 28*64kB 17*128kB 7*256kB 8*512kB 9*1024kB 4*2048kB 437*4096kB = 1819872kB
Apr 12 19:43:25 c2 kernel: Node 0 Normal: 134*4kB 221*8kB 48*16kB 28*32kB 11*64kB 4*128kB 2*256kB 10*512kB 10*1024kB 2*2048kB 5*4096kB = 45632kB
Apr 12 19:43:25 c2 kernel: Node 1 Normal: 10*4kB 6*8kB 7*16kB 4*32kB 71*64kB 171*128kB 127*256kB 67*512kB 52*1024kB 2*2048kB 1311*4096kB = 5520776kB
Apr 12 19:43:25 c2 kernel: 5661062 total pagecache pages
Apr 12 19:43:25 c2 kernel: 12823 pages in swap cache
Apr 12 19:43:25 c2 kernel: Swap cache stats: add 37243, delete 24420, find 57/66
Apr 12 19:43:25 c2 kernel: Free swap  = 34946028kB
Apr 12 19:43:25 c2 kernel: Total swap = 35094520kB
Apr 12 19:43:25 c2 kernel: 8388607 pages RAM
Apr 12 19:43:25 c2 kernel: 171092 pages reserved
Apr 12 19:43:25 c2 kernel: 5343598 pages shared
Apr 12 19:43:25 c2 kernel: 1039448 pages non-shared
Apr 12 19:43:25 c2 kernel: [ pid ]   uid  tgid total_vm      rss cpu oom_adj oom_score_adj name
Apr 12 19:43:25 c2 kernel: [ 9866]     0  9866    21650      455  13       0             0 cglimit
Apr 12 19:43:25 c2 kernel: [ 9867]     0  9867    58345    12846   4       0             0 test
Apr 12 19:43:25 c2 kernel: Memory cgroup out of memory: Kill process 9867 (test) score 1000 or sacrifice child
Apr 12 19:43:25 c2 kernel: Killed process 9867, UID 0, (test) total-vm:233380kB, anon-rss:51004kB, file-rss:380kB

^ permalink raw reply	[flat|nested] 11+ messages in thread

