* ceph fs
@ 2011-05-12 18:29 Fyodor Ustinov
  2011-05-13 15:53 ` Sage Weil
  0 siblings, 1 reply; 7+ messages in thread
From: Fyodor Ustinov @ 2011-05-12 18:29 UTC (permalink / raw)
  To: ceph-devel

Hi!

Like previous, but ceph fs instead of rbd.
(i.e. iozone with 4G file).


[  783.295035] ceph: loaded (mds proto 32)
[  783.300122] libceph: client4125 fsid ff352dfd-078c-e65f-a769-d25abb384d92
[  783.300642] libceph: mon0 77.120.112.193:6789 session established
[  941.278185] libceph: msg_new can't create type 0 front 4096
[  941.278456] libceph: msgpool osd_op alloc failed
[  941.278539] libceph: msg_new can't create type 0 front 512
[  941.278606] libceph: msgpool osd_op_reply alloc failed
[  941.278670] libceph: msg_new can't create type 0 front 4096
[  941.278737] libceph: msgpool osd_op alloc failed
[  941.278808] libceph: msg_new can't create type 0 front 512
[  941.278875] libceph: msgpool osd_op_reply alloc failed
[  941.279011] libceph: msg_new can't create type 0 front 4096
[  941.279079] libceph: msgpool osd_op alloc failed
[  941.279153] libceph: msg_new can't create type 0 front 512
[  941.279220] libceph: msgpool osd_op_reply alloc failed
[  941.279286] libceph: msg_new can't create type 0 front 4096
[  941.279352] libceph: msgpool osd_op alloc failed
[  941.279422] libceph: msg_new can't create type 0 front 512
[  941.279623] libceph: msgpool osd_op_reply alloc failed
[  941.279692] libceph: msg_new can't create type 0 front 4096
[  941.279897] libceph: msgpool osd_op alloc failed
[  941.280042] libceph: msg_new can't create type 0 front 512
[  941.280300] libceph: msgpool osd_op_reply alloc failed
[  941.280418] libceph: msg_new can't create type 0 front 4096
[  941.280534] libceph: msgpool osd_op alloc failed
[  941.280793] libceph: msg_new can't create type 0 front 512
[  941.280958] libceph: msgpool osd_op_reply alloc failed
[  941.281074] libceph: msg_new can't create type 0 front 4096
[  941.281236] libceph: msgpool osd_op alloc failed

WBR,
    Fyodor.


* Re: ceph fs
  2011-05-12 18:29 ceph fs Fyodor Ustinov
@ 2011-05-13 15:53 ` Sage Weil
  0 siblings, 0 replies; 7+ messages in thread
From: Sage Weil @ 2011-05-13 15:53 UTC (permalink / raw)
  To: Fyodor Ustinov; +Cc: ceph-devel

On Thu, 12 May 2011, Fyodor Ustinov wrote:
> Hi!
> 
> Like previous, but ceph fs instead of rbd.
> (i.e. iozone with 4G file).
> 
> [  783.295035] ceph: loaded (mds proto 32)
> [  783.300122] libceph: client4125 fsid ff352dfd-078c-e65f-a769-d25abb384d92
> [  783.300642] libceph: mon0 77.120.112.193:6789 session established
> [  941.278185] libceph: msg_new can't create type 0 front 4096
> [  941.278456] libceph: msgpool osd_op alloc failed
> [  941.278539] libceph: msg_new can't create type 0 front 512
> [  941.278606] libceph: msgpool osd_op_reply alloc failed
> [  941.278670] libceph: msg_new can't create type 0 front 4096
> [  941.278737] libceph: msgpool osd_op alloc failed
> [  941.278808] libceph: msg_new can't create type 0 front 512
> [  941.278875] libceph: msgpool osd_op_reply alloc failed
> [  941.279011] libceph: msg_new can't create type 0 front 4096
> [  941.279079] libceph: msgpool osd_op alloc failed
> [  941.279153] libceph: msg_new can't create type 0 front 512
> [  941.279220] libceph: msgpool osd_op_reply alloc failed
> [  941.279286] libceph: msg_new can't create type 0 front 4096
> [  941.279352] libceph: msgpool osd_op alloc failed
> [  941.279422] libceph: msg_new can't create type 0 front 512
> [  941.279623] libceph: msgpool osd_op_reply alloc failed
> [  941.279692] libceph: msg_new can't create type 0 front 4096
> [  941.279897] libceph: msgpool osd_op alloc failed
> [  941.280042] libceph: msg_new can't create type 0 front 512
> [  941.280300] libceph: msgpool osd_op_reply alloc failed
> [  941.280418] libceph: msg_new can't create type 0 front 4096
> [  941.280534] libceph: msgpool osd_op alloc failed
> [  941.280793] libceph: msg_new can't create type 0 front 512
> [  941.280958] libceph: msgpool osd_op_reply alloc failed
> [  941.281074] libceph: msg_new can't create type 0 front 4096
> [  941.281236] libceph: msgpool osd_op alloc failed

The ceph memory allocations are definitely not bulletproof and low memory 
situations can still cause problems.  I'm not sure that that is what is 
going on here, but you might try adjusting vm.min_free_kbytes and see if 
that has an effect.  e.g.,

	sysctl -w vm.min_free_kbytes=262144

(although I'd check the current value first to make sure you're adjusting 
it up).
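
A minimal sketch of that check-then-raise sequence, assuming a standard Linux sysctl setup (262144 is just the example figure above; persisting it via /etc/sysctl.conf is optional):

	# check the current value before changing anything
	sysctl vm.min_free_kbytes
	# raise the reserve for the running system
	sysctl -w vm.min_free_kbytes=262144
	# optionally keep the setting across reboots
	echo 'vm.min_free_kbytes = 262144' >> /etc/sysctl.conf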

sage


* Re: ceph fs
  2012-06-04  0:42   ` Sage Weil
@ 2012-06-04 11:58     ` Martin Wilderoth
  0 siblings, 0 replies; 7+ messages in thread
From: Martin Wilderoth @ 2012-06-04 11:58 UTC (permalink / raw)
  To: ceph-devel

> Hi Martin, 
>
> On Sat, 2 Jun 2012, Martin Wilderoth wrote: 
>
> > I have some problems with my ceph filesystem. I have a folder that i 
> > cant remove. 
> > 
> > I.E. 
> > root@lintx2:/mnt/backuppc/pc# ls -la toberemoved/ 
> > total 0 
> > drwxr-x--- 1 backuppc backuppc 28804802 May 15 13:29 . 
> > drwxr-x--- 1 backuppc backuppc 29421083732 Jun 1 15:16 .. 
> > 
> > root@lintx2:/mnt/backuppc/pc# rm -rf toberemoved 
> > rm: cannot remove `toberemoved': Directory not empty 
>
> This is a bug in the internal rstats accounting that the rmdir is relying 
> on to verify the directory is empty. Right now, the workaround is to just 
> rename that directory out of the way somewhere. This will be at the top 
> of the list of MDS bugs we'll look at when we shift focus to the fs in the 
> next month or two. 
>
I used the workaround and will wait for the update later.
 
> http://tracker.newdream.net/issues/2494 
>
> > I also have a folder that when I do ls i get the following message 
> > 
> > [ 1828.569091] ceph: ceph_add_cap: couldn't find snap realm 100 
>
> This too. 
>
> In both cases, if you are able to reproduce the bug reliably, that is 
> incredibly useful information that will make it much easier to fix the 
> bug. Please include that info in the tracker, if you can! 
>
> For example, on the second bug, 
>
> http://tracker.newdream.net/issues/2506 
>
> - what snapshots did you create? 
> - what operations were you doing that triggered the bug? 
> - (ideally) how do you reproduce it from a fresh fs or fresh mount?

The error always appears when I do ls in this folder. I didn't take any snapshots that I'm aware of.

I have been running backuppc in this folder. It uses a lot of hard links in the filesystem, and it is this scenario that triggers the error.

I'm not sure how easy it is to reproduce exactly the same error, but running backuppc against a ceph share always creates some errors on my system. I have been trying with a fresh folder.

I will also do some testing with a ceph-fuse mount, as that didn't generate the ls error.
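
For comparison, a ceph-fuse mount of the same tree might look roughly like this (the monitor address is a placeholder; the mount point matches the path used in the report):

	# mount CephFS via the FUSE client instead of the kernel client
	ceph-fuse -m mon-host:6789 /mnt/backuppc
	# unmount when finished
	fusermount -u /mnt/backuppc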

Best regards, Martin
>
> Thanks! Sorry we can't be more help now, but we'll get to this soon... 
>
> sage 


> [ 1828.569105] ------------[ cut here ]------------ 
> [ 1828.569121] WARNING: at /build/buildd-linux-2.6_3.2.17-1~bpo60+1-amd64-CJo7Ex/linux-2.6-3.2.17/debian/build/source_amd64_none/fs/ceph/caps.c:590 ceph_add_cap+0x38e/0x49e [ceph]() 
> [ 1828.569139] Modules linked in: cryptd aes_x86_64 aes_generic cbc ceph libceph crc32c libcrc32c evdev snd_pcm snd_timer snd soundcore snd_page_alloc pcspkr ext3 jbd mbcache xen_netfront xen_blkfront 
> [ 1828.569182] Pid: 18, comm: kworker/0:1 Tainted: G W 3.2.0-0.bpo.2-amd64 #1 
> [ 1828.569193] Call Trace: 
> [ 1828.569207] [<ffffffff810497ec>] ? warn_slowpath_common+0x78/0x8c 
> [ 1828.569221] [<ffffffffa00db647>] ? ceph_add_cap+0x38e/0x49e [ceph] 
> [ 1828.569233] [<ffffffffa00d220a>] ? fill_inode+0x4eb/0x602 [ceph] 
> [ 1828.569244] [<ffffffffa00d331b>] ? ceph_dentry_lru_touch+0x2a/0x68 [ceph] 
> [ 1828.569258] [<ffffffffa00d317d>] ? ceph_readdir_prepopulate+0x2de/0x375 [ceph] 
> [ 1828.569271] [<ffffffffa00e2d3f>] ? dispatch+0xa35/0xef2 [ceph] 
> [ 1828.569286] [<ffffffffa00ae841>] ? ceph_tcp_recvmsg+0x43/0x4f [libceph] 
> [ 1828.569297] [<ffffffffa00b0821>] ? con_work+0x1070/0x13b8 [libceph] 
> [ 1828.569308] [<ffffffff81044580>] ? update_curr+0xbc/0x160 
> [ 1828.569319] [<ffffffffa00af7b1>] ? try_write+0xbe1/0xbe1 [libceph] 
> [ 1828.569332] [<ffffffff8105f8bb>] ? process_one_work+0x1cc/0x2ea 
> [ 1828.569342] [<ffffffff8105fb06>] ? worker_thread+0x12d/0x247 
> [ 1828.569353] [<ffffffff8105f9d9>] ? process_one_work+0x2ea/0x2ea 
> [ 1828.569361] [<ffffffff8105f9d9>] ? process_one_work+0x2ea/0x2ea 
> [ 1828.569372] [<ffffffff81063311>] ? kthread+0x7a/0x82 
> [ 1828.569384] [<ffffffff8136bb34>] ? kernel_thread_helper+0x4/0x10 
> [ 1828.569395] [<ffffffff81369bf3>] ? int_ret_from_sys_call+0x7/0x1b 
> [ 1828.569406] [<ffffffff813646fc>] ? retint_restore_args+0x5/0x6 
> [ 1828.569417] [<ffffffff8136bb30>] ? gs_change+0x13/0x13 
> [ 1828.569423] ---[ end trace 98770cddb79a6a55 ]--- 
> [ 1828.569433] ceph: ceph_add_cap: couldn't find snap realm 100 
> [ 1828.569442] ------------[ cut here ]------------ 
> [ 1828.569452] WARNING: at /build/buildd-linux-2.6_3.2.17-1~bpo60+1-amd64-CJo7Ex/linux-2.6-3.2.17/debian/build/source_amd64_none/fs/ceph/caps.c:590 ceph_add_cap+0x38e/0x49e [ceph]() 
> [ 1828.569467] Modules linked in: cryptd aes_x86_64 aes_generic cbc ceph libceph crc32c libcrc32c evdev snd_pcm snd_timer snd soundcore snd_page_alloc pcspkr ext3 jbd mbcache xen_netfront xen_blkfront 
> [ 1828.569500] Pid: 18, comm: kworker/0:1 Tainted: G W 3.2.0-0.bpo.2-amd64 #1 
> [ 1828.569508] Call Trace: 
> [ 1828.569513] [<ffffffff810497ec>] ? warn_slowpath_common+0x78/0x8c 
> [ 1828.569523] [<ffffffffa00db647>] ? ceph_add_cap+0x38e/0x49e [ceph] 
> [ 1828.569533] [<ffffffffa00d220a>] ? fill_inode+0x4eb/0x602 [ceph] 
> [ 1828.569543] [<ffffffffa00d331b>] ? ceph_dentry_lru_touch+0x2a/0x68 [ceph] 
> [ 1828.569552] [<ffffffffa00d317d>] ? ceph_readdir_prepopulate+0x2de/0x375 [ceph] 
> [ 1828.569563] [<ffffffffa00e2d3f>] ? dispatch+0xa35/0xef2 [ceph] 
> [ 1828.569573] [<ffffffffa00ae841>] ? ceph_tcp_recvmsg+0x43/0x4f [libceph] 
> [ 1828.569583] [<ffffffffa00b0821>] ? con_work+0x1070/0x13b8 [libceph] 
> [ 1828.569590] [<ffffffff81044580>] ? update_curr+0xbc/0x160 
> [ 1828.569599] [<ffffffffa00af7b1>] ? try_write+0xbe1/0xbe1 [libceph] 
> [ 1828.569607] [<ffffffff8105f8bb>] ? process_one_work+0x1cc/0x2ea 
> [ 1828.569615] [<ffffffff8105fb06>] ? worker_thread+0x12d/0x247 
> [ 1828.569622] [<ffffffff8105f9d9>] ? process_one_work+0x2ea/0x2ea 
> [ 1828.569630] [<ffffffff8105f9d9>] ? process_one_work+0x2ea/0x2ea 
> [ 1828.569637] [<ffffffff81063311>] ? kthread+0x7a/0x82 
> [ 1828.569644] [<ffffffff8136bb34>] ? kernel_thread_helper+0x4/0x10 
> [ 1828.569652] [<ffffffff81369bf3>] ? int_ret_from_sys_call+0x7/0x1b 
> [ 1828.569660] [<ffffffff813646fc>] ? retint_restore_args+0x5/0x6 
> [ 1828.569667] [<ffffffff8136bb30>] ? gs_change+0x13/0x13 
> [ 1828.569673] ---[ end trace 98770cddb79a6a56 ]--- 
> Then i see some folders 
> 
> Is there a way to remove this error directories or a reason / bug why I get this messages. 
> 
> The folder that I try to remove had a similar problem as the one above, I manages to remove 
> all visible files. 
> 
> /Regards Martin 


* Re: ceph fs
  2012-06-02  6:03 ` Martin Wilderoth
@ 2012-06-04  0:42   ` Sage Weil
  2012-06-04 11:58     ` Martin Wilderoth
  0 siblings, 1 reply; 7+ messages in thread
From: Sage Weil @ 2012-06-04  0:42 UTC (permalink / raw)
  To: Martin Wilderoth; +Cc: ceph-devel

Hi Martin,

On Sat, 2 Jun 2012, Martin Wilderoth wrote:

> I have some problems with my ceph filesystem. I have a folder that i 
> cant remove.
> 
> I.E.
> root@lintx2:/mnt/backuppc/pc# ls -la toberemoved/ 
> total 0
> drwxr-x--- 1 backuppc backuppc    28804802 May 15 13:29 .
> drwxr-x--- 1 backuppc backuppc 29421083732 Jun  1 15:16 ..
> 
> root@lintx2:/mnt/backuppc/pc# rm -rf toberemoved 
> rm: cannot remove `toberemoved': Directory not empty

This is a bug in the internal rstats accounting that the rmdir is relying 
on to verify the directory is empty.  Right now, the workaround is to just 
rename that directory out of the way somewhere.  This will be at the top 
of the list of MDS bugs we'll look at when we shift focus to the fs in the 
next month or two.

http://tracker.newdream.net/issues/2494
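
A sketch of that rename workaround, using the paths from the report (any out-of-the-way name will do):

	# move the stuck directory aside instead of trying to delete it
	mv /mnt/backuppc/pc/toberemoved /mnt/backuppc/pc/toberemoved.stuck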

> I also have a folder that when I do ls i get the following message
> 
> [ 1828.569091] ceph: ceph_add_cap: couldn't find snap realm 100

This too.  

In both cases, if you are able to reproduce the bug reliably, that is 
incredibly useful information that will make it much easier to fix the 
bug.  Please include that info in the tracker, if you can!

For example, on the second bug,

http://tracker.newdream.net/issues/2506

- what snapshots did you create?
- what operations were you doing that triggered the bug?
- (ideally) how do you reproduce it from a fresh fs or fresh mount?

Thanks!  Sorry we can't be more help now, but we'll get to this soon...

sage


> [ 1828.569105] ------------[ cut here ]------------
> [ 1828.569121] WARNING: at /build/buildd-linux-2.6_3.2.17-1~bpo60+1-amd64-CJo7Ex/linux-2.6-3.2.17/debian/build/source_amd64_none/fs/ceph/caps.c:590 ceph_add_cap+0x38e/0x49e [ceph]()
> [ 1828.569139] Modules linked in: cryptd aes_x86_64 aes_generic cbc ceph libceph crc32c libcrc32c evdev snd_pcm snd_timer snd soundcore snd_page_alloc pcspkr ext3 jbd mbcache xen_netfront xen_blkfront
> [ 1828.569182] Pid: 18, comm: kworker/0:1 Tainted: G        W    3.2.0-0.bpo.2-amd64 #1
> [ 1828.569193] Call Trace:
> [ 1828.569207]  [<ffffffff810497ec>] ? warn_slowpath_common+0x78/0x8c
> [ 1828.569221]  [<ffffffffa00db647>] ? ceph_add_cap+0x38e/0x49e [ceph]
> [ 1828.569233]  [<ffffffffa00d220a>] ? fill_inode+0x4eb/0x602 [ceph]
> [ 1828.569244]  [<ffffffffa00d331b>] ? ceph_dentry_lru_touch+0x2a/0x68 [ceph]
> [ 1828.569258]  [<ffffffffa00d317d>] ? ceph_readdir_prepopulate+0x2de/0x375 [ceph]
> [ 1828.569271]  [<ffffffffa00e2d3f>] ? dispatch+0xa35/0xef2 [ceph]
> [ 1828.569286]  [<ffffffffa00ae841>] ? ceph_tcp_recvmsg+0x43/0x4f [libceph]
> [ 1828.569297]  [<ffffffffa00b0821>] ? con_work+0x1070/0x13b8 [libceph]
> [ 1828.569308]  [<ffffffff81044580>] ? update_curr+0xbc/0x160
> [ 1828.569319]  [<ffffffffa00af7b1>] ? try_write+0xbe1/0xbe1 [libceph]
> [ 1828.569332]  [<ffffffff8105f8bb>] ? process_one_work+0x1cc/0x2ea
> [ 1828.569342]  [<ffffffff8105fb06>] ? worker_thread+0x12d/0x247
> [ 1828.569353]  [<ffffffff8105f9d9>] ? process_one_work+0x2ea/0x2ea
> [ 1828.569361]  [<ffffffff8105f9d9>] ? process_one_work+0x2ea/0x2ea
> [ 1828.569372]  [<ffffffff81063311>] ? kthread+0x7a/0x82
> [ 1828.569384]  [<ffffffff8136bb34>] ? kernel_thread_helper+0x4/0x10
> [ 1828.569395]  [<ffffffff81369bf3>] ? int_ret_from_sys_call+0x7/0x1b
> [ 1828.569406]  [<ffffffff813646fc>] ? retint_restore_args+0x5/0x6
> [ 1828.569417]  [<ffffffff8136bb30>] ? gs_change+0x13/0x13
> [ 1828.569423] ---[ end trace 98770cddb79a6a55 ]---
> [ 1828.569433] ceph: ceph_add_cap: couldn't find snap realm 100
> [ 1828.569442] ------------[ cut here ]------------
> [ 1828.569452] WARNING: at /build/buildd-linux-2.6_3.2.17-1~bpo60+1-amd64-CJo7Ex/linux-2.6-3.2.17/debian/build/source_amd64_none/fs/ceph/caps.c:590 ceph_add_cap+0x38e/0x49e [ceph]()
> [ 1828.569467] Modules linked in: cryptd aes_x86_64 aes_generic cbc ceph libceph crc32c libcrc32c evdev snd_pcm snd_timer snd soundcore snd_page_alloc pcspkr ext3 jbd mbcache xen_netfront xen_blkfront
> [ 1828.569500] Pid: 18, comm: kworker/0:1 Tainted: G        W    3.2.0-0.bpo.2-amd64 #1
> [ 1828.569508] Call Trace:
> [ 1828.569513]  [<ffffffff810497ec>] ? warn_slowpath_common+0x78/0x8c
> [ 1828.569523]  [<ffffffffa00db647>] ? ceph_add_cap+0x38e/0x49e [ceph]
> [ 1828.569533]  [<ffffffffa00d220a>] ? fill_inode+0x4eb/0x602 [ceph]
> [ 1828.569543]  [<ffffffffa00d331b>] ? ceph_dentry_lru_touch+0x2a/0x68 [ceph]
> [ 1828.569552]  [<ffffffffa00d317d>] ? ceph_readdir_prepopulate+0x2de/0x375 [ceph]
> [ 1828.569563]  [<ffffffffa00e2d3f>] ? dispatch+0xa35/0xef2 [ceph]
> [ 1828.569573]  [<ffffffffa00ae841>] ? ceph_tcp_recvmsg+0x43/0x4f [libceph]
> [ 1828.569583]  [<ffffffffa00b0821>] ? con_work+0x1070/0x13b8 [libceph]
> [ 1828.569590]  [<ffffffff81044580>] ? update_curr+0xbc/0x160
> [ 1828.569599]  [<ffffffffa00af7b1>] ? try_write+0xbe1/0xbe1 [libceph]
> [ 1828.569607]  [<ffffffff8105f8bb>] ? process_one_work+0x1cc/0x2ea
> [ 1828.569615]  [<ffffffff8105fb06>] ? worker_thread+0x12d/0x247
> [ 1828.569622]  [<ffffffff8105f9d9>] ? process_one_work+0x2ea/0x2ea
> [ 1828.569630]  [<ffffffff8105f9d9>] ? process_one_work+0x2ea/0x2ea
> [ 1828.569637]  [<ffffffff81063311>] ? kthread+0x7a/0x82
> [ 1828.569644]  [<ffffffff8136bb34>] ? kernel_thread_helper+0x4/0x10
> [ 1828.569652]  [<ffffffff81369bf3>] ? int_ret_from_sys_call+0x7/0x1b
> [ 1828.569660]  [<ffffffff813646fc>] ? retint_restore_args+0x5/0x6
> [ 1828.569667]  [<ffffffff8136bb30>] ? gs_change+0x13/0x13
> [ 1828.569673] ---[ end trace 98770cddb79a6a56 ]---
> Then i see some folders
> 
> Is there a way to remove this error directories or a reason / bug why I get this messages.
> 
> The folder that I try to remove had a similar problem as the one above, I manages to remove
> all visible files.
> 
>  /Regards Martin


* ceph fs
       [not found] <32bad8d3-1ef7-4067-8899-20d72973df10@mail.linserv.se>
@ 2012-06-02  6:03 ` Martin Wilderoth
  2012-06-04  0:42   ` Sage Weil
  0 siblings, 1 reply; 7+ messages in thread
From: Martin Wilderoth @ 2012-06-02  6:03 UTC (permalink / raw)
  To: ceph-devel

I have some problems with my ceph filesystem. I have a folder that I can't remove.

I.E.
root@lintx2:/mnt/backuppc/pc# ls -la toberemoved/ 
total 0
drwxr-x--- 1 backuppc backuppc    28804802 May 15 13:29 .
drwxr-x--- 1 backuppc backuppc 29421083732 Jun  1 15:16 ..

root@lintx2:/mnt/backuppc/pc# rm -rf toberemoved 
rm: cannot remove `toberemoved': Directory not empty

I also have a folder that when I do ls i get the following message

[ 1828.569091] ceph: ceph_add_cap: couldn't find snap realm 100
[ 1828.569105] ------------[ cut here ]------------
[ 1828.569121] WARNING: at /build/buildd-linux-2.6_3.2.17-1~bpo60+1-amd64-CJo7Ex/linux-2.6-3.2.17/debian/build/source_amd64_none/fs/ceph/caps.c:590 ceph_add_cap+0x38e/0x49e [ceph]()
[ 1828.569139] Modules linked in: cryptd aes_x86_64 aes_generic cbc ceph libceph crc32c libcrc32c evdev snd_pcm snd_timer snd soundcore snd_page_alloc pcspkr ext3 jbd mbcache xen_netfront xen_blkfront
[ 1828.569182] Pid: 18, comm: kworker/0:1 Tainted: G        W    3.2.0-0.bpo.2-amd64 #1
[ 1828.569193] Call Trace:
[ 1828.569207]  [<ffffffff810497ec>] ? warn_slowpath_common+0x78/0x8c
[ 1828.569221]  [<ffffffffa00db647>] ? ceph_add_cap+0x38e/0x49e [ceph]
[ 1828.569233]  [<ffffffffa00d220a>] ? fill_inode+0x4eb/0x602 [ceph]
[ 1828.569244]  [<ffffffffa00d331b>] ? ceph_dentry_lru_touch+0x2a/0x68 [ceph]
[ 1828.569258]  [<ffffffffa00d317d>] ? ceph_readdir_prepopulate+0x2de/0x375 [ceph]
[ 1828.569271]  [<ffffffffa00e2d3f>] ? dispatch+0xa35/0xef2 [ceph]
[ 1828.569286]  [<ffffffffa00ae841>] ? ceph_tcp_recvmsg+0x43/0x4f [libceph]
[ 1828.569297]  [<ffffffffa00b0821>] ? con_work+0x1070/0x13b8 [libceph]
[ 1828.569308]  [<ffffffff81044580>] ? update_curr+0xbc/0x160
[ 1828.569319]  [<ffffffffa00af7b1>] ? try_write+0xbe1/0xbe1 [libceph]
[ 1828.569332]  [<ffffffff8105f8bb>] ? process_one_work+0x1cc/0x2ea
[ 1828.569342]  [<ffffffff8105fb06>] ? worker_thread+0x12d/0x247
[ 1828.569353]  [<ffffffff8105f9d9>] ? process_one_work+0x2ea/0x2ea
[ 1828.569361]  [<ffffffff8105f9d9>] ? process_one_work+0x2ea/0x2ea
[ 1828.569372]  [<ffffffff81063311>] ? kthread+0x7a/0x82
[ 1828.569384]  [<ffffffff8136bb34>] ? kernel_thread_helper+0x4/0x10
[ 1828.569395]  [<ffffffff81369bf3>] ? int_ret_from_sys_call+0x7/0x1b
[ 1828.569406]  [<ffffffff813646fc>] ? retint_restore_args+0x5/0x6
[ 1828.569417]  [<ffffffff8136bb30>] ? gs_change+0x13/0x13
[ 1828.569423] ---[ end trace 98770cddb79a6a55 ]---
[ 1828.569433] ceph: ceph_add_cap: couldn't find snap realm 100
[ 1828.569442] ------------[ cut here ]------------
[ 1828.569452] WARNING: at /build/buildd-linux-2.6_3.2.17-1~bpo60+1-amd64-CJo7Ex/linux-2.6-3.2.17/debian/build/source_amd64_none/fs/ceph/caps.c:590 ceph_add_cap+0x38e/0x49e [ceph]()
[ 1828.569467] Modules linked in: cryptd aes_x86_64 aes_generic cbc ceph libceph crc32c libcrc32c evdev snd_pcm snd_timer snd soundcore snd_page_alloc pcspkr ext3 jbd mbcache xen_netfront xen_blkfront
[ 1828.569500] Pid: 18, comm: kworker/0:1 Tainted: G        W    3.2.0-0.bpo.2-amd64 #1
[ 1828.569508] Call Trace:
[ 1828.569513]  [<ffffffff810497ec>] ? warn_slowpath_common+0x78/0x8c
[ 1828.569523]  [<ffffffffa00db647>] ? ceph_add_cap+0x38e/0x49e [ceph]
[ 1828.569533]  [<ffffffffa00d220a>] ? fill_inode+0x4eb/0x602 [ceph]
[ 1828.569543]  [<ffffffffa00d331b>] ? ceph_dentry_lru_touch+0x2a/0x68 [ceph]
[ 1828.569552]  [<ffffffffa00d317d>] ? ceph_readdir_prepopulate+0x2de/0x375 [ceph]
[ 1828.569563]  [<ffffffffa00e2d3f>] ? dispatch+0xa35/0xef2 [ceph]
[ 1828.569573]  [<ffffffffa00ae841>] ? ceph_tcp_recvmsg+0x43/0x4f [libceph]
[ 1828.569583]  [<ffffffffa00b0821>] ? con_work+0x1070/0x13b8 [libceph]
[ 1828.569590]  [<ffffffff81044580>] ? update_curr+0xbc/0x160
[ 1828.569599]  [<ffffffffa00af7b1>] ? try_write+0xbe1/0xbe1 [libceph]
[ 1828.569607]  [<ffffffff8105f8bb>] ? process_one_work+0x1cc/0x2ea
[ 1828.569615]  [<ffffffff8105fb06>] ? worker_thread+0x12d/0x247
[ 1828.569622]  [<ffffffff8105f9d9>] ? process_one_work+0x2ea/0x2ea
[ 1828.569630]  [<ffffffff8105f9d9>] ? process_one_work+0x2ea/0x2ea
[ 1828.569637]  [<ffffffff81063311>] ? kthread+0x7a/0x82
[ 1828.569644]  [<ffffffff8136bb34>] ? kernel_thread_helper+0x4/0x10
[ 1828.569652]  [<ffffffff81369bf3>] ? int_ret_from_sys_call+0x7/0x1b
[ 1828.569660]  [<ffffffff813646fc>] ? retint_restore_args+0x5/0x6
[ 1828.569667]  [<ffffffff8136bb30>] ? gs_change+0x13/0x13
[ 1828.569673] ---[ end trace 98770cddb79a6a56 ]---
After this I see some folders.

Is there a way to remove these broken directories, or a reason/bug explaining why I get these messages?

The folder that I am trying to remove had a similar problem to the one above; I managed to remove
all visible files.

 /Regards Martin


* Re: ceph fs
  2011-05-13 17:47 Fyodor Ustinov
@ 2011-05-13 19:28 ` Sage Weil
  0 siblings, 0 replies; 7+ messages in thread
From: Sage Weil @ 2011-05-13 19:28 UTC (permalink / raw)
  To: Fyodor Ustinov; +Cc: ceph-devel

On Fri, 13 May 2011, Fyodor Ustinov wrote:
> On Friday 13 May 2011 18:53:58 you wrote:
> > On Thu, 12 May 2011, Fyodor Ustinov wrote:
> > > Hi!
> > > 
> > > Like previous, but ceph fs instead of rbd.
> > > (i.e. iozone with 4G file).
> > > 
> > > [  783.295035] ceph: loaded (mds proto 32)
> > > [  783.300122] libceph: client4125 fsid
> > > ff352dfd-078c-e65f-a769-d25abb384d92 [  783.300642] libceph: mon0
> > > 77.120.112.193:6789 session established [  941.278185] libceph: msg_new
> > > can't create type 0 front 4096
> > > [  941.278456] libceph: msgpool osd_op alloc failed
> > > [  941.278539] libceph: msg_new can't create type 0 front 512
> > > [  941.278606] libceph: msgpool osd_op_reply alloc failed
> > > [  941.278670] libceph: msg_new can't create type 0 front 4096
> > > [  941.278737] libceph: msgpool osd_op alloc failed
> > > [  941.278808] libceph: msg_new can't create type 0 front 512
> > > [  941.278875] libceph: msgpool osd_op_reply alloc failed
> > > [  941.279011] libceph: msg_new can't create type 0 front 4096
> > > [  941.279079] libceph: msgpool osd_op alloc failed
> > > [  941.279153] libceph: msg_new can't create type 0 front 512
> > > [  941.279220] libceph: msgpool osd_op_reply alloc failed
> > > [  941.279286] libceph: msg_new can't create type 0 front 4096
> > > [  941.279352] libceph: msgpool osd_op alloc failed
> > > [  941.279422] libceph: msg_new can't create type 0 front 512
> > > [  941.279623] libceph: msgpool osd_op_reply alloc failed
> > > [  941.279692] libceph: msg_new can't create type 0 front 4096
> > > [  941.279897] libceph: msgpool osd_op alloc failed
> > > [  941.280042] libceph: msg_new can't create type 0 front 512
> > > [  941.280300] libceph: msgpool osd_op_reply alloc failed
> > > [  941.280418] libceph: msg_new can't create type 0 front 4096
> > > [  941.280534] libceph: msgpool osd_op alloc failed
> > > [  941.280793] libceph: msg_new can't create type 0 front 512
> > > [  941.280958] libceph: msgpool osd_op_reply alloc failed
> > > [  941.281074] libceph: msg_new can't create type 0 front 4096
> > > [  941.281236] libceph: msgpool osd_op alloc failed
> > 
> > The ceph memory allocations are definitely not bulletproof and low memory
> > situations can still cause problems. 
> 
> Lack of memory is not the worst. The worst thing is that after some time the 
> server crashed with a Kernel panic.

Do you mean the ext3 xattr bug?  We've never seen it with ext4 (or btrfs, 
obviously).  The server side is all userland so generally speaking you 
should never see a panic there unless there is a btrfs bug...

> > I'm not sure that that is what is
> > going on here, but you might try adjusting vm.min_free_kbytes and see if
> > that has an effect.  e.g.,
> > 
> > 	sysctl -w vm.min_free_kbytes=262144
> Old value  - vm.min_free_kbytes = 5752
> 
> Seem to be helped. But 260M "minimum free" - looks scary. :)

Yeah, that number came from someone else on this list; adjust as needed!

sage


* Re: ceph fs
@ 2011-05-13 17:47 Fyodor Ustinov
  2011-05-13 19:28 ` Sage Weil
  0 siblings, 1 reply; 7+ messages in thread
From: Fyodor Ustinov @ 2011-05-13 17:47 UTC (permalink / raw)
  To: ceph-devel

On Friday 13 May 2011 18:53:58 you wrote:
> On Thu, 12 May 2011, Fyodor Ustinov wrote:
> > Hi!
> > 
> > Like previous, but ceph fs instead of rbd.
> > (i.e. iozone with 4G file).
> > 
> > [  783.295035] ceph: loaded (mds proto 32)
> > [  783.300122] libceph: client4125 fsid
> > ff352dfd-078c-e65f-a769-d25abb384d92 [  783.300642] libceph: mon0
> > 77.120.112.193:6789 session established [  941.278185] libceph: msg_new
> > can't create type 0 front 4096
> > [  941.278456] libceph: msgpool osd_op alloc failed
> > [  941.278539] libceph: msg_new can't create type 0 front 512
> > [  941.278606] libceph: msgpool osd_op_reply alloc failed
> > [  941.278670] libceph: msg_new can't create type 0 front 4096
> > [  941.278737] libceph: msgpool osd_op alloc failed
> > [  941.278808] libceph: msg_new can't create type 0 front 512
> > [  941.278875] libceph: msgpool osd_op_reply alloc failed
> > [  941.279011] libceph: msg_new can't create type 0 front 4096
> > [  941.279079] libceph: msgpool osd_op alloc failed
> > [  941.279153] libceph: msg_new can't create type 0 front 512
> > [  941.279220] libceph: msgpool osd_op_reply alloc failed
> > [  941.279286] libceph: msg_new can't create type 0 front 4096
> > [  941.279352] libceph: msgpool osd_op alloc failed
> > [  941.279422] libceph: msg_new can't create type 0 front 512
> > [  941.279623] libceph: msgpool osd_op_reply alloc failed
> > [  941.279692] libceph: msg_new can't create type 0 front 4096
> > [  941.279897] libceph: msgpool osd_op alloc failed
> > [  941.280042] libceph: msg_new can't create type 0 front 512
> > [  941.280300] libceph: msgpool osd_op_reply alloc failed
> > [  941.280418] libceph: msg_new can't create type 0 front 4096
> > [  941.280534] libceph: msgpool osd_op alloc failed
> > [  941.280793] libceph: msg_new can't create type 0 front 512
> > [  941.280958] libceph: msgpool osd_op_reply alloc failed
> > [  941.281074] libceph: msg_new can't create type 0 front 4096
> > [  941.281236] libceph: msgpool osd_op alloc failed
> 
> The ceph memory allocations are definitely not bulletproof and low memory
> situations can still cause problems. 

Lack of memory is not the worst part. The worst thing is that after some time the 
server crashed with a kernel panic.

> I'm not sure that that is what is
> going on here, but you might try adjusting vm.min_free_kbytes and see if
> that has an effect.  e.g.,
> 
> 	sysctl -w vm.min_free_kbytes=262144
The old value was vm.min_free_kbytes = 5752.

It seems to have helped. But a 260 MB "minimum free" looks scary. :)

WBR,
    Fyodor.


Thread overview: 7+ messages
2011-05-12 18:29 ceph fs Fyodor Ustinov
2011-05-13 15:53 ` Sage Weil
2011-05-13 17:47 Fyodor Ustinov
2011-05-13 19:28 ` Sage Weil
     [not found] <32bad8d3-1ef7-4067-8899-20d72973df10@mail.linserv.se>
2012-06-02  6:03 ` Martin Wilderoth
2012-06-04  0:42   ` Sage Weil
2012-06-04 11:58     ` Martin Wilderoth
