* Re: [linux-lvm] lvremove snapshot hangs LVM system
@ 2012-06-13 16:09 Da
  2012-06-19 13:34 ` Da
  0 siblings, 1 reply; 7+ messages in thread
From: Da @ 2012-06-13 16:09 UTC (permalink / raw)
  To: linux-lvm

[-- Attachment #1: Type: text/plain, Size: 1979 bytes --]

I went a bit further and I think I discovered what is blocking LVM.

When I use "lvremove", I see that the involved "dm" devices are not
deleted, but left SUSPENDED:

[root@node2 ~]# lvremove -f /dev/vgtest01/snap20
  Logical volume "snap20" successfully removed
[root@node2 ~]# dmsetup -vvv status vgtest01-snap20
dm version   OF   [16384]
dm status vgtest01-snap20  OF   [16384]
Name:              vgtest01-snap20
State:             SUSPENDED
vgtest01-snap20: read ahead is 256
Read Ahead:        256
Tables present:    LIVE & INACTIVE
Open count:        0
Event number:      0
Major, minor:      253, 4
Number of targets: 1
UUID: LVM-fU6kuI1yVWxAjsu1WmL1TmvishGAZaZNWytFJGEg5qYFByZ79PjPNoKPnf8KyiiZ

0 409600 snapshot 16/40960 16
[root@node2 ~]# dmsetup -vvv status vgtest01-snap20-cow
dm version   OF   [16384]
dm status vgtest01-snap20-cow  OF   [16384]
Name:              vgtest01-snap20-cow
State:             SUSPENDED
vgtest01-snap20-cow: read ahead is 256
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 3
Number of targets: 1
UUID:
LVM-fU6kuI1yVWxAjsu1WmL1TmvishGAZaZNWytFJGEg5qYFByZ79PjPNoKPnf8KyiiZ-cow

0 40960 linear

In that situation, if I execute any "lvm" command, those suspended "dm"
devices block all I/O activity and the command hangs forever.
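To spot which devices were left behind in this state, here is a small
sketch (my own helper, not an LVM tool; it assumes `dmsetup info` with no
arguments prints per-device "Name:"/"State:" blocks like the output
above, and needs root):

```shell
#!/bin/sh
# Sketch: print the names of device-mapper devices left SUSPENDED.
# Assumes `dmsetup info` (no argument) prints per-device blocks with
# "Name:" and "State:" lines, as in the output above. Needs root.

suspended_names() {
    # reads `dmsetup info` output on stdin; prints SUSPENDED device names
    awk '/^Name:/  {name = $2}
         /^State:/ && $2 == "SUSPENDED" {print name}'
}

# dmsetup info | suspended_names
# in the situation above this would print vgtest01-snap20 and
# vgtest01-snap20-cow
```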

If I resume them, before or after trying the command:
[root@node2 ~]# dmsetup resume vgtest01-snap20
[root@node2 ~]# dmsetup resume vgtest01-snap20-cow

everything is unblocked and "lvdisplay" works perfectly.
But, for some reason, something is still inconsistent in "lvm": I am not
able to create new snapshots:

[root@node2 ~]# lvcreate -s -L 20M -n snap30 /dev/vgtest01/lvtest-snap01
  /dev/vgtest01/snap30: not found: device not cleared
  Aborting. Failed to wipe snapshot exception store.


So, at this point: does anyone know whether this situation implies
something critical? Is there any way to solve it without restarting
clvmd?

I will keep at it.

Thanks!

[-- Attachment #2: Type: text/html, Size: 2243 bytes --]

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [linux-lvm] lvremove snapshot hangs LVM system
  2012-06-13 16:09 [linux-lvm] lvremove snapshot hangs LVM system Da
@ 2012-06-19 13:34 ` Da
  0 siblings, 0 replies; 7+ messages in thread
From: Da @ 2012-06-19 13:34 UTC (permalink / raw)
  To: linux-lvm

[-- Attachment #1: Type: text/plain, Size: 1609 bytes --]

I think I finally found the problem and a workaround.

If I start the processes normally:
/etc/init.d/cman start
/etc/init.d/clvmd start

And mark the snapshots' source volume for "exclusive use":
lvchange -an volume
lvchange -aey volume

And I try to remove a snapshot:
lvremove -f snapshot

The involved "dm" devices get suspended but not deleted, causing the
eternal hang of the LVM system due to the blocked I/O.

Now, if BEFORE the "lvremove" I execute a built-in clvmd restart:
clvmd -S

it fails to restart clvmd properly. Looking at the code, I think this is
because clvmd tries to pass itself the "-E" option, which is not
declared in my current lvm2 version (2.02.87). I don't know if I should
report this to lvm-devel:

[pid  2895] execve("/usr/sbin/clvmd", ["clvmd", "-E",
"fU6kuI1yVWxAjsu1WmL1TmvishGAZaZNH61u58pM21rrS3t8vyXvzsoiMaB4XHKX"],
[/* 0 vars */] <unfinished ...>
[pid  2892] write(2, "Usage: clvmd [options]\n   -V       Show version of clvmd\n   -h       Show this help information\n   -d[n]    Set debug logging (0:none, 1:stderr (implies -f option), 2:syslog)\n   -f       Don't fork, run in the foreground\n   -R       Tell all running clvmd"..., 611) = 611


Anyway, after the "clvmd -S" clvmd is not running, so I start it
manually with "/etc/init.d/clvmd start".
Then I have to mark the volume as exclusive again, but after that the
LVM system seems to work fine... I can create and delete snapshots
without problems, without hanging LVM.

It seems to me that the "clvmd -S" command does something the system
needs that the normal "stop/start" does not.
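Since the failure above seems to come from the missing "-E" option, one
could check the installed clvmd's usage text before relying on
"clvmd -S"; a sketch (parsing help output is fragile across versions, so
treat it as a heuristic, and `usage_has_E` is a made-up helper name):

```shell
#!/bin/sh
# Sketch: test whether the installed clvmd documents the -E option in
# its usage text before relying on "clvmd -S" (which, per the strace
# above, re-execs clvmd with -E). Heuristic only; help-text parsing is
# fragile across versions.

usage_has_E() {
    # reads usage text on stdin; succeeds if an "-E" option is listed
    grep -q '^[[:space:]]*-E'
}

# if clvmd -h 2>&1 | usage_has_E; then
#     echo "clvmd -S should be able to re-exec itself"
# else
#     echo "this clvmd lacks -E; clvmd -S will fail to restart it"
# fi
```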

[-- Attachment #2: Type: text/html, Size: 2119 bytes --]

* Re: [linux-lvm] lvremove snapshot hangs LVM system
@ 2012-06-13 14:46 Da
  0 siblings, 0 replies; 7+ messages in thread
From: Da @ 2012-06-13 14:46 UTC (permalink / raw)
  To: linux-lvm

[-- Attachment #1: Type: text/plain, Size: 1166 bytes --]

It's been a long time, but I finally had time to test this a bit.

As Ray Morris suggested, I tried disabling selinux, with no luck: the
same thing happens.

Anyway, I did some more testing and I think I detected the problem, but
not yet the solution.
The problem is that "lvremove" fails to remove the "dm" device: nothing
changes in "dmsetup ls" after an "lvremove", even though "lvremove"
says the removal was successful.

If I remove the "dm" devices before the "lvremove", then it seems to work:
[root@node2 ~]# dmsetup remove vgtest01-snap02 && dmsetup remove vgtest01-snap02-cow && lvremove /dev/vgtest01/snap02
  Logical volume "snap02" successfully removed
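The workaround above could be wrapped up roughly like this (a sketch;
`remove_snap` and `dm_name` are hypothetical helper names, not LVM
tools, and device-mapper doubles any hyphens inside VG/LV names, as the
`vols--001`-style names elsewhere in this thread show; needs root):

```shell
#!/bin/sh
# Sketch of the workaround above: remove the dm devices first, then
# lvremove. remove_snap/dm_name are made-up helpers, not LVM commands.

dm_name() {
    # device-mapper name for $1 (VG) and $2 (LV): hyphens inside each
    # name are doubled, then the two parts are joined with one hyphen
    vg=$(printf '%s' "$1" | sed 's/-/--/g')
    lv=$(printf '%s' "$2" | sed 's/-/--/g')
    printf '%s-%s\n' "$vg" "$lv"
}

remove_snap() {
    # usage: remove_snap vgtest01 snap02
    dev=$(dm_name "$1" "$2")
    dmsetup remove "$dev" &&
    dmsetup remove "${dev}-cow" &&
    lvremove -f "/dev/$1/$2"
}
```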


But then if I try any other "lvm" command:
[root@node2 ~]# lvcreate -s -L 20M -n snap10 /dev/vgtest01/lvtest-snap01
  /dev/vgtest01/snap10: not found: device not cleared
  Aborting. Failed to wipe snapshot exception store.

So I have to restart clvmd (which is better than rebooting).

I tried creating the snapshot with "--monitor n" and with
"--noudevsync", with the same luck.

So I guess something is failing in the process that removes the "dm"...
I will keep looking...

Does anyone have any ideas?

Thanks!

[-- Attachment #2: Type: text/html, Size: 1401 bytes --]

* Re: [linux-lvm] lvremove snapshot hangs LVM system
@ 2012-05-07  9:19 Da
  0 siblings, 0 replies; 7+ messages in thread
From: Da @ 2012-05-07  9:19 UTC (permalink / raw)
  To: linux-lvm

Thanks a lot Ray,

I will give it a try as soon as I can find a maintenance window, as the server is in production.

Anyway, I add some information.

As I said, the LVM system stops responding after an "lvremove" of a
snapshot. This snapshot belongs to a clustered setup (clvmd), but its
origin volume is in exclusive use.
When I remove it (and that works), the snapshot is active. I thought
that maybe that was the problem, so I tried to deactivate it first,
with this result:

[root@s02 ~]# lvchange -an /dev/vols-001/vol01-autosnap-1333451491
  Can't change snapshot logical volume "vol01-autosnap-1333451491"

I don't know if I am doing it the correct way, but I am unable to
deactivate the snapshot.
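One way to tell up front that an LV is a snapshot (which, for old-style
snapshots, cannot be deactivated separately from its origin, consistent
with the error above) is the first character of the lvs attribute
string; a hedged sketch, where `is_snapshot_attr` is a made-up helper
and the attr layout is assumed to match lvm2 of this era:

```shell
#!/bin/sh
# Sketch: detect whether an LV is an (old-style) snapshot from its lvs
# attribute string; such snapshots can't be deactivated on their own.
# is_snapshot_attr is a made-up helper; the attr layout is an assumption.

is_snapshot_attr() {
    # reads an lv_attr string (e.g. "swi-a-") on stdin; the first char
    # 's' (or 'S' for an invalid snapshot) marks a snapshot
    read -r attr
    case $attr in
        s*|S*) return 0 ;;
        *)     return 1 ;;
    esac
}

# lvs --noheadings -o lv_attr vols-001/vol01-autosnap-1333451491 |
#     tr -d ' ' | is_snapshot_attr && echo "snapshot: lvchange -an will fail"
```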

* Re: [linux-lvm] lvremove snapshot hangs LVM system
  2012-04-24 15:27 Da
@ 2012-04-24 17:33 ` Ray Morris
  0 siblings, 0 replies; 7+ messages in thread
From: Ray Morris @ 2012-04-24 17:33 UTC (permalink / raw)
  To: linux-lvm

> Well, I suppose no one else had this problem.

I suppose not. I got a hang using snapshots atop mdadm raid, but I see
no mention of such in your message or log. I do see selinux calls in
your stack trace. It would take only seconds to setenforce 0 and test
that way, and longer to disable selinux entirely and test it. In the
best case, you'd find it's selinux related. In the worst case, you'd
shorten up your stack trace, narrowing down the problem.

Not that I'm advocating turning off selinux in general, but since
selinux is in the stack trace, a quick test with selinux off for a
moment might be useful.
-- 
Ray Morris
support@bettercgi.com

Strongbox - The next generation in site security:
http://www.bettercgi.com/strongbox/

Throttlebox - Intelligent Bandwidth Control
http://www.bettercgi.com/throttlebox/

Strongbox / Throttlebox affiliate program:
http://www.bettercgi.com/affiliates/user/register.php




On Tue, 24 Apr 2012 17:27:16 +0200
Da <dcodix@gmail.com> wrote:

> Well, I suppose no one else had this problem.
> 
> Does that mean that no one is using snapshots in a clvm environment,
> or that they just work for everybody?
> 
> Just to keep it updated:
> I tried with the latest software versions and the same thing happens;
> the "lvremove" of a snapshot just hangs my LVM system.
> Here are the versions I am using:
> 
> kernel-2.6.32-220.13.1.el6.x86_64
> cman-3.0.12.1-23.el6.x86_64
> corosync-1.4.1-4.el6.x86_64
> corosynclib-1.4.1-4.el6.x86_64
> lvm2-cluster-2.02.88-3.el6.x86_64
> lvm2-libs-2.02.88-3.el6.x86_64
> lvm2-2.02.88-3.el6.x86_64
> lvm2-devel-2.02.88-3.el6.x86_64
> fence-virt-0.2.3-5.el6.x86_64
> fence-agents-3.1.5-10.el6.x86_64
> 
> Thanks!
> 
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
> 

* Re: [linux-lvm] lvremove snapshot hangs LVM system
@ 2012-04-24 15:27 Da
  2012-04-24 17:33 ` Ray Morris
  0 siblings, 1 reply; 7+ messages in thread
From: Da @ 2012-04-24 15:27 UTC (permalink / raw)
  To: linux-lvm

Well, I suppose no one else had this problem.

Does that mean that no one is using snapshots in a clvm environment, or
that they just work for everybody?

Just to keep it updated:
I tried with the latest software versions and the same thing happens;
the "lvremove" of a snapshot just hangs my LVM system.
Here are the versions I am using:

kernel-2.6.32-220.13.1.el6.x86_64
cman-3.0.12.1-23.el6.x86_64
corosync-1.4.1-4.el6.x86_64
corosynclib-1.4.1-4.el6.x86_64
lvm2-cluster-2.02.88-3.el6.x86_64
lvm2-libs-2.02.88-3.el6.x86_64
lvm2-2.02.88-3.el6.x86_64
lvm2-devel-2.02.88-3.el6.x86_64
fence-virt-0.2.3-5.el6.x86_64
fence-agents-3.1.5-10.el6.x86_64

Thanks!

* [linux-lvm] lvremove snapshot hangs LVM system
@ 2012-04-03 15:05 Dan C
  0 siblings, 0 replies; 7+ messages in thread
From: Dan C @ 2012-04-03 15:05 UTC (permalink / raw)
  To: linux-lvm


[-- Attachment #1.1: Type: text/plain, Size: 1745 bytes --]

I've read about this in the mailing list, but all the messages were
quite old and always referenced old kernels, so I decided to post it
again.

My system is as follows:
Linux 2.6.32-131.17.1.el6.x86_64 #1 SMP Wed Oct 5 17:19:54 CDT 2011 x86_64
x86_64 x86_64 GNU/Linux

cman-3.0.12-23.el6.x86_64
corosynclib-1.2.3-21.el6.x86_64
corosync-1.2.3-21.el6.x86_64
lvm2-devel-2.02.88-3.el6.x86_64
lvm2-libs-2.02.88-3.el6.x86_64
lvm2-cluster-2.02.88-3.el6.x86_64
lvm2-2.02.88-3.el6.x86_64

I am running LVM clustered, starting clvmd as:
clvmd -T30

The disks in the LVM volume groups are all on a SAN connected via FC.

I am working in a two-node cluster.

As I needed the snapshot feature, I have some volumes active on only
one of the nodes; as far as I know, this is the only way to snapshot
volumes.

Everything works fine. I am able to create volumes, and I am able to
put volumes into "exclusive use", which gives me the capability to
snapshot. Snapshotting works fine.

My problem comes when I "lvremove" a snapshot. It seems to work fine,
and the snapshot is removed, but whatever LVM command I execute after
that hangs forever in an "uninterruptible sleep" (D).
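The hung state can be confirmed from `ps` output; a small sketch (the
`d_state` helper name is made up) that filters processes in
uninterruptible sleep:

```shell
#!/bin/sh
# Sketch: list processes stuck in uninterruptible sleep (state "D"),
# like the hung LVM commands described above. d_state is a made-up name.

d_state() {
    # reads `ps -eo pid,stat,comm` output on stdin;
    # prints pid and command for processes whose state starts with D
    awk '$2 ~ /^D/ {print $1, $3}'
}

# ps -eo pid,stat,comm | d_state
# in the attached log this would have shown, e.g., 29372 lvdisplay
```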

After that, for a while, I can see some errors in the log (attached as
messages.log). Those messages eventually stop, but LVM commands still
won't respond.

The rest of the system works perfectly; the only problem is that I
cannot make modifications to the LVM system.

The only way to solve the problem is by rebooting the machine.

As a note, this only happens when "lvremoving" a snapshot; removing a
regular volume works fine.

I don't know if I am doing something wrong, as I read that this problem
used to happen with older versions but was supposedly solved.

Does someone else have this problem?

Thanks a lot.

[-- Attachment #1.2: Type: text/html, Size: 1928 bytes --]

[-- Attachment #2: messages.log --]
[-- Type: application/octet-stream, Size: 8856 bytes --]

Apr  2 10:06:44 s02 lvm[18444]: No longer monitoring snapshot ofx--virtdisks--001-SL6.1--mysql--10G--clone--prod--02--2nd.snap201203060956
Apr  2 10:06:44 s02 lvm[18444]: No longer monitoring snapshot ofx--virtdisks--001-SL6.1--mysql--10G--clone--prod--02--2nd.snap201203191211
Apr  2 10:06:44 s02 lvm[18444]: No longer monitoring snapshot ofx--virtdisks--001-SL6.1--mysql--10G--clone--prod--02--2nd--autosnap--1333353742
Apr  2 10:06:44 s02 lvm[18444]: Monitoring snapshot ofx--virtdisks--001-SL6.1--mysql--10G--clone--prod--02--2nd.snap201203191211
Apr  2 10:06:44 s02 lvm[18444]: Monitoring snapshot ofx--virtdisks--001-SL6.1--mysql--10G--clone--prod--02--2nd--autosnap--1333353742
Apr  2 10:08:26 s02 libvirtd: Could not find keytab file: /etc/libvirt/krb5.tab: Permission denied
Apr  2 10:08:26 s02 libvirtd: Could not find keytab file: /etc/libvirt/krb5.tab: Permission denied
Apr  2 10:08:47 s02 libvirtd: Could not find keytab file: /etc/libvirt/krb5.tab: Permission denied
Apr  2 10:09:08 s02 libvirtd: Could not find keytab file: /etc/libvirt/krb5.tab: Permission denied
Apr  2 10:09:10 s02 kernel: INFO: task lvdisplay:29372 blocked for more than 120 seconds.
Apr  2 10:09:10 s02 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr  2 10:09:10 s02 kernel: lvdisplay     D 0000000000000006     0 29372      1 0x00000084
Apr  2 10:09:10 s02 kernel: ffff88041b3f9b88 0000000000000086 ffff88041b3f9b48 ffffffffa00040bc
Apr  2 10:09:10 s02 kernel: ffff88041b3f9b58 00000000bfea3967 ffff88041b3f9b78 ffff880a24540200
Apr  2 10:09:10 s02 kernel: ffff88026c76b0b8 ffff88041b3f9fd8 000000000000f598 ffff88026c76b0b8
Apr  2 10:09:10 s02 kernel: Call Trace:
Apr  2 10:09:10 s02 kernel: [<ffffffffa00040bc>] ? dm_table_unplug_all+0x5c/0xd0 [dm_mod]
Apr  2 10:09:10 s02 kernel: [<ffffffff81098d19>] ? ktime_get_ts+0xa9/0xe0
Apr  2 10:09:10 s02 kernel: [<ffffffff814db743>] io_schedule+0x73/0xc0
Apr  2 10:09:10 s02 kernel: [<ffffffff811ac20e>] __blockdev_direct_IO+0x70e/0xc40
Apr  2 10:09:10 s02 kernel: [<ffffffff811a9e57>] blkdev_direct_IO+0x57/0x60
Apr  2 10:09:10 s02 kernel: [<ffffffff811a9020>] ? blkdev_get_blocks+0x0/0xc0
Apr  2 10:09:10 s02 kernel: [<ffffffff8110f19b>] generic_file_aio_read+0x6bb/0x700
Apr  2 10:09:10 s02 kernel: [<ffffffff8120c981>] ? avc_has_perm+0x71/0x90
Apr  2 10:09:10 s02 kernel: [<ffffffff812064af>] ? security_inode_permission+0x1f/0x30
Apr  2 10:09:10 s02 kernel: [<ffffffff8117269a>] do_sync_read+0xfa/0x140
Apr  2 10:09:10 s02 kernel: [<ffffffff8108e180>] ? autoremove_wake_function+0x0/0x40
Apr  2 10:09:10 s02 kernel: [<ffffffff811a93ec>] ? block_ioctl+0x3c/0x40
Apr  2 10:09:10 s02 kernel: [<ffffffff81185042>] ? vfs_ioctl+0x22/0xa0
Apr  2 10:09:10 s02 kernel: [<ffffffff81211edb>] ? selinux_file_permission+0xfb/0x150
Apr  2 10:09:10 s02 kernel: [<ffffffff81205346>] ? security_file_permission+0x16/0x20
Apr  2 10:09:10 s02 kernel: [<ffffffff811730c5>] vfs_read+0xb5/0x1a0
Apr  2 10:09:10 s02 kernel: [<ffffffff810d1b52>] ? audit_syscall_entry+0x272/0x2a0
Apr  2 10:09:10 s02 kernel: [<ffffffff81173201>] sys_read+0x51/0x90
Apr  2 10:09:10 s02 kernel: [<ffffffff8100b172>] system_call_fastpath+0x16/0x1b
Apr  2 10:10:17 s02 dnsmasq-dhcp[3183]: DHCP packet received on br3 which has no address
Apr  2 10:11:10 s02 kernel: INFO: task lvdisplay:29372 blocked for more than 120 seconds.
Apr  2 10:11:10 s02 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr  2 10:11:10 s02 kernel: lvdisplay     D 0000000000000006     0 29372      1 0x00000084
Apr  2 10:11:10 s02 kernel: ffff88041b3f9b88 0000000000000086 ffff88041b3f9b48 ffffffffa00040bc
Apr  2 10:11:10 s02 kernel: ffff88041b3f9b58 00000000bfea3967 ffff88041b3f9b78 ffff880a24540200
Apr  2 10:11:10 s02 kernel: ffff88026c76b0b8 ffff88041b3f9fd8 000000000000f598 ffff88026c76b0b8
Apr  2 10:11:10 s02 kernel: Call Trace:
Apr  2 10:11:10 s02 kernel: [<ffffffffa00040bc>] ? dm_table_unplug_all+0x5c/0xd0 [dm_mod]
Apr  2 10:11:10 s02 kernel: [<ffffffff81098d19>] ? ktime_get_ts+0xa9/0xe0
Apr  2 10:11:10 s02 kernel: [<ffffffff814db743>] io_schedule+0x73/0xc0
Apr  2 10:11:10 s02 kernel: [<ffffffff811ac20e>] __blockdev_direct_IO+0x70e/0xc40
Apr  2 10:11:10 s02 kernel: [<ffffffff811a9e57>] blkdev_direct_IO+0x57/0x60
Apr  2 10:11:10 s02 kernel: [<ffffffff811a9020>] ? blkdev_get_blocks+0x0/0xc0
Apr  2 10:11:10 s02 kernel: [<ffffffff8110f19b>] generic_file_aio_read+0x6bb/0x700
Apr  2 10:11:10 s02 kernel: [<ffffffff8120c981>] ? avc_has_perm+0x71/0x90
Apr  2 10:11:10 s02 kernel: [<ffffffff812064af>] ? security_inode_permission+0x1f/0x30
Apr  2 10:11:10 s02 kernel: [<ffffffff8117269a>] do_sync_read+0xfa/0x140
Apr  2 10:11:10 s02 kernel: [<ffffffff8108e180>] ? autoremove_wake_function+0x0/0x40
Apr  2 10:11:10 s02 kernel: [<ffffffff811a93ec>] ? block_ioctl+0x3c/0x40
Apr  2 10:11:10 s02 kernel: [<ffffffff81185042>] ? vfs_ioctl+0x22/0xa0
Apr  2 10:11:10 s02 kernel: [<ffffffff81211edb>] ? selinux_file_permission+0xfb/0x150
Apr  2 10:11:10 s02 kernel: [<ffffffff81205346>] ? security_file_permission+0x16/0x20
Apr  2 10:11:10 s02 kernel: [<ffffffff811730c5>] vfs_read+0xb5/0x1a0
Apr  2 10:11:10 s02 kernel: [<ffffffff810d1b52>] ? audit_syscall_entry+0x272/0x2a0
Apr  2 10:11:10 s02 kernel: [<ffffffff81173201>] sys_read+0x51/0x90
Apr  2 10:11:10 s02 kernel: [<ffffffff8100b172>] system_call_fastpath+0x16/0x1b
Apr  2 10:12:57 s02 dnsmasq-dhcp[3183]: DHCP packet received on br3 which has no address
Apr  2 10:13:10 s02 kernel: INFO: task qemu-kvm:22570 blocked for more than 120 seconds.
Apr  2 10:13:10 s02 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr  2 10:13:10 s02 kernel: qemu-kvm      D 0000000000000006     0 22570      1 0x00000080
Apr  2 10:13:10 s02 kernel: ffff8804ee3d5a68 0000000000000082 0000000000000000 ffffea000bbec910
Apr  2 10:13:10 s02 kernel: ffff8804ee3d59d8 ffffffff81012969 ffff8804ee3d5a18 00000001231112ef
Apr  2 10:13:10 s02 kernel: ffff880a2220c678 ffff8804ee3d5fd8 000000000000f598 ffff880a2220c678
Apr  2 10:13:10 s02 kernel: Call Trace:
Apr  2 10:13:10 s02 kernel: [<ffffffff81012969>] ? read_tsc+0x9/0x20
Apr  2 10:13:10 s02 kernel: [<ffffffff8110d3d0>] ? sync_page+0x0/0x50
Apr  2 10:13:10 s02 kernel: [<ffffffff814db743>] io_schedule+0x73/0xc0
Apr  2 10:13:10 s02 kernel: [<ffffffff8110d40d>] sync_page+0x3d/0x50
Apr  2 10:13:10 s02 kernel: [<ffffffff814dbfaf>] __wait_on_bit+0x5f/0x90
Apr  2 10:13:10 s02 kernel: [<ffffffff8110d5c3>] wait_on_page_bit+0x73/0x80
Apr  2 10:13:10 s02 kernel: [<ffffffff8108e1c0>] ? wake_bit_function+0x0/0x50
Apr  2 10:13:10 s02 kernel: [<ffffffff811232d5>] ? pagevec_lookup_tag+0x25/0x40
Apr  2 10:13:10 s02 kernel: [<ffffffff8110d9db>] wait_on_page_writeback_range+0xfb/0x190
Apr  2 10:13:10 s02 kernel: [<ffffffff8110dba8>] filemap_write_and_wait_range+0x78/0x90
Apr  2 10:13:10 s02 kernel: [<ffffffff811a0abe>] vfs_fsync_range+0x7e/0xe0
Apr  2 10:13:10 s02 kernel: [<ffffffff811a9501>] ? __invalidate_device+0x11/0x80
Apr  2 10:13:10 s02 kernel: [<ffffffff811a0b6b>] generic_write_sync+0x4b/0x50
Apr  2 10:13:10 s02 kernel: [<ffffffff811a95ee>] blkdev_aio_write+0x7e/0xa0
Apr  2 10:13:10 s02 kernel: [<ffffffff811a9570>] ? blkdev_aio_write+0x0/0xa0
Apr  2 10:13:10 s02 kernel: [<ffffffff8117241b>] do_sync_readv_writev+0xfb/0x140
Apr  2 10:13:10 s02 kernel: [<ffffffff8108e180>] ? autoremove_wake_function+0x0/0x40
Apr  2 10:13:10 s02 kernel: [<ffffffff81211edb>] ? selinux_file_permission+0xfb/0x150
Apr  2 10:13:10 s02 kernel: [<ffffffff81205346>] ? security_file_permission+0x16/0x20
Apr  2 10:13:10 s02 kernel: [<ffffffff811734df>] do_readv_writev+0xcf/0x1f0
Apr  2 10:13:10 s02 kernel: [<ffffffff8107ff76>] ? group_send_sig_info+0x56/0x70
Apr  2 10:13:10 s02 kernel: [<ffffffff8107ffcf>] ? kill_pid_info+0x3f/0x60
Apr  2 10:13:10 s02 kernel: [<ffffffff81173646>] vfs_writev+0x46/0x60
Apr  2 10:13:10 s02 kernel: [<ffffffff81173702>] sys_pwritev+0xa2/0xc0
Apr  2 10:13:10 s02 kernel: [<ffffffff8100b172>] system_call_fastpath+0x16/0x1b
Apr  2 10:13:10 s02 kernel: INFO: task lvdisplay:29372 blocked for more than 120 seconds.
Apr  2 10:13:10 s02 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr  2 10:13:10 s02 kernel: lvdisplay     D 0000000000000006     0 29372      1 0x00000084
Apr  2 10:13:10 s02 kernel: ffff88041b3f9b88 0000000000000086 ffff88041b3f9b48 ffffffffa00040bc
Apr  2 10:13:10 s02 kernel: ffff88041b3f9b58 00000000bfea3967 ffff88041b3f9b78 ffff880a24540200
Apr  2 10:13:10 s02 kernel: ffff88026c76b0b8 ffff88041b3f9fd8 000000000000f598 ffff88026c76b0b8
Apr  2 10:13:10 s02 kernel: Call Trace:
Apr  2 10:13:10 s02 kernel: [<ffffffffa00040bc>] ? dm_table_unplug_all+0x5c/0xd0 [dm_mod]
Apr  2 10:13:10 s02 kernel: [<ffffffff81098d19>] ? ktime_get_ts+0xa9/0xe0
Apr  2 10:13:10 s02 kernel: [<ffffffff814db743>] io_schedule+0x73/0xc0
