* [linux-lvm] Question on compatibility with 2.6.31 kernel.
@ 2009-09-18 19:25 Ben Greear
2009-09-18 19:48 ` Milan Broz
0 siblings, 1 reply; 17+ messages in thread
From: Ben Greear @ 2009-09-18 19:25 UTC (permalink / raw)
To: linux-lvm
Hello!
I recently tried to boot 2.6.31 on Fedora 8, and it couldn't
find the volume groups. The same kernel works fine on F11.
Someone on LKML said they had similar problems on an old Debian Etch
system and to fix it they installed a new version of lvm2 and put
that in the initrd.
I am trying to package a general purpose 2.6.31, and part of the install
logic is to run mkinitrd on the end-user's system, so it will be a big
pain to also require users to install a non-standard lvm for their
platform.
So, does anyone have ideas about what incompatibility I might be hitting,
and perhaps how to fix it in the kernel so that new kernels will boot
with older lvm installs?
Thanks,
Ben
--
Ben Greear <greearb@candelatech.com>
Candela Technologies Inc http://www.candelatech.com
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [linux-lvm] Question on compatibility with 2.6.31 kernel.
2009-09-18 19:25 [linux-lvm] Question on compatibility with 2.6.31 kernel Ben Greear
@ 2009-09-18 19:48 ` Milan Broz
2009-09-18 20:47 ` [linux-lvm] Disk crash on LVM Fredrik Skog
2009-09-18 20:56 ` [linux-lvm] Question on compatibility with 2.6.31 kernel Ben Greear
0 siblings, 2 replies; 17+ messages in thread
From: Milan Broz @ 2009-09-18 19:48 UTC (permalink / raw)
To: LVM general discussion and development
Ben Greear wrote:
> I recently tried to boot 2.6.31 on Fedora 8, and it couldn't
> find the volume groups. The same kernel works fine on F11.
Try recompiling the kernel with
CONFIG_SYSFS_DEPRECATED=y
CONFIG_SYSFS_DEPRECATED_V2=y
(old lvm will not understand the new sysfs design; these options should
provide the old sysfs entries)
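To verify whether a given kernel build has these options, one can grep its config file. A minimal sketch; the config path is an assumption (Fedora normally installs the running kernel's config under /boot, and some kernels expose it as /proc/config.gz):

```shell
# Print the deprecated-sysfs settings from a kernel config file.
# The path below is an assumption; adjust it for your distribution.
show_sysfs_deprecated() {
    grep -E '^CONFIG_SYSFS_DEPRECATED(_V2)?=' "$1"
}

cfg="/boot/config-$(uname -r)"
if [ -r "$cfg" ]; then
    show_sysfs_deprecated "$cfg"
fi
```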
> Someone on LKML said they had similar problems on an old Debian Etch
> system and to fix it they installed a new version of lvm2 and put
> that in the initrd.
Yes, this is another option; a new lvm2 (I think >2.02.29) should work.
But note that the device-mapper library must also be updated.
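A quick way to check whether the installed lvm2 meets that bar (a sketch: it assumes the usual "LVM version: X.Y.Z" line that `lvm version` prints, and takes 2.02.29 as the threshold mentioned above):

```shell
# Compare the installed lvm2 version against the suggested minimum.
# sort -V (GNU coreutils) does a version-aware comparison, so if the
# minimum sorts first, the installed version is at least that new.
need="2.02.29"
have=$(lvm version 2>/dev/null | awk '/LVM version/ {print $3}')
if [ -n "$have" ] && [ "$(printf '%s\n%s\n' "$need" "$have" | sort -V | head -n1)" = "$need" ]; then
    echo "lvm2 $have should work"
else
    echo "lvm2 missing or older than $need; upgrade lvm2 and libdevmapper"
fi
```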
Milan
* [linux-lvm] Disk crash on LVM
2009-09-18 19:48 ` Milan Broz
@ 2009-09-18 20:47 ` Fredrik Skog
2009-09-18 21:19 ` Ray Morris
2009-09-18 20:56 ` [linux-lvm] Question on compatibility with 2.6.31 kernel Ben Greear
1 sibling, 1 reply; 17+ messages in thread
From: Fredrik Skog @ 2009-09-18 20:47 UTC (permalink / raw)
To: LVM general discussion and development
Hi
I'm a beginner with LVM2. I run Gentoo Linux with an LV consisting of 5
physical drives. I use LVM2 as it's installed, so I guess it's not striped.
It started with read problems on one of the drives at certain times; it
took a long time to access files. I then used smartctl to test the drive
and it reported a failure.
ID# ATTRIBUTE_NAME         FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate    0x000f 200   200   051    Pre-fail Always  -           1453
  3 Spin_Up_Time           0x0003 148   148   021    Pre-fail Always  -           7591
  4 Start_Stop_Count       0x0032 100   100   000    Old_age  Always  -           38
  5 Reallocated_Sector_Ct  0x0033 126   126   140    Pre-fail Always  FAILING_NOW 591
  7 Seek_Error_Rate        0x000e 200   200   051    Old_age  Always  -           0
...
I shut down the whole system, bought a new drive, and added it to the VG.
When the failed drive is cold it's recognized by LVM when I boot, but once
it gets warm it's not recognized at all. Running pvs results in this:
# pvs
/dev/sdc: read failed after 0 of 4096 at 0: Input/output error
/dev/sdc1: read failed after 0 of 2048 at 0: Input/output error
/dev/block/253:0: read failed after 0 of 4096 at 500103577600: Input/output error
/dev/block/253:0: read failed after 0 of 4096 at 500103634944: Input/output error
/dev/sdc: read failed after 0 of 4096 at 0: Input/output error
/dev/sdc: read failed after 0 of 4096 at 500107771904: Input/output error
/dev/sdc: read failed after 0 of 4096 at 500107853824: Input/output error
/dev/sdc: read failed after 0 of 4096 at 0: Input/output error
/dev/sdc: read failed after 0 of 4096 at 4096: Input/output error
/dev/sdc: read failed after 0 of 4096 at 0: Input/output error
/dev/sdc1: read failed after 0 of 1024 at 500105150464: Input/output error
/dev/sdc1: read failed after 0 of 1024 at 500105207808: Input/output error
/dev/sdc1: read failed after 0 of 1024 at 0: Input/output error
PV VG Fmt Attr PSize PFree
/dev/hda1 vgftp lvm2 a- 74.51G 0
/dev/hda2 vgftp lvm2 a- 74.51G 0
/dev/hda3 vgftp lvm2 a- 74.51G 0
/dev/hda4 vgftp lvm2 a- 74.55G 0
/dev/hdb1 vgftp lvm2 a- 74.51G 0
/dev/hdb2 vgftp lvm2 a- 74.51G 0
/dev/hdb3 vgftp lvm2 a- 74.51G 0
/dev/hdb4 vgftp lvm2 a- 74.55G 0
/dev/sdb1 vgftp lvm2 a- 931.51G 0
/dev/sdc1 vgftp lvm2 a- 465.76G 0
/dev/sdd1 vgftp lvm2 a- 931.51G 0
/dev/sde1 vgftp lvm2 a- 1.36T 931.50G
I want to do a pvmove from the old drive to my newly added drive, but as
soon as I do, I get the same errors as with the pvs command. Maybe I will
try to freeze the drive if nothing else works. Is there a way to force
pvmove or something similar? I would really like to rescue as much data
as possible from the failed drive.
If it's not possible to rescue anything from the drive, how should I
proceed for best results with the rest of the drives? Will I still be
able to access the files on the other drives?
How do I remove the failed drive in a clean manner? pvremove? vgreduce?
I couldn't find any info on how best to remove a failed drive with an
accepted data loss.
thanks
/Fredrik
* Re: [linux-lvm] Question on compatibility with 2.6.31 kernel.
2009-09-18 19:48 ` Milan Broz
2009-09-18 20:47 ` [linux-lvm] Disk crash on LVM Fredrik Skog
@ 2009-09-18 20:56 ` Ben Greear
2009-09-19 18:20 ` Charles Marcus
1 sibling, 1 reply; 17+ messages in thread
From: Ben Greear @ 2009-09-18 20:56 UTC (permalink / raw)
To: LVM general discussion and development
On 09/18/2009 12:48 PM, Milan Broz wrote:
> Ben Greear wrote:
>> I recently tried to boot 2.6.31 on Fedora 8, and it couldn't
>> find the volume groups. The same kernel works fine on F11.
>
> try to recompile kernel with
> CONFIG_SYSFS_DEPRECATED=y
> CONFIG_SYSFS_DEPRECATED_V2=y
>
> (old lvm will not understand new sysfs design, this should
> provide old sysfs entries)
That did the trick, thanks!
Ben
--
Ben Greear <greearb@candelatech.com>
Candela Technologies Inc http://www.candelatech.com
* Re: [linux-lvm] Disk crash on LVM
2009-09-18 20:47 ` [linux-lvm] Disk crash on LVM Fredrik Skog
@ 2009-09-18 21:19 ` Ray Morris
2009-09-18 21:31 ` Stuart D. Gathman
2009-09-19 14:11 ` André Gillibert
0 siblings, 2 replies; 17+ messages in thread
From: Ray Morris @ 2009-09-18 21:19 UTC (permalink / raw)
To: LVM general discussion and development
Here's one approach.
pvmove is very slow and very safe. You want
to get the data off that drive in a hurry, before
it heats up, so pvmove is not your friend in this
case. Freeze the drive, then find out which LVs
are on it:
pvdisplay -m /dev/sdc1
Hopefully, the drive contains whole LVs, or nearly
whole, as opposed to having just little portions of
many LVs. If most of the LV is on sdc1, we're going
to use dd to get the data off before the drive gets
too warm. For small portions of larger LVs, you can
use pvmove.
To prepare for the dd, create a new VG (called
"copy" below) that doesn't use sdc1. Then use
lvdisplay to get each LV's size in extents and
lvcreate -l to create duplicate LVs of the same size:
lvcreate -n <name> -l <sameextents> copy
Then dd from the old LV to its new copy:
dd if=/dev/org/$1 bs=64M iflag=direct |
dd of=/dev/copy/$1 bs=64M oflag=direct
That piped dd is 2-3 times faster than the "obvious"
way to run dd.
It might also make sense to just dd the whole drive
instead of doing one LV at a time.
--
Ray Morris
support@bettercgi.com
Strongbox - The next generation in site security:
http://www.bettercgi.com/strongbox/
Throttlebox - Intelligent Bandwidth Control
http://www.bettercgi.com/throttlebox/
Strongbox / Throttlebox affiliate program:
http://www.bettercgi.com/affiliates/user/register.php
* Re: [linux-lvm] Disk crash on LVM
2009-09-18 21:19 ` Ray Morris
@ 2009-09-18 21:31 ` Stuart D. Gathman
2009-09-18 21:55 ` Ray Morris
2009-09-19 14:11 ` André Gillibert
1 sibling, 1 reply; 17+ messages in thread
From: Stuart D. Gathman @ 2009-09-18 21:31 UTC (permalink / raw)
To: LVM general discussion and development
On Fri, 18 Sep 2009, Ray Morris wrote:
> pvmove is very slow and very safe. You want
> to get the data off that drive in a hurry, before
> it heats up, so pvmove is not your friend in this
> case. Freeze the drive, then find out which LVs
> are on it:
I've found that external data/power cables that let you connect
an unmounted drive sitting on your workbench are your friend
in these cases. You can even keep the drive in the freezer
(just make sure it is dry enough to be non-condensing).
Stay-Dri hospital ice bags also work well.
--
Stuart D. Gathman <stuart@bmsi.com>
Business Management Systems Inc. Phone: 703 591-0911 Fax: 703 591-6154
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.
* Re: [linux-lvm] Disk crash on LVM
2009-09-18 21:31 ` Stuart D. Gathman
@ 2009-09-18 21:55 ` Ray Morris
0 siblings, 0 replies; 17+ messages in thread
From: Ray Morris @ 2009-09-18 21:55 UTC (permalink / raw)
To: LVM general discussion and development
I forgot to say - pvmove is slow partially because
it works well even for mounted volumes. Using dd,
you definitely want to unmount the volume.
--
Ray Morris
support@bettercgi.com
Strongbox - The next generation in site security:
http://www.bettercgi.com/strongbox/
Throttlebox - Intelligent Bandwidth Control
http://www.bettercgi.com/throttlebox/
Strongbox / Throttlebox affiliate program:
http://www.bettercgi.com/affiliates/user/register.php
* Re: [linux-lvm] Disk crash on LVM
2009-09-18 21:19 ` Ray Morris
2009-09-18 21:31 ` Stuart D. Gathman
@ 2009-09-19 14:11 ` André Gillibert
2009-09-19 17:45 ` Fredrik Skog
1 sibling, 1 reply; 17+ messages in thread
From: André Gillibert @ 2009-09-19 14:11 UTC (permalink / raw)
To: linux-lvm
Ray Morris <support@bettercgi.com> wrote:
> [...]
> Then dd from the old copy of the LV to the new:
>
> dd if=/dev/org/$1 bs=64M iflag=direct |
> dd of=/dev/copy/$1 bs=64M oflag=direct
>
> That piped dd is 2-3 times faster than the "obvious"
> way to run dd.
> [...]
The issue with dd is that if any read() fails, it skips that block (64M here) and doesn't write it to the output, making the output smaller than the input.
With conv=sync,noerror it's better, but losing a whole 64M block at once is still a bad thing.
That's why I think dd_rescue would be better.
<http://www.garloff.de/kurt/linux/ddrescue/>
If the drive still gets warm too fast, I've heard that storing it in a freezer for 24 hours may make it work again.
<http://geeksaresexy.blogspot.com/2006/01/freeze-your-hard-drive-to-recover-data.html>
If the drive gives out while dd or dd_rescue is running, it's possible to continue copying later from the point where it failed.
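For reference, a hedged sketch of such a rescue copy using GNU ddrescue (a separate tool from Kurt Garloff's dd_rescue, with different syntax; the device, image, and map-file names here are examples only):

```shell
# First pass: grab everything that reads cleanly, skipping bad areas
# quickly (-n) and recording progress in a map file so the run can be
# resumed after the drive has cooled down again.
ddrescue -n /dev/sdc1 /mnt/rescue/sdc1.img /mnt/rescue/sdc1.map

# Later passes: go back and retry only the bad areas, up to 3 times.
ddrescue -r3 /dev/sdc1 /mnt/rescue/sdc1.img /mnt/rescue/sdc1.map
```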
--
André Gillibert
* Re: [linux-lvm] Disk crash on LVM
2009-09-19 14:11 ` André Gillibert
@ 2009-09-19 17:45 ` Fredrik Skog
2009-09-21 11:27 ` Peter Keller
0 siblings, 1 reply; 17+ messages in thread
From: Fredrik Skog @ 2009-09-19 17:45 UTC (permalink / raw)
To: LVM general discussion and development
Thanks guys for your input on the matter.
I lengthened the power cables and bought a full-length SATA cable. Now the
disk is in the freezer and the pvmove is in progress: 10% now, so far so good.
The reason I decided on pvmove instead of dd or dd_rescue was that I had
tried a pvmove before, so the process was already started, but it had
stopped at 1%. Now, with a frozen and working disk, it continued from
where it left off.
I can tell you how it turned out later.
Thanks
/Fredrik
* Re: [linux-lvm] Question on compatibility with 2.6.31 kernel.
2009-09-18 20:56 ` [linux-lvm] Question on compatibility with 2.6.31 kernel Ben Greear
@ 2009-09-19 18:20 ` Charles Marcus
2009-09-19 22:17 ` André Gillibert
0 siblings, 1 reply; 17+ messages in thread
From: Charles Marcus @ 2009-09-19 18:20 UTC (permalink / raw)
To: LVM general discussion and development
On 9/18/2009, Ben Greear (greearb@candelatech.com) wrote:
> > I recently tried to boot 2.6.31 on Fedora 8, and it couldn't
> > find the volume groups. The same kernel works fine on F11.
>
> try to recompile kernel with
> CONFIG_SYSFS_DEPRECATED=y
> CONFIG_SYSFS_DEPRECATED_V2=y
>
> (old lvm will not understand new sysfs design, this should
> provide old sysfs entries)
I'm also an lvm newbie, and since I'll obviously be running into this
sometime soon...
If lvm2 is updated first (before the kernel), I'm assuming it would
still be backward compatible with older kernels?
Or, if I updated the kernel first and compiled it with the above
options, when lvm2 gets updated, will I need to recompile the kernel
with the above options disabled before the next reboot?
Thanks,
--
Best regards,
Charles
* Re: [linux-lvm] Question on compatibility with 2.6.31 kernel.
2009-09-19 18:20 ` Charles Marcus
@ 2009-09-19 22:17 ` André Gillibert
2009-09-20 8:12 ` Milan Broz
0 siblings, 1 reply; 17+ messages in thread
From: André Gillibert @ 2009-09-19 22:17 UTC (permalink / raw)
To: linux-lvm
Charles Marcus <CMarcus@Media-Brokers.com> wrote:
> On 9/18/2009, Ben Greear (greearb@candelatech.com) wrote:
> > > I recently tried to boot 2.6.31 on Fedora 8, and it couldn't
> > > find the volume groups. The same kernel works fine on F11.
> >
> > try to recompile kernel with
> > CONFIG_SYSFS_DEPRECATED=y
> > CONFIG_SYSFS_DEPRECATED_V2=y
> >
> > (old lvm will not understand new sysfs design, this should
> > provide old sysfs entries)
>
> I'm also an lvm newbie, and since I'll obviously be running into this
> sometime soon...
>
> If lvm2 is updated first (before the kernel), I'm assuming it would
> still be backward compatible with older kernels?
>
Yes, I'm almost sure it is. I run a 2.6.30 kernel with CONFIG_SYSFS_DEPRECATED=y and have a relatively recent LVM, and it works well.
> Or, if I updated the kernel first and compiled it with the above
> options, when lvm2 gets updated, will I need to recompile the kernel
> with the above options disabled before the next reboot?
>
I don't think so. LVM guys are not evil. :)
--
André Gillibert
* Re: [linux-lvm] Question on compatibility with 2.6.31 kernel.
2009-09-19 22:17 ` André Gillibert
@ 2009-09-20 8:12 ` Milan Broz
0 siblings, 0 replies; 17+ messages in thread
From: Milan Broz @ 2009-09-20 8:12 UTC (permalink / raw)
To: LVM general discussion and development
André Gillibert wrote:
>> Or, if I updated the kernel first and compiled it with the above
>> options, when lvm2 gets updated, will I need to recompile the kernel
>> with the above options disabled before the next reboot?
>>
>
> I don't think so. LVM guys are not evil. :)
Are you sure? ;-)
Well, in fact this is not an lvm2 problem, but a problem with incompatible
changes in the sysfs design.
(Recent lvm2 userspace code will recognise devices in all cases.)
But there can be other problems, not related to lvm2, with these
deprecated options set; see e.g. http://lkml.org/lkml/2009/3/24/628
So for a new kernel with old userspace (udev etc.), enable them; for a
new distro with recent userspace, keep them disabled.
Milan
* Re: [linux-lvm] Disk crash on LVM
2009-09-19 17:45 ` Fredrik Skog
@ 2009-09-21 11:27 ` Peter Keller
2009-09-22 13:27 ` Fredrik Skog
0 siblings, 1 reply; 17+ messages in thread
From: Peter Keller @ 2009-09-21 11:27 UTC (permalink / raw)
To: LVM general discussion and development
Coming a bit late to this thread...
On Sat, 19 Sep 2009, Fredrik Skog wrote:
> Thanks guys for your input on the matter.
> I lenghtened the power cables and bought a full lenght SATA cable. Now the
> disk is in the freezer and in progress with pvmove. 10% now. so far so good.
> The reason i decided for the pvmove instead of dd or dd_rescue was the fact
> that i tried a pvmove before, so the process was already started but it
> stopped working on 1%. Now with a frozen and working disk it continued from
> where it left off.
I have found that with sequential reads like this, adjusting the readahead
of the device with something like 'blockdev --setra nnn' can dramatically
shorten the time needed to read the whole device.
The default usually seems to be too low when reading sequentially. If you
haven't already adjusted it, try adjusting it upwards. Values like 8192 or
16384 may help.
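As a concrete sketch (the device name is an example; readahead is measured in 512-byte sectors, and the setting does not survive a reboot):

```shell
# Show the current readahead, then raise it for the duration of the
# sequential copy. 16384 sectors = 8 MiB of readahead.
blockdev --getra /dev/sdc
blockdev --setra 16384 /dev/sdc
```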
Good luck,
Peter.
--
Peter Keller Tel.: +44 (0)1223 353033
Global Phasing Ltd., Fax.: +44 (0)1223 366889
Sheraton House,
Castle Park,
Cambridge CB3 0AX
United Kingdom
* Re: [linux-lvm] Disk crash on LVM
2009-09-21 11:27 ` Peter Keller
@ 2009-09-22 13:27 ` Fredrik Skog
2009-09-22 13:43 ` Fredrik Skog
0 siblings, 1 reply; 17+ messages in thread
From: Fredrik Skog @ 2009-09-22 13:27 UTC (permalink / raw)
To: LVM general discussion and development
Hi
I'll update you on the progress with my crashed hd.
I'm not yet finished with the pvmove from the device, because it won't
work for very long at a time.
If I have the drive at room temperature and then put it in the freezer
for 25-30 minutes, it works for maybe 15-20 minutes (the drive still in
the freezer) and I get pvmove progress of about 15-20%.
If I then shut it down and just let it cool again, I get maybe 5-7% more
out of it before it dies. Prolonging the time in the freezer doesn't seem
to help at all; in fact sometimes it's worse and the drive doesn't start
at all. The drive seems to need a few hours of warm-up time after it's
been frozen and stopped working before I can refreeze it.
It takes a ridiculous amount of time fiddling with this, but I hope to
reach the finish soon.
I have not yet tried to increase the readahead.
/Fredrik
* Re: [linux-lvm] Disk crash on LVM
2009-09-22 13:27 ` Fredrik Skog
@ 2009-09-22 13:43 ` Fredrik Skog
2009-09-22 15:54 ` Fredrik Skog
0 siblings, 1 reply; 17+ messages in thread
From: Fredrik Skog @ 2009-09-22 13:43 UTC (permalink / raw)
To: LVM general discussion and development
Hello again,
Is there any way to see the total progress of pvmove? I have now issued
the command 7-8 times, each run making a different amount of progress,
and I have no idea how much of the total remains.
Thanks
/Fredrik
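For anyone hitting the same question: pvmove only reports progress for the current invocation, but the extent counters from pvs can show how much data is still left on the source PV. A sketch, with a placeholder device name:

```shell
PV=/dev/sdb1   # hypothetical failing source PV; substitute your disk

# Rough "percent still to move" from total vs. allocated extent counts:
# extents still allocated on the source PV have not been moved off yet.
percent_remaining() { echo $(( 100 * $2 / $1 )); }

# Query the counters only if the LVM tools and the device are present.
if command -v pvs >/dev/null 2>&1 && [ -b "$PV" ]; then
    set -- $(pvs --noheadings -o pv_pe_count,pv_pe_alloc_count "$PV")
    echo "$(percent_remaining "$1" "$2")% of extents still on $PV"
fi
```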
----- Original Message -----
From: "Fredrik Skog" <fredrik.skog@rodang.se>
To: "LVM general discussion and development" <linux-lvm@redhat.com>
Sent: Tuesday, September 22, 2009 3:27 PM
Subject: Re: [linux-lvm] Disk crash on LVM
> Hi
>
> I'll update you on the progress with my crashed hd.
> I'm not yet finished with the pvmove from the device, because it won't
> work for very long at a time.
> If I have the drive at room temperature and then put it in the freezer
> for 25-30 minutes, it works for maybe 15-20 minutes (with the drive
> still in the freezer) and I get pvmove progress of about 15-20%.
> If I then shut it down and just let it cool again, I get maybe 5-7% more
> out of it before it dies. Prolonging the time in the freezer doesn't
> seem to help at all; in fact sometimes it's worse and the drive doesn't
> start at all. The drive seems to need a few hours of warm-up time after
> it has been frozen and stopped working before I can refreeze it.
> It takes a ridiculous amount of time fiddling with this, but I hope to
> reach the finish soon.
> I have not yet tried to increase the readahead.
>
> /Fredrik
>
>
> ----- Original Message -----
> From: "Peter Keller" <pkeller@globalphasing.com>
> To: "LVM general discussion and development" <linux-lvm@redhat.com>
> Sent: Monday, September 21, 2009 1:27 PM
> Subject: Re: [linux-lvm] Disk crash on LVM
>
>
>> Coming a bit late to this thread...
>>
>> On Sat, 19 Sep 2009, Fredrik Skog wrote:
>>
>>> Thanks, guys, for your input on the matter.
>>> I lengthened the power cables and bought a full-length SATA cable. Now
>>> the disk is in the freezer and the pvmove is in progress: 10% now, so
>>> far so good.
>>> The reason I decided on pvmove instead of dd or dd_rescue was that I
>>> had tried a pvmove before, so the process was already started, but it
>>> had stopped at 1%. Now, with a frozen and working disk, it continued
>>> from where it left off.
>>
>> I have found that with sequential reads like this, adjusting the
>> readahead of the device with something like 'blockdev --setra nnn' can
>> dramatically shorten the time needed to read the whole device.
>>
>> The default usually seems to be too low when reading sequentially. If
>> you haven't already adjusted it, try adjusting it upwards. Values like
>> 8192 or 16384 may help.
>>
>> Good luck,
>> Peter.
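A minimal sketch of the readahead adjustment above (the device name is a placeholder; blockdev readahead values are in 512-byte sectors):

```shell
DEV=/dev/sdb   # hypothetical device being read sequentially

# Only touch the setting if the device actually exists on this machine.
if [ -b "$DEV" ]; then
    blockdev --getra "$DEV"          # show current readahead, in 512-byte sectors
    blockdev --setra 16384 "$DEV"    # raise it to 16384 sectors (8 MiB)
    blockdev --getra "$DEV"          # confirm the new value
fi
```

The setting is per-device and does not persist across reboots, so it is safe to experiment with.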
>>
>>> I can tell you how it turned out later.
>>>
>>> Thanks
>>>
>>> /Fredrik
>>>
>>>
>>>
>>> ----- Original Message ----- From: "André Gillibert" <rcvxdg@gmail.com>
>>> To: <linux-lvm@redhat.com>
>>> Sent: Saturday, September 19, 2009 4:11 PM
>>> Subject: Re: [linux-lvm] Disk crash on LVM
>>>
>>>
>>>> Ray Morris <support@bettercgi.com> wrote:
>>>>> [...]
>>>>> Then dd from the old copy of the LV to the new:
>>>>>
>>>>> dd if=/dev/org/$1 bs=64M iflag=direct |
>>>>> dd of=/dev/copy/$1 bs=64M oflag=direct
>>>>>
>>>>> That piped dd is 2-3 times faster than the "obvious"
>>>>> way to run dd.
>>>>> [...]
>>>>
>>>> The issue with dd is that if any read() fails, it skips that block
>>>> (64M) and doesn't write it to the output, making the output file
>>>> smaller than the input file.
>>>>
>>>> With conv=sync,noerror it behaves better, but losing a whole 64M
>>>> block at once is still a bad thing.
>>>>
>>>> That's why I think dd_rescue would be better.
>>>> <http://www.garloff.de/kurt/linux/ddrescue/>
>>>>
>>>> If it still gets warm too fast, I've heard that storing the hard
>>>> drive in a freezer for 24 hours may make it work again.
>>>> <http://geeksaresexy.blogspot.com/2006/01/freeze-your-hard-drive-to-recover-data.html>
>>>>
>>>> If the drive crashes and dd or dd_rescue fails, it's possible to
>>>> continue copying later from the point where it failed.
>>>>
>>>> --
>>>> André Gillibert
>>>>
>>>>
>>>>
>>>
>>>
>>
>
>
> --------------------------------------------------------------------------------
>
>
>
* Re: [linux-lvm] Disk crash on LVM
2009-09-22 13:43 ` Fredrik Skog
@ 2009-09-22 15:54 ` Fredrik Skog
2009-09-22 16:11 ` Alasdair G Kergon
0 siblings, 1 reply; 17+ messages in thread
From: Fredrik Skog @ 2009-09-22 15:54 UTC (permalink / raw)
To: LVM general discussion and development
Hi
I'm a bit confused about how pvmove works now...
I have done a lot of pvmoves on the same PV, and the total percentage of
all the pvmoves adds up to a lot more than 100%:
17% + 21% + 7% + 5% + 67% + 5% + ...
Why is that?
Thanks
/Fredrik
----- Original Message -----
From: "Fredrik Skog" <fredrik.skog@rodang.se>
To: "LVM general discussion and development" <linux-lvm@redhat.com>
Sent: Tuesday, September 22, 2009 3:43 PM
Subject: Re: [linux-lvm] Disk crash on LVM
> Hello again,
>
> Is there any way to see the total progress of pvmove? I have now issued
> the command 7-8 times, each run making a different amount of progress,
> and I have no idea how much of the total remains.
>
> Thanks
> /Fredrik
>
>
> ----- Original Message -----
> From: "Fredrik Skog" <fredrik.skog@rodang.se>
> To: "LVM general discussion and development" <linux-lvm@redhat.com>
> Sent: Tuesday, September 22, 2009 3:27 PM
> Subject: Re: [linux-lvm] Disk crash on LVM
>
>
>> Hi
>>
>> I'll update you on the progress with my crashed hd.
>> I'm not yet finished with the pvmove from the device, because it won't
>> work for very long at a time.
>> If I have the drive at room temperature and then put it in the freezer
>> for 25-30 minutes, it works for maybe 15-20 minutes (with the drive
>> still in the freezer) and I get pvmove progress of about 15-20%.
>> If I then shut it down and just let it cool again, I get maybe 5-7%
>> more out of it before it dies. Prolonging the time in the freezer
>> doesn't seem to help at all; in fact sometimes it's worse and the drive
>> doesn't start at all. The drive seems to need a few hours of warm-up
>> time after it has been frozen and stopped working before I can refreeze
>> it. It takes a ridiculous amount of time fiddling with this, but I hope
>> to reach the finish soon.
>> I have not yet tried to increase the readahead.
>>
>> /Fredrik
>>
>>
>> ----- Original Message -----
>> From: "Peter Keller" <pkeller@globalphasing.com>
>> To: "LVM general discussion and development" <linux-lvm@redhat.com>
>> Sent: Monday, September 21, 2009 1:27 PM
>> Subject: Re: [linux-lvm] Disk crash on LVM
>>
>>
>>> Coming a bit late to this thread...
>>>
>>> On Sat, 19 Sep 2009, Fredrik Skog wrote:
>>>
>>>> Thanks, guys, for your input on the matter.
>>>> I lengthened the power cables and bought a full-length SATA cable.
>>>> Now the disk is in the freezer and the pvmove is in progress: 10%
>>>> now, so far so good.
>>>> The reason I decided on pvmove instead of dd or dd_rescue was that I
>>>> had tried a pvmove before, so the process was already started, but it
>>>> had stopped at 1%. Now, with a frozen and working disk, it continued
>>>> from where it left off.
>>>
>>> I have found that with sequential reads like this, adjusting the
>>> readahead of the device with something like 'blockdev --setra nnn' can
>>> dramatically shorten the time needed to read the whole device.
>>>
>>> The default usually seems to be too low when reading sequentially. If
>>> you haven't already adjusted it, try adjusting it upwards. Values like
>>> 8192 or 16384 may help.
>>>
>>> Good luck,
>>> Peter.
>>>
>>>> I can tell you how it turned out later.
>>>>
>>>> Thanks
>>>>
>>>> /Fredrik
>>>>
>>>>
>>>>
>>>> ----- Original Message ----- From: "André Gillibert" <rcvxdg@gmail.com>
>>>> To: <linux-lvm@redhat.com>
>>>> Sent: Saturday, September 19, 2009 4:11 PM
>>>> Subject: Re: [linux-lvm] Disk crash on LVM
>>>>
>>>>
>>>>> Ray Morris <support@bettercgi.com> wrote:
>>>>>> [...]
>>>>>> Then dd from the old copy of the LV to the new:
>>>>>>
>>>>>> dd if=/dev/org/$1 bs=64M iflag=direct |
>>>>>> dd of=/dev/copy/$1 bs=64M oflag=direct
>>>>>>
>>>>>> That piped dd is 2-3 times faster than the "obvious"
>>>>>> way to run dd.
>>>>>> [...]
>>>>>
>>>>> The issue with dd is that if any read() fails, it skips that block
>>>>> (64M) and doesn't write it to the output, making the output file
>>>>> smaller than the input file.
>>>>>
>>>>> With conv=sync,noerror it behaves better, but losing a whole 64M
>>>>> block at once is still a bad thing.
>>>>>
>>>>> That's why I think dd_rescue would be better.
>>>>> <http://www.garloff.de/kurt/linux/ddrescue/>
>>>>>
>>>>> If it still gets warm too fast, I've heard that storing the hard
>>>>> drive in a freezer for 24 hours may make it work again.
>>>>> <http://geeksaresexy.blogspot.com/2006/01/freeze-your-hard-drive-to-recover-data.html>
>>>>>
>>>>> If the drive crashes and dd or dd_rescue fails, it's possible to
>>>>> continue copying later from the point where it failed.
>>>>>
>>>>> --
>>>>> André Gillibert
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>>
>> --------------------------------------------------------------------------------
>>
>>
>>
>
* Re: [linux-lvm] Disk crash on LVM
2009-09-22 15:54 ` Fredrik Skog
@ 2009-09-22 16:11 ` Alasdair G Kergon
0 siblings, 0 replies; 17+ messages in thread
From: Alasdair G Kergon @ 2009-09-22 16:11 UTC (permalink / raw)
To: Fredrik Skog; +Cc: LVM general discussion and development
On Tue, Sep 22, 2009 at 05:54:59PM +0200, Fredrik Skog wrote:
> I'm a bit confused about how pvmove works now...
> I have done a lot of pvmoves on the same PV, and the total percentage
> of all the pvmoves adds up to a lot more than 100%:
> 17% + 21% + 7% + 5% + 67% + 5% + ...
> Why is that?
Read the pvmove man page.
You should break up the move into lots of smaller pieces using the PE range
syntax. If the data is contiguous, you'll have moved 67%, the largest
of those numbers, and restarted at the beginning every time.
Better to 'divide and conquer' so you get lots of smaller moves that each reach 100%.
Alasdair
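Alasdair's PE-range suggestion might look like this in practice (device names and chunk size are placeholders; see pvmove(8) for the exact syntax):

```shell
SRC=/dev/sdb1   # hypothetical failing source PV
DST=/dev/sdc1   # hypothetical healthy destination PV

# Move extents in 1000-PE chunks; each invocation runs to 100% on its
# own small range instead of restarting a whole-PV move after a crash.
pvmove "$SRC":0-999     "$DST"
pvmove "$SRC":1000-1999 "$DST"
# ...and so on for the remaining ranges; if the drive dies partway
# through a chunk, re-run just that chunk.
```

With a drive that only survives 15-20 minutes at a time, small ranges mean each crash costs at most one chunk's worth of rework rather than the whole move.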
end of thread, other threads:[~2009-09-22 16:11 UTC | newest]
Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-09-18 19:25 [linux-lvm] Question on compatibility with 2.6.31 kernel Ben Greear
2009-09-18 19:48 ` Milan Broz
2009-09-18 20:47 ` [linux-lvm] Disk crash on LVM Fredrik Skog
2009-09-18 21:19 ` Ray Morris
2009-09-18 21:31 ` Stuart D. Gathman
2009-09-18 21:55 ` Ray Morris
2009-09-19 14:11 ` André Gillibert
2009-09-19 17:45 ` Fredrik Skog
2009-09-21 11:27 ` Peter Keller
2009-09-22 13:27 ` Fredrik Skog
2009-09-22 13:43 ` Fredrik Skog
2009-09-22 15:54 ` Fredrik Skog
2009-09-22 16:11 ` Alasdair G Kergon
2009-09-18 20:56 ` [linux-lvm] Question on compatibility with 2.6.31 kernel Ben Greear
2009-09-19 18:20 ` Charles Marcus
2009-09-19 22:17 ` André Gillibert
2009-09-20 8:12 ` Milan Broz