* [linux-lvm] Migrating LVM
@ 2010-04-21  4:06 M.Lewis
  2010-04-21  5:42 ` Luca Berra
  2010-04-21 14:18 ` Ray Morris
  0 siblings, 2 replies; 8+ messages in thread
From: M.Lewis @ 2010-04-21  4:06 UTC (permalink / raw)
  To: linux-lvm


I'm building a new machine with RAID1+LVM2. I plan to migrate the data
from the old machine's HD to the new machine.

Is it better to give the new VG the same name as the existing VG on the
old box, or should I make it different?

Is there an easy way to do this that I've not yet found? I was thinking
of giving the new VG the same name as the old one, adding the old drives
to the new machine, and then pvmoving the data.

Thanks for any pointers!
Mike


* Re: [linux-lvm] Migrating LVM
  2010-04-21  4:06 [linux-lvm] Migrating LVM M.Lewis
@ 2010-04-21  5:42 ` Luca Berra
  2010-04-21 14:18 ` Ray Morris
  1 sibling, 0 replies; 8+ messages in thread
From: Luca Berra @ 2010-04-21  5:42 UTC (permalink / raw)
  To: linux-lvm

On Tue, Apr 20, 2010 at 11:06:13PM -0500, M.Lewis wrote:
>
> I'm building a new machine with RAID1+LVM2. I plan to migrate the data from 
> the old machine HD to the new machine.
>
> Is it better to make the new VG the same name as the existing VG on the old 
> box? Or should I make it different.
>
> Is there an easy way to do this that I've not yet found? I was thinking to 
> make the new VG the same name as the old, add the old drives to the machine 
> then pvmove the data.

You cannot pvmove between two different VGs, and you cannot have two
VGs with the same name.

You could skip creating the new VG, vgextend the old VG onto the new
RAID1, then pvmove, but I don't know whether you need to change any
parameters in the new VG.
Another option is a file-level copy, or dd-ing the logical volumes.
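
A minimal sketch of the vgextend/pvmove route, assuming the old VG is
named vg0, the old disk is /dev/sda2, and the new RAID1 array is
/dev/md0 (all names hypothetical):

  pvcreate /dev/md0            # label the new array as a physical volume
  vgextend vg0 /dev/md0        # add it to the existing VG
  pvmove /dev/sda2 /dev/md0    # move all extents off the old disk
  vgreduce vg0 /dev/sda2       # drop the old disk from the VG
  pvremove /dev/sda2           # clear its LVM label

This keeps the original VG name; vgrename can be used afterwards if a
different name is wanted.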

L.

-- 
Luca Berra -- bluca@comedia.it
         Communication Media & Services S.r.l.
  /"\
  \ /     ASCII RIBBON CAMPAIGN
   X        AGAINST HTML MAIL
  / \


* Re: [linux-lvm] Migrating LVM
  2010-04-21  4:06 [linux-lvm] Migrating LVM M.Lewis
  2010-04-21  5:42 ` Luca Berra
@ 2010-04-21 14:18 ` Ray Morris
  2010-04-21 15:36   ` malahal
  2010-04-21 16:24   ` Phillip Susi
  1 sibling, 2 replies; 8+ messages in thread
From: Ray Morris @ 2010-04-21 14:18 UTC (permalink / raw)
  To: LVM general discussion and development

    pvmove is good for when you have to keep the machine
live during the copy.  dd is about 10 times as fast if you
can be down during the copy.  This specific dd invocation
is the fastest I've found for the purpose, running 2-3 times
faster than a simple dd without arguments:

dd if=/dev/old_vg/$1 bs=64M iflag=direct | dd of=/dev/new_vg/$1 bs=64M oflag=direct
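
The $1 above stands for the logical volume name; a hypothetical wrapper
that copies several LVs in turn (the LV list and VG names are
assumptions, and the target LVs must already exist and be at least the
same size) might look like:

  #!/bin/sh
  # copy each LV from old_vg to a same-named, pre-created LV in new_vg
  for lv in root home var; do
      dd if=/dev/old_vg/$lv bs=64M iflag=direct |
          dd of=/dev/new_vg/$lv bs=64M oflag=direct
  done
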
--
Ray Morris
support@bettercgi.com

Strongbox - The next generation in site security:
http://www.bettercgi.com/strongbox/

Throttlebox - Intelligent Bandwidth Control
http://www.bettercgi.com/throttlebox/

Strongbox / Throttlebox affiliate program:
http://www.bettercgi.com/affiliates/user/register.php


On 04/20/2010 11:06:13 PM, M.Lewis wrote:
> 
> I'm building a new machine with RAID1+LVM2. I plan to migrate the  
> data from the old machine HD to the new machine.
> 
> Is it better to make the new VG the same name as the existing VG on  
> the old box? Or should I make it different.
> 
> Is there an easy way to do this that I've not yet found? I was  
> thinking to make the new VG the same name as the old, add the old  
> drives to the machine then pvmove the data.
> 
> Thanks for any pointers!
> Mike
> 
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
> 


* Re: [linux-lvm] Migrating LVM
  2010-04-21 14:18 ` Ray Morris
@ 2010-04-21 15:36   ` malahal
  2010-04-21 16:27     ` Phillip Susi
  2010-04-21 16:24   ` Phillip Susi
  1 sibling, 1 reply; 8+ messages in thread
From: malahal @ 2010-04-21 15:36 UTC (permalink / raw)
  To: linux-lvm

Ray Morris [support@bettercgi.com] wrote:
>     pvmove is good for when you have to keep the machine
> live during the copy.  dd is about 10 times as fast if you
> can be down during the copy.  This specific dd invocation
> is the fastest I've found for the purpose, running 2-3 times
> faster than a simple dd without arguments:
> 
> dd if=/dev/old_vg/$1 bs=64M iflag=direct | dd of=/dev/new_vg/$1 bs=64M oflag=direct

Interesting! You are doing direct I/O to avoid copying from the cache to a
user buffer on reads, and vice versa on writes, but you are losing the
ability to do them in parallel! You are doing the next best thing, which is
creating two "dd" processes -- one for reading and another for writing.
Since the pipe is really implemented in memory, why should this be faster
than a normal "dd" that uses the page cache? Perhaps kswapd is not kicking
in early enough?

Enhancing "dd" to create a reader and a writer thread would really
help, I believe.

Thanks, Malahal.


* Re: [linux-lvm] Migrating LVM
  2010-04-21 14:18 ` Ray Morris
  2010-04-21 15:36   ` malahal
@ 2010-04-21 16:24   ` Phillip Susi
  1 sibling, 0 replies; 8+ messages in thread
From: Phillip Susi @ 2010-04-21 16:24 UTC (permalink / raw)
  To: LVM general discussion and development; +Cc: Ray Morris

On 4/21/2010 10:18 AM, Ray Morris wrote:
>    pvmove is good for when you have to keep the machine
> live during the copy.  dd is about 10 times as fast if you
> can be down during the copy.  This specific dd invocation
> is the fastest I've found for the purpose, running 2-3 times
> faster than a simple dd without arguments:
> 
> dd if=/dev/old_vg/$1 bs=64M iflag=direct | dd of=/dev/new_vg/$1 bs=64M oflag=direct

Which arguments are best depends on many factors.  These particular ones
should be rather good if the src and dest volumes are on the same
physical disk.  They would not be very good if the volumes are on
different disks, since while dd is reading from the source, the output
disk is idle, and vice versa.  If src and dest are on different disks,
then you want to skip direct I/O and stick with a more reasonable block
size, say 64k.
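
A sketch of that variant, with hypothetical VG and LV names:

  # different source and destination disks: buffered I/O, modest block size
  dd if=/dev/old_vg/home of=/dev/new_vg/home bs=64k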

Also, if the fs has much free space in it, then dd will waste a lot of
time copying it.  You would probably save a lot of time doing something
like dump | restore.  That also does not require that the source and
destination be the same size.
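
For an ext2/ext3 filesystem, a dump | restore pass might look roughly
like this (device names, mount point, and filesystem type are
assumptions; the source LV should be unmounted or mounted read-only):

  mke2fs -j /dev/new_vg/home           # create the target filesystem
  mount /dev/new_vg/home /mnt/new_home
  dump -0 -f - /dev/old_vg/home | (cd /mnt/new_home && restore -rf -)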


* Re: [linux-lvm] Migrating LVM
  2010-04-21 15:36   ` malahal
@ 2010-04-21 16:27     ` Phillip Susi
  2010-04-21 17:00       ` Ray Morris
  0 siblings, 1 reply; 8+ messages in thread
From: Phillip Susi @ 2010-04-21 16:27 UTC (permalink / raw)
  To: LVM general discussion and development

On 4/21/2010 11:36 AM, malahal@us.ibm.com wrote:
> Interesting! You are doing direct I/O to avoid copying from cache to user
> buffer for read and vice-versa for write, but you are losing the ability
> to do them parallel! You are doing the next best, that is creating two
> "dd" threads -- one for reading and another for writing. Since the pipe
> is really implemented in memory, why should this be faster than normal
> "dd" that uses page cache? Likely that kswapd is not kicking early
> enough?

Oops, my eyes missed the pipe and the second dd when I made my previous
comments.  That is pretty good for different disks then, yes... not so
good for the same physical disk.

> Enhancing "dd" to create a reader and a writer thread would really
> help, I believe.

I actually have some old hacked-up dd code I wrote once that uses 16
concurrent aio requests with O_DIRECT.  I need to clean it up a bit, but
it showed great promise.


* Re: [linux-lvm] Migrating LVM
  2010-04-21 16:27     ` Phillip Susi
@ 2010-04-21 17:00       ` Ray Morris
  2010-04-21 23:10         ` M.Lewis
  0 siblings, 1 reply; 8+ messages in thread
From: Ray Morris @ 2010-04-21 17:00 UTC (permalink / raw)
  To: LVM general discussion and development

> Oops, my eyes missed the pipe and second dd when I made my previous
> comments.  That is pretty good for different disks then yes... not
> so good for same physical disk.

    Indeed my tests were done copying from the "old" disk
to the "new" disk, as the OP is doing, I believe.

> I actually have some old hacked up dd code I made once to use 16
> concurrent aio requests with O_DIRECT.  I need to clean it up a
> bit but it showed great promise.

    Considering how often "dd" is used for copying large amounts
of data, even a modest improvement could save many thousands of
hours of admin time.  I would like to encourage you to do any
needed cleanup and make it available, preferably integrated with
GNU dd - it could save hundreds of thousands of dollars' worth of
time.
--
Ray Morris
support@bettercgi.com

Strongbox - The next generation in site security:
http://www.bettercgi.com/strongbox/

Throttlebox - Intelligent Bandwidth Control
http://www.bettercgi.com/throttlebox/

Strongbox / Throttlebox affiliate program:
http://www.bettercgi.com/affiliates/user/register.php


On 04/21/2010 11:27:22 AM, Phillip Susi wrote:
> On 4/21/2010 11:36 AM, malahal@us.ibm.com wrote:
> > Interesting! You are doing direct I/O to avoid copying from cache to
> > user buffer for read and vice-versa for write, but you are losing the
> > ability to do them parallel! You are doing the next best, that is
> > creating two "dd" threads -- one for reading and another for writing.
> > Since the pipe is really implemented in memory, why should this be
> > faster than normal "dd" that uses page cache? Likely that kswapd is
> > not kicking early enough?
> 
> Oops, my eyes missed the pipe and second dd when I made my previous
> comments.  That is pretty good for different disks then yes... not so
> good for same physical disk.
> 
> > Enhancing "dd" to create a reader and a writer thread would really
> > help, I believe.
> 
> I actually have some old hacked up dd code I made once to use 16
> concurrent aio requests with O_DIRECT.  I need to clean it up a bit but
> it showed great promise.
> 
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
> 
> 


* Re: [linux-lvm] Migrating LVM
  2010-04-21 17:00       ` Ray Morris
@ 2010-04-21 23:10         ` M.Lewis
  0 siblings, 0 replies; 8+ messages in thread
From: M.Lewis @ 2010-04-21 23:10 UTC (permalink / raw)
  To: LVM general discussion and development

Ray Morris wrote:
>> Oops, my eyes missed the pipe and second dd when I made my previous
>> comments.  That is pretty good for different disks then yes... not
>> so good for same physical disk.
> 
>    Indeed my tests were done copying from the "old" disk
> to the "new" disk, as the OP is doing, I believe.
> 
>> I actually have some old hacked up dd code I made once to use 16
>> concurrent aio requests with O_DIRECT.  I need to clean it up a
>> bit but it showed great promise.
> 
>    Considering how often "dd" is used for copying large amounts
> of data, even a modest improvement could save many thousands of
> hours of admin time.  I would like to encourage you to do any
> needed cleanup and make it available, preferably integrated with
> GNU dd - it could save hundreds of thousands of dollars worth of
> time.
> -- 
> Ray Morris
> support@bettercgi.com

Thanks to all for the suggestions. I believe I will go the dd route. 
That seems like the best (simplest) option.

Thanks,
Mike

