linux-lvm.redhat.com archive mirror
* [linux-lvm] Re: How to handle Bad Block relocation with LVM?
@ 2003-02-14  8:52 Eric Hopper
  2003-02-14 11:27 ` Joe Thornber
  0 siblings, 1 reply; 12+ messages in thread
From: Eric Hopper @ 2003-02-14  8:52 UTC (permalink / raw)
  To: linux-lvm


I manually relocated a few bad blocks on a bad IBM drive I had when I
replaced the drive.  It took a lot of time and effort.  I had to run the
dd command many times very carefully to make it work.

One big problem for me was that read-ahead obscured which actual sectors
were in error.  I needed a 'raw' LVM device, but I don't think such a
thing exists for LVM1 on Linux 2.4.x.

What I did was use pvmove to move the PE containing the bad block to a
different spot on the hard drive, then allocate a new LV that was one
LE long, forcing it to allocate the PE containing the bad block.
Then I used dd to carefully copy over the LE in sections, narrowing down
the location of the bad sectors until I had copied everything that could
possibly be read.
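
For the archive, that dd step can be sketched roughly as follows. This is a hypothetical sketch: it demonstrates dd's conv=noerror,sync behavior on scratch files, whereas the real run would target the one-LE LVs (names like /dev/vg/bad_le would be up to you) as root.

```shell
# Sketch of salvaging whatever is readable from a bad extent with dd.
# Scratch files stand in for the one-LE source and destination LVs.
src=$(mktemp) && dst=$(mktemp)
dd if=/dev/zero of="$src" bs=512 count=128 2>/dev/null

# conv=noerror tells dd to keep going after a read error; sync pads each
# failed block with zeros so the output stays sector-aligned with the
# input and everything readable lands at the right offset.
dd if="$src" of="$dst" bs=512 conv=noerror,sync 2>/dev/null

wc -c < "$dst"   # both copies are 128 * 512 = 65536 bytes
```

Shrinking bs toward 512 around the failing region trades speed for salvaging the most sectors, which is essentially the narrowing-down Eric describes.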

After that, I ran fsck on the filesystem that had originally contained
the bad block, and I was fine.  I checked carefully, and it didn't even
seem that I had lost any data.

It was a long, time-consuming process, though.

Actually, it may have been even ickier than I first thought.

It could be that pvmove wouldn't work, and I had to shorten the LV
containing the bad block (the BLV) to contain all PEs prior to the bad
one, allocate a new LV (the NLV) containing the bad PE, lengthen the
BLV by one PE using a brand-new PE, then lengthen it to its original
length so it would contain all the PEs after the bad PE, then do the
procedure I outlined above.

Now that I think of it, I'm nearly positive that pvmove didn't work.  I
had dearly wished for some kind of option to pvmove that would force it
to try as hard as it could to get good reads of all the sectors in a PE,
then move the LE to a new PE, even if there were errors.

Have fun (if at all possible),
-- 
Eric Hopper <hopper@omnifarious.org>
Omnifarious Software


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [linux-lvm] Re: How to handle Bad Block relocation with LVM?
  2003-02-14  8:52 [linux-lvm] Re: How to handle Bad Block relocation with LVM? Eric Hopper
@ 2003-02-14 11:27 ` Joe Thornber
  2012-11-28 13:27   ` [linux-lvm] " Brian Murrell
  0 siblings, 1 reply; 12+ messages in thread
From: Joe Thornber @ 2003-02-14 11:27 UTC (permalink / raw)
  To: linux-lvm

Eric,

We would like to automate the process that you have described in LVM2
at some point.  So if you get an error on an LV, a new PE will be
allocated, as much data as possible copied from the bad PE to the new
PE, and then the LV remapped so that it's using the new PE (very much
like a small pvmove).

The EVMS team are writing a bad block relocator target for device
mapper, but I don't feel it's necessary to add yet another device
layer to the LVs.  If I have a bad block I don't mind losing a whole
PE (people may not agree with me on this?)

- Joe


* Re: [linux-lvm] How to handle Bad Block relocation with LVM?
  2003-02-14 11:27 ` Joe Thornber
@ 2012-11-28 13:27   ` Brian Murrell
  2012-11-28 13:57     ` Zdenek Kabelac
  0 siblings, 1 reply; 12+ messages in thread
From: Brian Murrell @ 2012-11-28 13:27 UTC (permalink / raw)
  To: linux-lvm

Joe Thornber <joe <at> fib011235813.fsnet.co.uk> writes:
> 
> Eric,
> 
> We would like to automate the process that you have described in LVM2
> at some point.  So if you get an error on an LV, a new PE will be
> allocated, as much data as possible copied from the bad PE to the new
> PE, and then the LV remapped so that it's using the new PE (very much
> like a small pvmove).
> 
> The EVMS team are writing a bad block relocator target for device
> mapper, but I don't feel it's necessary to add yet another device
> layer to the LVs.  If I have a bad block I don't mind losing a whole
> PE (people may not agree with me on this?)

To resurrect a really, really old thread: did anything ever get done in
LVM2 to either automatically or manually map out PEs with bad blocks in
them?

Does anyone have a recipe for doing this -- to save me the time of figuring it 
all out for myself?

Cheers,
b.


* Re: [linux-lvm] How to handle Bad Block relocation with LVM?
  2012-11-28 13:27   ` [linux-lvm] " Brian Murrell
@ 2012-11-28 13:57     ` Zdenek Kabelac
  2012-11-29 12:26       ` Brian J. Murrell
  2012-11-29 15:55       ` Stuart D Gathman
  0 siblings, 2 replies; 12+ messages in thread
From: Zdenek Kabelac @ 2012-11-28 13:57 UTC (permalink / raw)
  To: LVM general discussion and development; +Cc: Brian Murrell

On 28.11.2012 14:27, Brian Murrell wrote:
> Joe Thornber <joe <at> fib011235813.fsnet.co.uk> writes:
>>
>> Eric,
>>
>> We would like to automate the process that you have described in LVM2
>> at some point.  So if you get an error on an LV, a new PE will be
>> allocated, as much data as possible copied from the bad PE to the new
>> PE, and then the LV remapped so that it's using the new PE (very much
>> like a small pvmove).
>>
>> The EVMS team are writing a bad block relocator target for device
>> mapper, but I don't feel it's necessary to add yet another device
>> layer to the LVs.  If I have a bad block I don't mind losing a whole
>> PE (people may not agree with me on this?)
>
> To resurrect a really, really, old thread, did anything ever get done in LVM2
> to either automatically or manually map out PEs with bad blocks in them?
>
> Does anyone have a recipe for doing this -- to save me the time of figuring it
> all out for myself?


Sorry, no automated tool.

You could possibly pvmove individual PEs manually with a set of pvmove
commands.  But I'd strongly recommend getting rid of such a broken drive
quickly, before you lose any more data - IMHO that's the most efficient
solution in cost & time.

Zdenek


* Re: [linux-lvm] How to handle Bad Block relocation with LVM?
  2012-11-28 13:57     ` Zdenek Kabelac
@ 2012-11-29 12:26       ` Brian J. Murrell
  2012-11-29 14:04         ` Lars Ellenberg
  2012-11-29 15:55       ` Stuart D Gathman
  1 sibling, 1 reply; 12+ messages in thread
From: Brian J. Murrell @ 2012-11-29 12:26 UTC (permalink / raw)
  To: linux-lvm


On 12-11-28 08:57 AM, Zdenek Kabelac wrote:
> 
> Sorry, no automated tool.

Pity,

> You could possibly pvmove individual PEs manually with a set of pvmove
> commands.

So, is the basic premise to just find the PE that is sitting on a bad
block and just pvmove it into an LV created just for the purpose of
holding PEs that are on bad blocks?

So what happens when I pvmove a PE out of an LV?  I take it LVM moves
the data on the PE being pvmoved (or at least tries to, in this case)
onto another PE before remapping it?

Oh, but wait.  pvmove (typically) moves PEs between physical volumes.
Can it be used to remap PEs like this?

> But I'd strongly recommend getting rid of such a broken drive quickly,
> before you lose any more data - IMHO that's the most efficient solution
> in cost & time.

I'm not overly worried.  There is no critical data on that disk/machine
and it's fully backed up twice a day.  I do appreciate your concern and
warning though.

Cheers,
b.





* Re: [linux-lvm] How to handle Bad Block relocation with LVM?
  2012-11-29 12:26       ` Brian J. Murrell
@ 2012-11-29 14:04         ` Lars Ellenberg
  2012-11-29 22:53           ` Brian J. Murrell
  0 siblings, 1 reply; 12+ messages in thread
From: Lars Ellenberg @ 2012-11-29 14:04 UTC (permalink / raw)
  To: linux-lvm

On Thu, Nov 29, 2012 at 07:26:24AM -0500, Brian J. Murrell wrote:
> On 12-11-28 08:57 AM, Zdenek Kabelac wrote:
> > 
> > Sorry, no automated tool.
> 
> Pity,
> 
> > You could possibly pvmove individual PEs manually with a set of pvmove
> > commands.
> 
> So, is the basic premise to just find the PE that is sitting on a bad
> block and just pvmove it into an LV created just for the purpose of
> holding PEs that are on bad blocks?
> 
> So what happens when I pvmove a PE out of an LV?  I take it LVM moves
> the data (or at least tries in this case) on the PE being pvmoved onto
> another PE before moving it?
> 
> Oh, but wait.  pvmove (typically) moves PEs between physical volumes.
> Can it be used to remap PEs like this?

So what do you know?
You either know that physical sector P on some physical disk is broken.
Or you know that logical sector L in some logical volume is broken.

If you do
pvs --unit s --segment -o vg_name,lv_name,seg_start,seg_size,seg_start_pe,pe_start,seg_pe_ranges

That should give you all you need to transform them into each other,
and to transform the sector number to PE number.
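
The transform itself is one line of integer arithmetic. A minimal sketch with made-up numbers (pe_start and pe_size here are illustrative example values, not read from a real PV; on a real system you would take them from the pvs output):

```shell
# Map a bad sector number on a PV to the physical extent holding it.
# pe_start: first data sector of the PV (pvs -o+pe_start);
# pe_size: extent size in 512-byte sectors (4 MiB here). Example values.
bad_sector=123456
pe_start=2048
pe_size=8192

pe=$(( (bad_sector - pe_start) / pe_size ))
echo "sector $bad_sector is in PE $pe"   # prints: sector 123456 is in PE 14
# that PE number is what you would then hand to: pvmove /dev/broken:$pe ...
```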

Having the PE number, you can easily do
pvmove /dev/broken:PE /dev/somewhere-else

Or, with alloc anywhere, even elsewhere on the same broken disk.
# If you don't have another PV available,
# but there are free "healthy" extents on the same PV:
# pvmove --alloc anywhere /dev/broken:PE /dev/broken
Which would likely not be the smartest idea ;-)

You should then create one LV named e.g. "BAD_BLOCKS",
which you would create/extend to cover that bad PE,
so that it won't be re-allocated again later:
lvextend VG/BAD_BLOCKS -l +1 /dev/broken:PE

Better yet, pvchange -x n /dev/broken,
so it won't be used for new LVs anymore,
and pvmove /dev/broken completely to somewhere else.

So much for the theory of how I would try to do this,
in case I would do this at all.
Which I probably wouldn't, if I had another PV available.

	;-)

I'm unsure how pvmove will handle IO errors, though.

> > But I'd strongly recommend getting rid of such a broken drive quickly,
> > before you lose any more data - IMHO that's the most efficient
> > solution in cost & time.

Right.

	Lars


* Re: [linux-lvm] How to handle Bad Block relocation with LVM?
  2012-11-28 13:57     ` Zdenek Kabelac
  2012-11-29 12:26       ` Brian J. Murrell
@ 2012-11-29 15:55       ` Stuart D Gathman
  2012-11-30  0:10         ` Brian J. Murrell
  1 sibling, 1 reply; 12+ messages in thread
From: Stuart D Gathman @ 2012-11-29 15:55 UTC (permalink / raw)
  To: linux-lvm


Long ago, Nostradamus foresaw that on 11/28/2012 08:57 AM, Zdenek
Kabelac would write:
>
>> To resurrect a really, really, old thread, did anything ever get done
>> in LVM2
>> to either automatically or manually map out PEs with bad blocks in them?
>>
>> Does anyone have a recipe for doing this -- to save me the time of
>> figuring it
>> all out for myself?
>
>
> Sorry, no automated tool.
>
> You could possibly pvmove individual PEs manually with a set of pvmove
> commands.
> But I'd strongly recommend getting rid of such a broken drive quickly,
> before you lose any more data - IMHO that's the most efficient solution
> in cost & time.
There are many situations where you are out with a laptop in a region
where there is no Fry's to purchase a new drive.  Often, such regions
additionally impose stiff duties, many times the price, if you try to
order said drive online.  That's assuming the internet is usable today
in said region.

So "replace the drive" might be the best policy ideally, but it is often
impossible for travellers in tech-primitive regions.

Having helped people in such situations (where internet at least was
working), I've used the attached script to help find affected LVs and files.

[-- Attachment #2: lbatofile.py --]
[-- Type: text/x-python, Size: 5652 bytes --]

#!/usr/bin/python
# Identify partition, LV, file containing a sector 

# Copyright (C) 2010,2012 Stuart D. Gathman
# Shared under GNU Public License v2 or later
#   This program is free software; you can redistribute it and/or modify
#   it under the terms of the GNU General Public License as published by
#   the Free Software Foundation; either version 2 of the License, or
#   (at your option) any later version.

#   This program is distributed in the hope that it will be useful,
#   but WITHOUT ANY WARRANTY; without even the implied warranty of
#   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#   GNU General Public License for more details.

#   You should have received a copy of the GNU General Public License along
#   with this program; if not, write to the Free Software Foundation, Inc.,
#   51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import sys
from subprocess import Popen,PIPE

ID_LVM = 0x8e
ID_LINUX = 0x83
ID_EXT = 0x05
ID_RAID = 0xfd

def idtoname(id):
  if id == ID_LVM: return "Linux LVM"
  if id == ID_LINUX: return "Linux Filesystem"
  if id == ID_EXT: return "Extended Partition"
  if id == ID_RAID: return "Software RAID"
  return hex(id)

class Segment(object):
  __slots__ = ('pe1st','pelst','lvpath','le1st','lelst')
  def __init__(self,pe1st,pelst):
    self.pe1st = pe1st;
    self.pelst = pelst;
  def __str__(self):
    return "Seg:%d-%d:%s:%d-%d" % (
      self.pe1st,self.pelst,self.lvpath,self.le1st,self.lelst)

def cmdoutput(cmd):
  p = Popen(cmd, shell=True, stdout=PIPE)
  try:
    for ln in p.stdout:
      yield ln
  finally:
    p.stdout.close()
    p.wait()

def icheck(fs,blk):
  "Return inum from block number, or 0 if free space."
  for ln in cmdoutput("debugfs -R 'icheck %d' '%s' 2>/dev/null"%(blk,fs)):
    b,i = ln.strip().split(None,1)
    if not b[0].isdigit(): continue
    if int(b) == blk:
      if i.startswith('<'):
        return 0
      return int(i)
  raise ValueError('%s: invalid block: %d'%(fs,blk))

def ncheck(fs,inum):
  "Return filename from inode number, or None if not linked."
  for ln in cmdoutput("debugfs -R 'ncheck %d' '%s' 2>/dev/null"%(inum,fs)):
    i,n = ln.strip().split(None,1)
    if not i[0].isdigit(): continue
    if int(i) == inum:
      return n
  return None

def blkid(fs):
  "Return dictionary of block device attributes"
  d = {}
  for ln in cmdoutput("blkid -o export '%s'"%fs):
    k,v = ln.strip().split('=',1)
    d[k] = v
  return d

def getpvmap(pv):
  pe_start = 192 * 2
  pe_size = None
  seg = None
  segs = []
  for ln in cmdoutput("pvdisplay --units k -m %s"%pv):
    a = ln.strip().split()
    if not a: continue
    if a[0] == 'Physical' and a[4].endswith(':'):
      pe1st = int(a[2])
      pelst = int(a[4][:-1])
      seg = Segment(pe1st,pelst)
    elif seg and a[0] == 'Logical':
      if a[1] == 'volume':
        seg.lvpath = a[2]
      elif a[1] == 'extents':
        seg.le1st = int(a[2])
        seg.lelst = int(a[4])
        segs.append(seg)
    elif a[0] == 'PE' and a[1] == 'Size':
      if a[2] == "(KByte)":
        pe_size = int(a[3]) * 2
      elif a[3] == 'KiB':
        pe_size = int(float(a[2])) * 2
  if segs:
    for ln in cmdoutput("pvs --units k -o+pe_start %s"%pv):
      a = ln.split()
      if a[0] == pv:
        lst = a[-1]
        if lst.lower().endswith('k'):
          pe_start = int(float(lst[:-1]))*2
          return pe_start,pe_size,segs
  return None

def findlv(pv,sect):
  res = getpvmap(pv)
  if not res: return None
  pe_start,pe_size,m = res
  if sect < pe_start:
    raise Exception("Bad sector in PV metadata area")
  pe = int((sect - pe_start)/pe_size)
  pebeg = pe * pe_size + pe_start
  peoff = sect - pebeg
  for s in m:
    if s.pe1st <= pe <= s.pelst:
      le = s.le1st + pe - s.pe1st
      return s.lvpath,le * pe_size + peoff

def getmdmap():
  with open('/proc/mdstat','rt') as fp:
    m = []
    for ln in fp:
      if ln.startswith('md'):
        a = ln.split(':')
        raid = a[0].strip()
        devs = []
        a = a[1].split()
        for d in a[2:]:
          devs.append(d.split('[')[0])
        m.append((raid,devs))
    return m

def parse_sfdisk(s):
  for ln in s:
    try:
      part,desc = ln.split(':')
      if part.startswith('/dev/'):
        d = {}
        for p in desc.split(','):
          name,val = p.split('=')
          name = name.strip()
          if name.lower() == 'id':
            d[name] = int(val,16)
          else:
            d[name] = int(val)
        yield part.strip(),d
    except ValueError:
      continue

def findpart(wd,lba):
  s = cmdoutput("sfdisk -d %s"%wd)
  parts = [ (part,d['start'],d['size'],d['Id']) for part,d in parse_sfdisk(s) ]
  for part,start,sz,Id in parts:
    if Id == ID_EXT: continue
    if start <= lba < start + sz:
      return part,lba - start,Id
  return None

if __name__ == '__main__':
  wd = sys.argv[1]
  lba = int(sys.argv[2])
  print wd,lba,"Whole Disk"
  res = findpart(wd,lba)
  if not res:
    print "LBA is outside any partition"
    sys.exit(1)
  part,sect,Id = res
  print part,sect,idtoname(Id)
  if Id == ID_LVM:
    bd,sect = findlv(part,sect)
    # FIXME: problems if LV is snapshot
  elif Id == ID_LINUX:
    bd = part
  else:
    if Id == ID_RAID:
      for md,devs in getmdmap():
        for dev in devs:
          if part == "/dev/"+dev:
            part = "/dev/"+md
            break
        else: continue
        break
    res = findlv(part,sect)
    if res:
      print "PV =",part
      bd,sect = res
    else:
      bd = part
  blksiz = 4096
  blk = int(sect * 512 / blksiz)
  p = blkid(bd)
  try:
    t = p['TYPE']
  except:
    print bd,p
    raise
  print "fs=%s block=%d %s"%(bd,blk,t)
  if t.startswith('ext'):
    inum = icheck(bd,blk)
    if inum:
      fn = ncheck(bd,inum)
      print "file=%s inum=%d"%(fn,inum)
    else:
      print "<free space>"


* Re: [linux-lvm] How to handle Bad Block relocation with LVM?
  2012-11-29 14:04         ` Lars Ellenberg
@ 2012-11-29 22:53           ` Brian J. Murrell
  0 siblings, 0 replies; 12+ messages in thread
From: Brian J. Murrell @ 2012-11-29 22:53 UTC (permalink / raw)
  To: linux-lvm


On 12-11-29 09:04 AM, Lars Ellenberg wrote:
> 
> If you do
> pvs --unit s --segment -o vg_name,lv_name,seg_start,seg_size,seg_start_pe,pe_start,seg_pe_ranges

Right.  Let's assume I can find the PE.

> Having the PE number, you can easily do
> pvmove /dev/broken:PE /dev/somewhere-else

Right but that...


> Or with alloc anywhere even elsewhere on the same broken disk.

As well as that, it just puts a new PE in where the one with the damaged
block was, but returns the PE with the damaged block to the free list to
be allocated again at some point in the future, yes?

> # If you don't have an other PV available,
> # but there are free "healthy" extents on the same PV:
> # pvmove --alloc anywhere /dev/broken:PE /dev/broken
> Which would likely not be the smartest idea ;-)

Right because of the above, yes?  Or is there something else nasty about it?

> You should then create one LV named e.g. "BAD_BLOCKS",
> which you would create/extend to cover that bad PE,
> so that won't be re-allocated again later:
> lvextend VG/BAD_BLOCKS -l +1 /dev/broken:PE

Ahhh.  But in this case I want lvcreate -n badblocks -l 1 VG
/dev/broken:PE since I don't yet have my "badblocks" LV.  I would of
course use lvextend next time.  :-)

> Better yet, pvchange -x n /dev/broken,
> so it won't be used for new LVs anymore,
> and pvmove /dev/broken completely to somewhere else.

Yeah, of course ideally.  But as I mentioned, I'm not terribly worried
about loss in this case.

> I'm unsure how pvmove will handle IO errors, though.

I thought I read somewhere about pvmove being persistent through IO
errors but I can't seem to find it any more.  I guess we'll see.  :-)

It seems the pvmove just powered through.  Sweet.

I confirmed, using Stuart Gathman's (very nifty!) lbatofile.py program,
that the file that was producing a read error before the pvmove reads
with no error after it, and now I have my bad PE in a "badblocks" LV.

Super sweet!

Cheers,
b.





* Re: [linux-lvm] How to handle Bad Block relocation with LVM?
  2012-11-29 15:55       ` Stuart D Gathman
@ 2012-11-30  0:10         ` Brian J. Murrell
  0 siblings, 0 replies; 12+ messages in thread
From: Brian J. Murrell @ 2012-11-30  0:10 UTC (permalink / raw)
  To: linux-lvm


On 12-11-29 10:55 AM, Stuart D Gathman wrote:
> 
> Having helped people in such situations (where internet at least was
> working), I've used the attached script to help find affected LVs and files.

Awesome script!

Much thanks!

b.






* Re: [linux-lvm] How to handle Bad Block relocation with LVM?
@ 2023-03-15 16:00 Roland
  0 siblings, 0 replies; 12+ messages in thread
From: Roland @ 2023-03-15 16:00 UTC (permalink / raw)
  To: linux-lvm

hello,

quite old thread - but damn interesting, though :)

 > Having the PE number, you can easily do
 > pvmove /dev/broken:PE /dev/somewhere-else

does somebody know if it's possible to easily remap a PE with standard
lvm tools, instead of pvmoving it?

trying to move data off from defective sectors can take a very long
time, especially when multiple sectors are affected and the disks are
desktop drives.

let's think of some "re-partitioning" tool which sets up lvm on top of a
disk with bad sectors and which scans/skips & remaps megabyte-sized PE's
to some spare area before the disk is being used.  badblock remapping at
the os level instead of at the disk controller level.

yes, most of you will tell me it's a bad idea, but i have a cabinet full
of disks with bad sectors and i'd really be curious how well and for how
long a zfs raidz would work on top of such a "badblocks lvm".  at least,
i'd like to experiment with that.  let's call it an academical project
for learning purposes and for demonstrating lvm's strength :D

such "remapping" could look like this:

# pvs --segments -ovg_name,lv_name,seg_start_pe,seg_size_pe,pvseg_start 
-O pvseg_start -S vg_name=VGloop0
   VG      LV               Start SSize Start
   VGloop0 blocks_good      0     4     0
   VGloop0 blocks_bad       1     1     4
   VGloop0 blocks_good      5   195     5
   VGloop0 blocks_bad       2     1   200
   VGloop0 blocks_good    201   699   201
   VGloop0 blocks_spare     0   120   900
   VGloop0 blocks_good    200     1  1020
   VGloop0 blocks_good      4     1  1021
   VGloop0 blocks_bad       0     1  1022


blocks_good is the LV with healthy PE's, blocks_bad is the LV with bad
PE's, and blocks_spare is the LV you take healthy PE's from as
replacements for bad PE's found in the blocks_good LV
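
the scan step of such a tool could start from a badblocks-style sector list and reduce it to the set of PEs to fence off. a rough sketch under made-up geometry (the sector numbers, pe_start, and pe_size are invented for illustration, and the lvextend line in the comment is only the shape such a command would take):

```shell
# Reduce a list of bad sectors (as badblocks -b 512 might report) to the
# set of PEs a blocks_bad LV would need to cover. Illustrative values only.
pe_start=2048     # first data sector of the PV
pe_size=8192      # extent size in 512-byte sectors (4 MiB)
bad_sectors="10240 10241 90000 500000"

bad_pes=$(for s in $bad_sectors; do
  echo $(( (s - pe_start) / pe_size ))
done | sort -nu)

echo "PEs to fence:" $bad_pes   # prints: PEs to fence: 1 10 60
# each PE could then be pinned with something like:
#   lvextend VG/blocks_bad -l +1 /dev/disk:$pe
```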

roland
sysadmin


 > [linux-lvm] How to handle Bad Block relocation with LVM?
 > Lars Ellenberg lars.ellenberg at linbit.com
 > Thu Nov 29 14:04:01 UTC 2012
 >
 > On Thu, Nov 29, 2012 at 07:26:24AM -0500, Brian J. Murrell wrote:
 > > On 12-11-28 08:57 AM, Zdenek Kabelac wrote:
 > > >
 > > > Sorry, no automated tool.
 > >
 > > Pity,
 > >
 > > > You could possibly pvmove individual PEs manually with a set of
 > > > pvmove commands.
 > >
 > > So, is the basic premise to just find the PE that is sitting on a bad
 > > block and just pvmove it into an LV created just for the purpose of
 > > holding PEs that are on bad blocks?
 > >
 > > So what happens when I pvmove a PE out of an LV?  I take it LVM moves
 > > the data (or at least tries in this case) on the PE being pvmoved onto
 > > another PE before moving it?
 > >
 > > Oh, but wait.  pvmove (typically) moves PEs between physical volumes.
 > > Can it be used to remap PEs like this?
 >
 > So what do you know?
 > You either know that physical sector P on some physical disk is broken.
 > Or you know that logical sector L in some logical volume is broken.
 >
 > If you do
 > pvs --unit s --segment -o vg_name,lv_name,seg_start,seg_size,seg_start_pe,pe_start,seg_pe_ranges
 >
 > That should give you all you need to transform them into each other,
 > and to transform the sector number to PE number.
 >
 > Having the PE number, you can easily do
 > pvmove /dev/broken:PE /dev/somewhere-else
 >
 > Or with alloc anywhere even elsewhere on the same broken disk.
 > # If you don't have an other PV available,
 > # but there are free "healthy" extents on the same PV:
 > # pvmove --alloc anywhere /dev/broken:PE /dev/broken
 > Which would likely not be the smartest idea ;-)
 >
 > You should then create one LV named e.g. "BAD_BLOCKS",
 > which you would create/extend to cover that bad PE,
 > so that won't be re-allocated again later:
 > lvextend VG/BAD_BLOCKS -l +1 /dev/broken:PE
 >
 > Better yet, pvchange -x n /dev/broken,
 > so it won't be used for new LVs anymore,
 > and pvmove /dev/broken completely to somewhere else.
 >
 > So much for the theory of how I would try to do this,
 > in case I would do this at all.
 > Which I probably wouldn't, if I had another PV available.
 >
 >     ;-)
 >
 > I'm unsure how pvmove will handle IO errors, though.
 >
 > > > But I'd strongly recommend getting rid of such a broken drive
 > > > quickly, before you lose any more data - IMHO that's the most
 > > > efficient solution in cost & time.
 >
 > Right.
 >
 >     Lars



_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


* Re: [linux-lvm] How to handle Bad Block relocation with LVM?
  2003-02-10 19:18 Rocky Lee
@ 2003-02-11  8:24 ` Heinz J . Mauelshagen
  0 siblings, 0 replies; 12+ messages in thread
From: Heinz J . Mauelshagen @ 2003-02-11  8:24 UTC (permalink / raw)
  To: linux-lvm

On Tue, Feb 11, 2003 at 09:17:16AM +0800, Rocky Lee wrote:
> 
> 
> Hi all
> 
> I heard LVM can't handle BBR, is that true?

No, not at the moment.  IBM seems to be working on a BBR target for
device-mapper though.

> 
> If so,
> it may be a serious problem for me.

Is drive-level bbr not sufficient for you?

> is there a trick to handle BBR with LVM + MD?

No.

> 
> Thank you if anyone can help answer these questions.
> 
> Rocky Lee
> 
> 
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@sistina.com
> http://lists.sistina.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

-- 

Regards,
Heinz    -- The LVM Guy --

*** Software bugs are stupid.
    Nevertheless it needs not so stupid people to solve them ***

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

Heinz Mauelshagen                                 Sistina Software Inc.
Senior Consultant/Developer                       Am Sonnenhang 11
                                                  56242 Marienrachdorf
                                                  Germany
Mauelshagen@Sistina.com                           +49 2626 141200
                                                       FAX 924446
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-


* [linux-lvm] How to handle Bad Block relocation with LVM?
@ 2003-02-10 19:18 Rocky Lee
  2003-02-11  8:24 ` Heinz J . Mauelshagen
  0 siblings, 1 reply; 12+ messages in thread
From: Rocky Lee @ 2003-02-10 19:18 UTC (permalink / raw)
  To: linux-lvm


Hi all

I heard LVM can't handle BBR, is that true?

If so,
it may be a serious problem for me.
is there a trick to handle BBR with LVM + MD?

Thank you if anyone can help answer these questions.

Rocky Lee


end of thread, other threads:[~2023-03-15 16:01 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2003-02-14  8:52 [linux-lvm] Re: How to handle Bad Block relocation with LVM? Eric Hopper
2003-02-14 11:27 ` Joe Thornber
2012-11-28 13:27   ` [linux-lvm] " Brian Murrell
2012-11-28 13:57     ` Zdenek Kabelac
2012-11-29 12:26       ` Brian J. Murrell
2012-11-29 14:04         ` Lars Ellenberg
2012-11-29 22:53           ` Brian J. Murrell
2012-11-29 15:55       ` Stuart D Gathman
2012-11-30  0:10         ` Brian J. Murrell
  -- strict thread matches above, loose matches on Subject: below --
2023-03-15 16:00 Roland
2003-02-10 19:18 Rocky Lee
2003-02-11  8:24 ` Heinz J . Mauelshagen
