linux-lvm.redhat.com archive mirror
* Re: [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected
       [not found] <1043528017.520337.1681071486811.ref@mail.yahoo.com>
@ 2023-04-09 20:18 ` matthew patton
  2023-04-11  7:14   ` Roland
  2023-04-11 17:05   ` Roger Heflin
  0 siblings, 2 replies; 16+ messages in thread
From: matthew patton @ 2023-04-09 20:18 UTC (permalink / raw)
  To: LVM general discussion and development



> my plan is to scan a disk for usable sectors and map the logical volume
> around the broken sectors.
1977 called, they'd like their non-self-correcting HD controller implementations back.
From a real-world perspective there is ZERO (more like negative) utility to this exercise. Controllers remap blocks all on their own and the so-called geometry is entirely fictitious anyway. From a script/program "because I want to" perspective you could leave LVM entirely out of it and just use a file with arbitrary offsets scribbled with a "bad" signature.



* Re: [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected
  2023-04-09 20:18 ` [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected matthew patton
@ 2023-04-11  7:14   ` Roland
  2023-04-12  9:24     ` Roberto Fastec
  2023-04-12  9:28     ` Roberto Fastec
  2023-04-11 17:05   ` Roger Heflin
  1 sibling, 2 replies; 16+ messages in thread
From: Roland @ 2023-04-11  7:14 UTC (permalink / raw)
  To: LVM general discussion and development, matthew patton



> Controllers remap blocks all on their own and the so-called geometry
> is entirely fictitious anyway

so tell me then, why do I have a shelf full of dead disks where half of
them are out of business for nothing but a couple of bad sectors ?

I don't see the point of hardware capable of storing terabytes of data
being put in the trash because some <0.01% of its sectors is defective
for this or that reason.  it's that "the vendor tells you it's dead now
- so please better buy a new one" paradigm, which seems to rule
everywhere today.

I dislike this attitude.

if you had a self-healing diving suit which quits healing itself after
the 5th small hole, would you throw it away after the 5th hole - or
would you put a patch on it? the same goes for bicycle inner tubes. there
were times when you put patches on them because new ones were
expensive. nowadays, everybody throws them away and buys a new one.

so, if some drive controller isn't able to fix your 20 broken sectors -
I'd like to fix it myself. and I'd like to try the LVM approach, because
I think it's a sensible way of putting an abstraction layer between
your filesystem and your rotating disks.

and even if it's dumb to do, or something that will not succeed,
it's at least worth a try to show whether it works or why it can't
work - and if it doesn't work, there is at least something to learn
about lvm or dead disks.

roland


On 09.04.23 at 22:18, matthew patton wrote:
> > my plan is to scan a disk for usable sectors and map the logical volume
> > around the broken sectors.
>
> 1977 called, they'd like their non-self-correcting HD controller
> implementations back.
>
> From a real-world perspective there is ZERO (more like negative)
> utility to this exercise. Controllers remap blocks all on their own
> and the so-called geometry is entirely fictitious anyway. From a
> script/program "because I want to" perspective you could leave LVM
> entirely out of it and just use a file with arbitrary offsets
> scribbled with a "bad" signature.



* Re: [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected
  2023-04-09 20:18 ` [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected matthew patton
  2023-04-11  7:14   ` Roland
@ 2023-04-11 17:05   ` Roger Heflin
  1 sibling, 0 replies; 16+ messages in thread
From: Roger Heflin @ 2023-04-11 17:05 UTC (permalink / raw)
  To: LVM general discussion and development

On Tue, Apr 11, 2023 at 1:44 AM matthew patton <pattonme@yahoo.com> wrote:
>
> > my plan is to scan a disk for usable sectors and map the logical volume
> > around the broken sectors.
>
> 1977 called, they'd like their non-self-correcting HD controller implementations back.
>
> From a real-world perspective there is ZERO (more like negative) utility to this exercise. Controllers remap blocks all on their own and the so-called geometry is entirely fictitious anyway. From a script/program "because I want to" perspective you could leave LVM entirely out of it and just use a file with arbitrary offsets scribbled with a "bad" signature.


The disks should be able to remap sectors all on their own.  Few
(none?) of the sata/sas non-raid controllers I know of do any disk
level remapping.  Some of the hardware raid ones may.   As implemented,
the decision to remap (in the disk) seems to not always work
correctly.  I have a number of disks over several generations that
will refuse to re-map what is a clearly bad sector (re-writes to a
given sector succeed, then immediate re-reads fail; another
re-write succeeds again and immediately fails on re-read, but the
sector does not get remapped).

So given this, if one does not want to keep replacing disks, there
is still room for software-level remaps to make use of the significant
number of disks with only a limited set of bad sectors on them.


* Re: [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected
  2023-04-11  7:14   ` Roland
@ 2023-04-12  9:24     ` Roberto Fastec
  2023-04-12  9:28     ` Roberto Fastec
  1 sibling, 0 replies; 16+ messages in thread
From: Roberto Fastec @ 2023-04-12  9:24 UTC (permalink / raw)
  To: LVM general discussion and development



If you are in trouble with the disk set because of the bad health of the drives:

a data recovery lab in Verona, Italy helped me, with a really convenient fee per disk.

It is the best-reputed (checked and compared against Google reviews) data recovery lab in Italy.

If you want the reference, just drop me an email.

kind regards

R.

On 12 Apr 2023, at 08:39, Roland <devzero@web.de> wrote:
> [...]


* Re: [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected
  2023-04-11  7:14   ` Roland
  2023-04-12  9:24     ` Roberto Fastec
@ 2023-04-12  9:28     ` Roberto Fastec
  1 sibling, 0 replies; 16+ messages in thread
From: Roberto Fastec @ 2023-04-12  9:28 UTC (permalink / raw)
  To: LVM general discussion and development



P.S. Needless to say, they can usually reassemble a damaged LVM, as long as the LVM metadata tables are still good enough.

On 12 Apr 2023, at 08:39, Roland <devzero@web.de> wrote:
> [...]


* Re: [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected
  2023-04-12 12:37       ` Roland
  2023-04-12 13:16         ` Zdenek Kabelac
@ 2023-04-12 13:53         ` Roberto Fastec
  1 sibling, 0 replies; 16+ messages in thread
From: Roberto Fastec @ 2023-04-12 13:53 UTC (permalink / raw)
  To: LVM general discussion and development; +Cc: Roger Heflin, Zdenek Kabelac



"but shouldn't we perhaps leave it up to the end user / owner of the
hardware,  to decide when it's ready for the recycle bin ?"

with hard drives (forget SSDs, they are hardware accelerators absolutely unaffordable for data storing) it is not the user/owner that decide it , unless he doesn't want to suicide his data

is the SMART system that tells you that one first reallocation happen

Sectors reallocation are not allowed

Those few 100 or 250 sectors are just for the SMART system

As soon just one reallocation happen, it is time to waste the drive

Like with car tyres, when you reach the marker , it is time to waste them, but if you run too few kilometers per year, after few years the gum got dried and you create a risk for yourself and the others if you don waste them

This comparison applies to hard drives

If the weekly SMART test tells you that a drive is in pre-failure (just reallocations happen, few mean 3 - 4 - 5) you have been warned:

it is time to waste "the tyre"

You don't do that, you was warned



On 12 Apr 2023, at 14:37, Roland <devzero@web.de> wrote:
> [...]


* Re: [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected
  2023-04-12 12:37       ` Roland
@ 2023-04-12 13:16         ` Zdenek Kabelac
  2023-04-12 13:53         ` Roberto Fastec
  1 sibling, 0 replies; 16+ messages in thread
From: Zdenek Kabelac @ 2023-04-12 13:16 UTC (permalink / raw)
  To: Roland, LVM general discussion and development, Roger Heflin

On 12. 04. 23 at 14:37, Roland wrote:
>> Really silly plan - been there years back, when drives were FAR
>> more expensive per GiB.
>> Today - just throw the bad drive in the recycle bin - it's not worth
>> doing this silliness.
> 
> ok, I understand your point of view. and thank you for the input.
> 
> but this applies to a world with endless resources, where people can
> afford the new hardware.
> 

Hi

It's really not about the 'endless' resource - it's just about 'practical' 
thinking.

> I think, with the same logic, you can designate some guy as silly
> if he puts a patch on his bicycle inner tube instead

To use your bicycle comparison - you likely wouldn't ride one where
the 'next' hole in the tube could appear randomly & unexpectedly
during your future rides - you would simply buy a new tube to be sure you
could get somewhere....

Bad drives are highly unpredictable - so as long as you simply don't care
about the data and you store there just something you could easily download
again from some other place - that could be the only use case I could imagine,
but I'd never put your only copy of the family album there....

> but shouldn't we perhaps leave it up to the end user / owner of the
> hardware to decide when it's ready for the recycle bin ?

Yeah - if the hardware cost more :) than the time you spend trying
to analyze and use bad drives - then there would be a whole 'recycling'
industry for these drives - thankfully we are not heading towards this ATM :)
What I can observe is that HDDs of 'small' sizes are being totally obsoleted
by SSDs/NVMes....

> or should we perhaps wait for the next hard drive supply crisis (like in
> 2011)?  then people would start to get more creative in using
> what they have, because they have no other option...

So I assume you are already preparing horses in the barn ?

> yes, I have already come to the conclusion that it's always better to
> start from scratch like this. I dismissed the idea of
> excluding or relocating bad sectors.

The key is that you know how the drive is built, how many disk platters are
affected, and also how quickly errors are spreading to surrounding sectors....

My best effort was always to leave a not-so-small amount of 'free'
space around bad disk areas - but once disks started to 'relocate' bad sectors
on their own - this all became a game that is hard to win....

>> But good advice from me - whenever 'smartctl' starts to show
>> relocation block errors - it's the right moment to 'dd_rescue'
>> any LV to your new drive...
> 
> yes, I'm totally aware that we walk on very thin ice here.
> 
> but I'd really like to collect some real-world data/information on how
> well such disk "recycling" can work.  I don't have

Maybe ask the Google people :) they are the most experienced ones at
trashing storage....

> 
> I guess if such "broken" disks are being used with zfs in a redundant setup,
> they could probably still serve a purpose. maybe not
> for production data, but probably good enough for "not so important"
> applications.
> 
> it's a little bit of an academic project, for my own fun. I like to fiddle
> with disks, lvm, zfs and that stuff....

Another idea to 'deploy' (I've even used it myself) - just mkfs.ext2 the
bad drive and write large (10MiB-100MiB) files of 'zeroes' there.

Then simply md5-checksum them - remove the 'correct' ones, and keep (marked
immutable) those where you fail to read & validate them.

With some amount of luck - the ext2 metadata will not hit the 'bad' parts of
the drive (otherwise you would really need to use parted or lvm2 to skip those
areas).

This way you end up with somewhat 'usable' storage - where the bad sectors are
hidden inside those broken 'zero' files you just keep in the filesystem.

The next time the errors spread - you reapply the same strategy.

And you don't even need lvm2 for this....
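
A minimal shell sketch of this zero-files trick (the device, mount point and
file size are assumptions, not taken from the mail):

    mkfs.ext2 /dev/sdb1
    mount /dev/sdb1 /mnt/bad

    # fill the filesystem with 100MiB files of zeroes until it is full
    i=0
    while dd if=/dev/zero of=/mnt/bad/zero.$i bs=1M count=100 2>/dev/null; do
        i=$((i+1))
    done

    # checksum of 100MiB of zeroes, computed once for comparison
    want=$(dd if=/dev/zero bs=1M count=100 2>/dev/null | md5sum | cut -d' ' -f1)

    # remove files that read back correctly; pin the broken ones in place
    # (the last, partially written file fails the compare and stays pinned too)
    for f in /mnt/bad/zero.*; do
        got=$(md5sum "$f" 2>/dev/null | cut -d' ' -f1)
        [ "$got" = "$want" ] && rm "$f" || chattr +i "$f"
    done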

This is the easiest way to keep 'some' not-fatally-broken drives in use
for a while - but just don't put anything you depend on there - as the drive
can be gone very easily at any time.....

Regards

Zdenek


* Re: [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected
  2023-04-12 10:20     ` Zdenek Kabelac
  2023-04-12 11:51       ` Roberto Fastec
@ 2023-04-12 12:37       ` Roland
  2023-04-12 13:16         ` Zdenek Kabelac
  2023-04-12 13:53         ` Roberto Fastec
  1 sibling, 2 replies; 16+ messages in thread
From: Roland @ 2023-04-12 12:37 UTC (permalink / raw)
  To: Zdenek Kabelac, LVM general discussion and development, Roger Heflin

> Really silly plan - been there years back, when drives were FAR
> more expensive per GiB.
> Today - just throw the bad drive in the recycle bin - it's not worth
> doing this silliness.

ok, I understand your point of view. and thank you for the input.

but this applies to a world with endless resources, where people can
afford the new hardware.

I think, with the same logic, you can designate some guy as silly
if he puts a patch on his bicycle inner tube instead
of buying a new one, as they are cheap. with the patch, the tube is
always worse than a new one. and he probably risks
his own health by using a tube which may already have gotten
porous...

but shouldn't we perhaps leave it up to the end user / owner of the
hardware to decide when it's ready for the recycle bin ?

or should we perhaps wait for the next hard drive supply crisis (like in
2011)?  then people would start to get more creative in using
what they have, because they have no other option...

> whenever you want to create a new arrangement for your disk with 'bad'
> areas, you can always start from 'scratch' - since after all, lvm2 ONLY
> manipulates metadata at the front of the disk -
> so if you need to create new 'holes',
> just 'pvcreate -f', vgcreate, and 'lvcreate -Zn -Wn',
> and then 'lvextend' with normal or 'lvextend --type error | --type
> zero' segment types around the bad areas, with specific sizes.
> Once you are finished and your LV precisely matches the 'previous'
> LV of your old VG - you can start to use this LV again with
> a new arrangement of 'broken zeroed/errored' areas.

yes, I have already come to the conclusion that it's always better to
start from scratch like this. I dismissed the idea of
excluding or relocating bad sectors.

> But good advice from me - whenever 'smartctl' starts to show
> relocation block errors - it's the right moment to 'dd_rescue'
> any LV to your new drive...

yes, I'm totally aware that we walk on very thin ice here.

but I'd really like to collect some real-world data/information on how
well such disk "recycling" can work.  I don't have
any pointers for this and did not find any information on how fast a
bad disk gets worse if it has irretrievable bad sectors
and SMART is reporting relocation errors. there seems to be not much
information around on this...

I guess if such "broken" disks are being used with zfs in a redundant setup,
they could probably still serve a purpose. maybe not
for production data, but probably good enough for "not so important"
applications.

it's a little bit of an academic project, for my own fun. I like to fiddle
with disks, lvm, zfs and that stuff....

roland

On 12.04.23 at 12:20, Zdenek Kabelac wrote:
> [...]


* Re: [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected
  2023-04-12 10:20     ` Zdenek Kabelac
@ 2023-04-12 11:51       ` Roberto Fastec
  2023-04-12 12:37       ` Roland
  1 sibling, 0 replies; 16+ messages in thread
From: Roberto Fastec @ 2023-04-12 11:51 UTC (permalink / raw)
  To: LVM general discussion and development; +Cc: Roger Heflin, Roland



Zdenek is right.

But if at this exact moment one or more drives have not only "reallocated" bad sectors but also "pending" ones,

the best choice is to preserve them: shut them off and allow a PRO cloning process with machines like the PC-3000.

But first, at a PRO lab, they will open the drive in a clean room to check and verify WHY the SMART values are so bad for such an amount of sectors.

If recovering the data is a must, failing to preserve the bad-SMART drives will result in losing them, and the data together with them.

Read the data recovery guide published on the website of the aforementioned company.

R.

On 12 Apr 2023, at 12:20, Zdenek Kabelac <zdenek.kabelac@gmail.com> wrote:
> [...]


* Re: [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected
  2023-04-09 18:21   ` Roland
  2023-04-09 18:53     ` Roger Heflin
  2023-04-09 23:50     ` Stuart D Gathman
@ 2023-04-12 10:20     ` Zdenek Kabelac
  2023-04-12 11:51       ` Roberto Fastec
  2023-04-12 12:37       ` Roland
  2 siblings, 2 replies; 16+ messages in thread
From: Zdenek Kabelac @ 2023-04-12 10:20 UTC (permalink / raw)
  To: LVM general discussion and development, Roland, Roger Heflin

On 09. 04. 23 at 20:21, Roland wrote:
>> Well, if the LV is being used for anything real, then I don't know of
>> anything where you could remove a block in the middle and still have a
>> working fs.   You can only reduce fs'es (the ones that you can reduce)
> 
> my plan is to scan a disk for usable sectors and map the logical volume
> around the broken sectors.
> 
> whenever more sectors get broken, I'd like to remove the broken ones to have
> a usable lv without broken sectors.
>

Really silly plan - been there years back, when drives were FAR more
expensive per GiB.

Today - just throw the bad drive in the recycle bin - it's not worth doing
this silliness.

HDD bad sectors spread - and slowly the surface gets destroyed....

So if you leave large 'head-room' around the bad disk areas - if they are
concentrated in some area of the disk - and you know the topology of your disk
drive, i.e. 1% free disk space before and after the bad area - you could
possibly use the disk for a little while more - but only to store expendable
data....


> since you need to rebuild your data anyway for that disk, you can also
> recreate the whole logical volume.
> 
> my question and my project are a little bit academic. I'd simply want to try
> out how much use you can get from some dead disks which are trash otherwise...

You could always take a 'vgcfgbackup' of the lvm2 metadata and make some crazy
transformation of it with AWK/python/perl - but we really tend to support
just the useful features - as there is already 'too much' and users are
often getting lost.
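
A minimal sketch of that metadata round-trip (the VG name and file path are
examples, not from the original mail):

    vgcfgbackup -f /tmp/badvg.txt badvg     # dump the lvm2 metadata as editable text
    vi /tmp/badvg.txt                       # rearrange segments by hand or by script
    vgcfgrestore -f /tmp/badvg.txt badvg    # write the edited metadata back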

One very simple & naive implementation could go along this path -

whenever you want to create a new arrangement for your disk with 'bad' areas,
you can always start from 'scratch' - since after all, lvm2 ONLY manipulates
metadata at the front of the disk - so if you need to create new 'holes',
just 'pvcreate -f', vgcreate, and 'lvcreate -Zn -Wn',
and then 'lvextend' with normal or 'lvextend --type error | --type zero'
segment types around the bad areas, with specific sizes.
Once you are finished and your LV precisely matches the 'previous' LV of your
old VG - you can start to use this LV again, with a new arrangement of 'broken
zeroed/errored' areas.
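
Spelled out as commands, that naive path could look roughly like this (the
device name and extent numbers are made up for illustration; per the
description above, the error/zero segments are virtual and consume no
physical extents, so the bad PEs simply stay unallocated):

    pvcreate -f /dev/sdb
    vgcreate badvg /dev/sdb

    # PEs 0-999 assumed good: start the LV there, without zeroing/wiping it
    lvcreate -Zn -Wn -n safe -l 1000 badvg /dev/sdb:0-999

    # PEs 1000-1009 assumed bad: cover that stretch with an error segment
    lvextend --type error -l +10 badvg/safe

    # continue with the next run of good extents
    lvextend -l +2000 badvg/safe /dev/sdb:1010-3009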

I have some serious doubts about the usability of this with any filesystem :)
but if you think it has some added value - feel free to use it.
If the drive you play with is 'discardable' (SSD/NVMe) then one must
take extra care that there is no 'discard/TRIM' anywhere in the process - as
that would lose all data irrecoverably....

But good advice from me - whenever  'smartctl' starts to show relocation block 
errors - it's the right moment to  'dd_rescue' any LV to your new drive...
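
For instance (a sketch; GNU ddrescue syntax shown - the older dd_rescue tool
differs - and the device/LV names are assumptions):

    # watch the reallocation counters
    smartctl -A /dev/sdb | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector'

    # at the first reallocations, copy the LV to a healthy drive;
    # the mapfile lets an interrupted copy resume where it left off
    ddrescue -f /dev/badvg/safe /dev/newvg/safe safe.map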
> 
> yes, pvmove is the other approach for that.
> 
> but will pvmove continue/finish by all means when moving extents located on a
> bad sector ?

pvmove  CANNOT be used with bad drives - it cannot deal with erroring sectors 
and basically gets stuck there trying to mirror unrecoverable disk areas...

Regards

Zdenek




* Re: [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected
  2023-04-09 18:21   ` Roland
  2023-04-09 18:53     ` Roger Heflin
@ 2023-04-09 23:50     ` Stuart D Gathman
  2023-04-12 10:20     ` Zdenek Kabelac
  2 siblings, 0 replies; 16+ messages in thread
From: Stuart D Gathman @ 2023-04-09 23:50 UTC (permalink / raw)
  To: LVM general discussion and development; +Cc: Roger Heflin

I use a utility that maps bad sectors to files, then move/rename the
files into a bad blocks folder.  (Yes, this doesn't work when critical
areas are affected.)  If you simply remove the files, then
modern disks will internally remap the sectors when they are written
again  - but the quality of remapping implementations varies.
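
On ext2/3/4, one way to do that mapping is badblocks plus debugfs; a sketch,
where the partition and block size are assumptions and the inode number must
be taken from icheck's output:

    # list bad blocks in filesystem-block units
    badblocks -b 4096 -o bad.list /dev/sdb1

    # which inodes own those blocks?
    debugfs -R "icheck $(tr '\n' ' ' < bad.list)" /dev/sdb1

    # resolve an inode number reported by icheck to a path name
    debugfs -R "ncheck 12345" /dev/sdb1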

It is more time efficient to just buy a new disk, but with wars and
rumors of wars threatening to disrupt supply chains, including tech,
it's nice to have the skills to get more use from failing hardware.

Plus, it is a challenging problem, which can be fun to work on at leisure.

On Sun, 9 Apr 2023, Roland wrote:

>>  What is your use case that you believe removing a block in the middle
>>  of an LV needs to work?
>
> my use case is creating some badblocks script with lvm which intelligently
> handles and skips broken sectors on disks which can't be used otherwise...


* Re: [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected
  2023-04-09 18:53     ` Roger Heflin
@ 2023-04-09 22:04       ` Roland
  0 siblings, 0 replies; 16+ messages in thread
From: Roland @ 2023-04-09 22:04 UTC (permalink / raw)
  To: LVM general discussion and development, Roger Heflin

thank you, very valuable!

On 09.04.23 at 20:53, Roger Heflin wrote:
> [...]


* Re: [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected
  2023-04-09 18:21   ` Roland
@ 2023-04-09 18:53     ` Roger Heflin
  2023-04-09 22:04       ` Roland
  2023-04-09 23:50     ` Stuart D Gathman
  2023-04-12 10:20     ` Zdenek Kabelac
  2 siblings, 1 reply; 16+ messages in thread
From: Roger Heflin @ 2023-04-09 18:53 UTC (permalink / raw)
  To: Roland; +Cc: LVM general discussion and development

On Sun, Apr 9, 2023 at 1:21 PM Roland <devzero@web.de> wrote:
> [...]


Create an LV per device, and when the device is replaced, lvremove
that device's LVs.  Once a sector/area is bad I would not trust the
sectors until you replace the device.  You may be able to try the
pvmove multiple times and the disk may eventually be able to rebuild
the data.

My experience with bad sectors is that once a sector reports bad, the disk
will often rewrite it at the same location and call it "good" when it is
going to report bad again almost immediately, or be a uselessly slow
sector.   Sometimes it will replace the sector on a
re-write/successful read, but that seems unreliable.

On non-zfs fs'es I have found the "bad" file, renamed it
badfile.#### and put it in a dir called badblocks.  So long as the bad
block is in the file data, you can contain the bad block by
containing the bad file.   And since most of the disk will be file
data, that should also be a management scheme not requiring a fs
rebuild.
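
A sketch of that quarantine step (the paths and names are illustrative):

    mkdir -p /mnt/disk/badblocks
    mv /mnt/disk/data/somefile /mnt/disk/badblocks/badfile.0001
    # pin it immutable so nothing deletes it and re-frees the bad blocks
    chattr +i /mnt/disk/badblocks/badfile.0001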

The re-written sector may also be "slow" and it might be wise to treat
those sectors as bad; in the "slow" sector case pvmove should
actually work.  For that you would need a badblocks that "timed" the
reads to disk and treats any sector taking longer than, say, .25
seconds as slow/bad.   At 5400 rpm, .25s/250ms translates to around 22
failed re-read tries.   If you time it you may have to do some testing
on the entire group of reads in smaller aligned chunks to figure out
which sector in the main read was bad.  If you scanned often enough
for slow sectors you might catch them before they are completely bad.
Technically the disk is supposed to do that on its own scans, but even
when I have turned the scans up to daily it does not seem to act
right.
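
A minimal timed-read scanner along those lines might look like this (a
sketch; the device, chunk size and threshold are assumptions, and O_DIRECT
keeps the page cache from hiding slow re-reads):

    #!/bin/bash
    dev=/dev/sdb
    total=$(blockdev --getsz "$dev")             # device size in 512-byte sectors
    for ((sec = 0; sec < total; sec += 64)); do  # 64 sectors = one 32k chunk
        t0=$(date +%s%N)
        if ! dd if="$dev" of=/dev/null bs=512 skip="$sec" count=64 \
                iflag=direct 2>/dev/null; then
            echo "bad  $sec" >> suspect.list     # hard read error
        elif (( $(date +%s%N) - t0 > 250000000 )); then
            echo "slow $sec" >> suspect.list     # >250ms: retried internally
        fi
    done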

And I have usually found that the bad "units" are 8 units of 8
512-byte sectors for a total of around 32k (aligned on the disk).


* Re: [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected
  2023-04-09 17:32 ` Roger Heflin
@ 2023-04-09 18:21   ` Roland
  2023-04-09 18:53     ` Roger Heflin
                       ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Roland @ 2023-04-09 18:21 UTC (permalink / raw)
  To: LVM general discussion and development, Roger Heflin

> Well, if the LV is being used for anything real, then I don't know of
> anything where you could remove a block in the middle and still have a
> working fs.   You can only reduce fs'es (the ones that you can reduce)
> by reducing off of the end and making it smaller.

yes, that's clear to me.

> It makes zero sense to be able to remove a block in the middle of a LV
> used by just about everything that uses LV's as nothing supports being
> able to remove a block in the middle.

yes, that criticism is totally valid. from a fs point of view you completely
corrupt the volume, that's clear to me.

> What is your use case that you believe removing a block in the middle
> of an LV needs to work?

my use case is creating some badblocks script with lvm which intelligently
handles and skips broken sectors on disks which can't be used otherwise...

my plan is to scan a disk for usable sectors and map the logical volume
around the broken sectors.

whenever more sectors get broken, I'd like to remove the broken ones to have
a usable lv without broken sectors.

since you need to rebuild your data anyway for that disk, you can also
recreate the whole logical volume.

my question and my project are a little bit academic. I'd simply want to try
out how much use you can get from some dead disks which are trash otherwise...
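
The scan step itself can be done with badblocks from e2fsprogs; a sketch,
where the device name, block size and a 4MiB extent size are assumptions:

    # read-only scan; bad 4k-block numbers are written to a file
    badblocks -sv -b 4096 -o sdb.bad /dev/sdb

    # with 4MiB extents, physical extent number = 4k-block number / 1024
    awk '{ print int($1 / 1024) }' sdb.bad | sort -nu > sdb.bad-extents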


the manpage is telling this:


        Resize an LV by specified PV extents.

        lvresize LV PV ...
            [ -r|--resizefs ]
            [ COMMON_OPTIONS ]



so, that sounds like I can resize in any direction by specifying extents.


> Now if you really need to remove a specific block in the middle of the
> LV then you are likely going to need to use pvmove with specific
> blocks to replace those blocks with something else.

yes, pvmove is the other approach for that.

but will pvmove still run to completion when moving extents that sit on a
bad sector ?

the data may be corrupted anyway, so i thought it's better to skip it.

what i'm really after is some "remap a physical extent to a healthy/reserved
section and let zfs self-healing do the rest".  just like "dismiss the
problematic extents and replace them with healthy ones".

i'd prefer remapping over removing a PE, as removing invalidates
the whole LV....
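
a remap would then look roughly like this (untested sketch - the extent
numbers are invented, and since pvmove has to read the old extent first,
this can only help while the sector is merely slow, not fully dead):

# move whatever lives on PE 10 of /dev/sdb onto a known-good area
# of the same pv; the lv's logical layout stays unchanged
pvmove --alloc anywhere /dev/sdb:10 /dev/sdb:500-600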

roland






Am 09.04.23 um 19:32 schrieb Roger Heflin:
> On Sun, Apr 9, 2023 at 10:18 AM Roland <devzero@web.de> wrote:
>> hi,
>>
>> we can extend a logical volume by arbitrary pv extents like this :
>>
>>
>> root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:5
>>     Size of logical volume mytestVG/blocks_allocated changed from 1.00
>> MiB (1 extents) to 2.00 MiB (2 extents).
>>     Logical volume mytestVG/blocks_allocated successfully resized.
>>
>> root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:10
>>     Size of logical volume mytestVG/blocks_allocated changed from 2.00
>> MiB (2 extents) to 3.00 MiB (3 extents).
>>     Logical volume mytestVG/blocks_allocated successfully resized.
>>
>> root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:15
>>     Size of logical volume mytestVG/blocks_allocated changed from 3.00
>> MiB (3 extents) to 4.00 MiB (4 extents).
>>     Logical volume mytestVG/blocks_allocated successfully resized.
>>
>> root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:20
>>     Size of logical volume mytestVG/blocks_allocated changed from 4.00
>> MiB (4 extents) to 5.00 MiB (5 extents).
>>     Logical volume mytestVG/blocks_allocated successfully resized.
>>
>> root@s740:~# pvs --segments
>> -olv_name,seg_start_pe,seg_size_pe,pvseg_start  -O pvseg_start
>>     LV               Start SSize  Start
>>     blocks_allocated     0      1     0
>>                          0      4     1
>>     blocks_allocated     1      1     5
>>                          0      4     6
>>     blocks_allocated     2      1    10
>>                          0      4    11
>>     blocks_allocated     3      1    15
>>                          0      4    16
>>     blocks_allocated     4      1    20
>>                          0 476917    21
>>
>>
>> how can i do this in reverse ?
>>
>> when i specify the physical extent to be added, it works - but when i
>> specify the physical extent to be removed,
>> the last one is being removed, not the specified one.
>>
>> see here for example - i wanted to remove extent number 10 the same
>> way i added it, but instead extent number 20
>> is being removed
>>
>> root@s740:~# lvresize mytestVG/blocks_allocated -l -1 /dev/sdb:10
>>     Ignoring PVs on command line when reducing.
>>     WARNING: Reducing active logical volume to 4.00 MiB.
>>     THIS MAY DESTROY YOUR DATA (filesystem etc.)
>> Do you really want to reduce mytestVG/blocks_allocated? [y/n]: y
>>     Size of logical volume mytestVG/blocks_allocated changed from 5.00
>> MiB (5 extents) to 4.00 MiB (4 extents).
>>     Logical volume mytestVG/blocks_allocated successfully resized.
>>
>> root@s740:~# pvs --segments
>> -olv_name,seg_start_pe,seg_size_pe,pvseg_start  -O pvseg_start
>>     LV               Start SSize  Start
>>     blocks_allocated     0      1     0
>>                          0      4     1
>>     blocks_allocated     1      1     5
>>                          0      4     6
>>     blocks_allocated     2      1    10
>>                          0      4    11
>>     blocks_allocated     3      1    15
>>                          0 476922    16
>>
>>
>> how can i remove extent number 10 ?
>>
>> is this a bug ?
>>
> Well, if the LV is being used for anything real, then I don't know of
> anything where you could remove a block in the middle and still have a
> working fs.   You can only reduce fs'es (the ones that you can reduce)
> by reducing off of the end and making it smaller.
>
> It makes zero sense to be able to remove a block in the middle of an
> LV: just about nothing that uses LVs supports a block being removed
> from the middle.
>
> What is the use case in which you believe removing a block from the
> middle of an LV needs to work?
>
> Now if you really need to remove a specific block in the middle of the
> LV then you are likely going to need to use pvmove with specific
> blocks to replace those blocks with something else.
>
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://listman.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected
  2023-04-09 15:05 Roland
@ 2023-04-09 17:32 ` Roger Heflin
  2023-04-09 18:21   ` Roland
  0 siblings, 1 reply; 16+ messages in thread
From: Roger Heflin @ 2023-04-09 17:32 UTC (permalink / raw)
  To: LVM general discussion and development

On Sun, Apr 9, 2023 at 10:18 AM Roland <devzero@web.de> wrote:
>
> hi,
>
> we can extend a logical volume by arbitrary pv extents like this :
>
>
> root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:5
>    Size of logical volume mytestVG/blocks_allocated changed from 1.00
> MiB (1 extents) to 2.00 MiB (2 extents).
>    Logical volume mytestVG/blocks_allocated successfully resized.
>
> root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:10
>    Size of logical volume mytestVG/blocks_allocated changed from 2.00
> MiB (2 extents) to 3.00 MiB (3 extents).
>    Logical volume mytestVG/blocks_allocated successfully resized.
>
> root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:15
>    Size of logical volume mytestVG/blocks_allocated changed from 3.00
> MiB (3 extents) to 4.00 MiB (4 extents).
>    Logical volume mytestVG/blocks_allocated successfully resized.
>
> root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:20
>    Size of logical volume mytestVG/blocks_allocated changed from 4.00
> MiB (4 extents) to 5.00 MiB (5 extents).
>    Logical volume mytestVG/blocks_allocated successfully resized.
>
> root@s740:~# pvs --segments
> -olv_name,seg_start_pe,seg_size_pe,pvseg_start  -O pvseg_start
>    LV               Start SSize  Start
>    blocks_allocated     0      1     0
>                         0      4     1
>    blocks_allocated     1      1     5
>                         0      4     6
>    blocks_allocated     2      1    10
>                         0      4    11
>    blocks_allocated     3      1    15
>                         0      4    16
>    blocks_allocated     4      1    20
>                         0 476917    21
>
>
> how can i do this in reverse ?
>
> when i specify the physical extent to be added, it works - but when i
> specify the physical extent to be removed,
> the last one is being removed, not the specified one.
>
> see here for example - i wanted to remove extent number 10 the same
> way i added it, but instead extent number 20
> is being removed
>
> root@s740:~# lvresize mytestVG/blocks_allocated -l -1 /dev/sdb:10
>    Ignoring PVs on command line when reducing.
>    WARNING: Reducing active logical volume to 4.00 MiB.
>    THIS MAY DESTROY YOUR DATA (filesystem etc.)
> Do you really want to reduce mytestVG/blocks_allocated? [y/n]: y
>    Size of logical volume mytestVG/blocks_allocated changed from 5.00
> MiB (5 extents) to 4.00 MiB (4 extents).
>    Logical volume mytestVG/blocks_allocated successfully resized.
>
> root@s740:~# pvs --segments
> -olv_name,seg_start_pe,seg_size_pe,pvseg_start  -O pvseg_start
>    LV               Start SSize  Start
>    blocks_allocated     0      1     0
>                         0      4     1
>    blocks_allocated     1      1     5
>                         0      4     6
>    blocks_allocated     2      1    10
>                         0      4    11
>    blocks_allocated     3      1    15
>                         0 476922    16
>
>
> how can i remove extent number 10 ?
>
> is this a bug ?
>

Well, if the LV is being used for anything real, then I don't know of
anything where you could remove a block in the middle and still have a
working fs.   You can only reduce fs'es (the ones that you can reduce)
by reducing off of the end and making it smaller.

It makes zero sense to be able to remove a block in the middle of an
LV: just about nothing that uses LVs supports a block being removed
from the middle.

What is the use case in which you believe removing a block from the
middle of an LV needs to work?

Now if you really need to remove a specific block in the middle of the
LV then you are likely going to need to use pvmove with specific
blocks to replace those blocks with something else.

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected
@ 2023-04-09 15:05 Roland
  2023-04-09 17:32 ` Roger Heflin
  0 siblings, 1 reply; 16+ messages in thread
From: Roland @ 2023-04-09 15:05 UTC (permalink / raw)
  To: LVM general discussion and development

hi,

we can extend a logical volume by arbitrary pv extents like this :


root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:5
   Size of logical volume mytestVG/blocks_allocated changed from 1.00 
MiB (1 extents) to 2.00 MiB (2 extents).
   Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:10
   Size of logical volume mytestVG/blocks_allocated changed from 2.00 
MiB (2 extents) to 3.00 MiB (3 extents).
   Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:15
   Size of logical volume mytestVG/blocks_allocated changed from 3.00 
MiB (3 extents) to 4.00 MiB (4 extents).
   Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:20
   Size of logical volume mytestVG/blocks_allocated changed from 4.00 
MiB (4 extents) to 5.00 MiB (5 extents).
   Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# pvs --segments 
-olv_name,seg_start_pe,seg_size_pe,pvseg_start  -O pvseg_start
   LV               Start SSize  Start
   blocks_allocated     0      1     0
                        0      4     1
   blocks_allocated     1      1     5
                        0      4     6
   blocks_allocated     2      1    10
                        0      4    11
   blocks_allocated     3      1    15
                        0      4    16
   blocks_allocated     4      1    20
                        0 476917    21


how can i do this in reverse ?

when i specify the physical extent to be added, it works - but when i
specify the physical extent to be removed,
the last one is being removed, not the specified one.

see here for example - i wanted to remove extent number 10 the same
way i added it, but instead extent number 20
is being removed

root@s740:~# lvresize mytestVG/blocks_allocated -l -1 /dev/sdb:10
   Ignoring PVs on command line when reducing.
   WARNING: Reducing active logical volume to 4.00 MiB.
   THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce mytestVG/blocks_allocated? [y/n]: y
   Size of logical volume mytestVG/blocks_allocated changed from 5.00 
MiB (5 extents) to 4.00 MiB (4 extents).
   Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# pvs --segments 
-olv_name,seg_start_pe,seg_size_pe,pvseg_start  -O pvseg_start
   LV               Start SSize  Start
   blocks_allocated     0      1     0
                        0      4     1
   blocks_allocated     1      1     5
                        0      4     6
   blocks_allocated     2      1    10
                        0      4    11
   blocks_allocated     3      1    15
                        0 476922    16


how can i remove extent number 10 ?

is this a bug ?

regards
roland

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2023-04-13  6:55 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <1043528017.520337.1681071486811.ref@mail.yahoo.com>
2023-04-09 20:18 ` [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected matthew patton
2023-04-11  7:14   ` Roland
2023-04-12  9:24     ` Roberto Fastec
2023-04-12  9:28     ` Roberto Fastec
2023-04-11 17:05   ` Roger Heflin
2023-04-09 15:05 Roland
2023-04-09 17:32 ` Roger Heflin
2023-04-09 18:21   ` Roland
2023-04-09 18:53     ` Roger Heflin
2023-04-09 22:04       ` Roland
2023-04-09 23:50     ` Stuart D Gathman
2023-04-12 10:20     ` Zdenek Kabelac
2023-04-12 11:51       ` Roberto Fastec
2023-04-12 12:37       ` Roland
2023-04-12 13:16         ` Zdenek Kabelac
2023-04-12 13:53         ` Roberto Fastec

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).