* lvmcache lv destroy with no flush
@ 2019-08-01  4:50 Lakshmi Narasimhan Sundararajan
  2019-08-02 12:44 ` Zdenek Kabelac
  0 siblings, 1 reply; 8+ messages in thread
From: Lakshmi Narasimhan Sundararajan @ 2019-08-01  4:50 UTC (permalink / raw)
  To: lvm-devel

Hi Team,
A very good day to you all.

Let's say there exists an LVM cache LV in writeback mode with lots of dirty blocks.
How can I destroy this LV without waiting for the data sync to finish? This is a teardown operation and there is no need for the data sync to complete.

Operations like lvremove, vgremove, etc. all wait for the cache sync to complete before tearing down the LV/VG.

Please let me know if there is a way to accomplish this.
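For context, a minimal sketch of the setup being described (VG, LV and device names are placeholders, not from this thread; the syntax follows lvmcache(7)):

  # slow origin device + fast cache device
  vgcreate vg0 /dev/slow /dev/fast
  lvcreate -n origin -L 100G vg0 /dev/slow
  # writeback cache-pool on the fast device, attached to the origin LV
  lvcreate -n cpool --type cache-pool --cachemode writeback -L 10G vg0 /dev/fast
  lvconvert --type cache --cachepool vg0/cpool vg0/origin
  # ... write to vg0/origin so dirty cache blocks accumulate ...
  lvremove vg0/origin   # blocks until the dirty cache blocks are flushed back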

Best regards
LN
Sent from Mail for Windows 10


^ permalink raw reply	[flat|nested] 8+ messages in thread

* lvmcache lv destroy with no flush
  2019-08-01  4:50 lvmcache lv destroy with no flush Lakshmi Narasimhan Sundararajan
@ 2019-08-02 12:44 ` Zdenek Kabelac
  2019-08-02 13:45   ` Lakshmi Narasimhan Sundararajan
  0 siblings, 1 reply; 8+ messages in thread
From: Zdenek Kabelac @ 2019-08-02 12:44 UTC (permalink / raw)
  To: lvm-devel

On 01. 08. 19 at 6:50, Lakshmi Narasimhan Sundararajan wrote:
> Hi Team,
> 
> A very good day to you all.
> 
> Let's say there exists an LVM cache LV in writeback mode with lots of dirty blocks.
> 
> How can I destroy this LV without waiting for the data sync to finish? This is a
> teardown operation and there is no need for the data sync to complete.
> 
> Operations like lvremove, vgremove, etc. all wait for the cache sync to complete
> before tearing down the LV/VG.
> 
> Please let me know if there is a way to accomplish this.


Currently this is not supported on the lvm2 side - we usually want to flush the cache
first - since we try to keep the logic that lvm2 operations should be reversible one step back.

So we tend to keep things flushed first.

On the other hand - we do have a 'long wanted' feature - some smart
and fast 'accelerated' removal.

i.e. when removing all thins + the thin-pool - skip removing the individual thins,
and similar would apply to cache.

These operations would be irreversible - but certainly much faster...

On the other hand there is usually a way quicker workaround -

If you know you are going to destroy the whole VG - you can simply make sure
there is no running LV - and just recreate the PV/VG from scratch - certainly
faster than removing e.g. thousands of LVs individually one-by-one, which
is what will happen with the lvremove/vgremove command ATM.


2nd thought - when the cache-pool is broken/missing - you can always remove
any LV with 'lvremove -ff'.
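A sketch of what that second path would look like (hypothetical VG/LV names; it only applies once the cache-pool devices are genuinely absent - with the pool still healthy, lvm2 will keep flushing as described above):

  vgchange -an vg0          # make sure nothing in the VG is active
  # cache-pool PV has failed or been physically removed at this point
  lvremove -ff vg0/origin   # forced removal of the cached LV despite the missing pool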

Regards

Zdenek



^ permalink raw reply	[flat|nested] 8+ messages in thread

* lvmcache lv destroy with no flush
  2019-08-02 12:44 ` Zdenek Kabelac
@ 2019-08-02 13:45   ` Lakshmi Narasimhan Sundararajan
  2019-08-02 13:50     ` Zdenek Kabelac
  0 siblings, 1 reply; 8+ messages in thread
From: Lakshmi Narasimhan Sundararajan @ 2019-08-02 13:45 UTC (permalink / raw)
  To: lvm-devel

Hi Zdenek,
Thank you for your email.

> If you know you are going to destroy the whole VG - you can simply make sure
> there is no running LV - and just recreate the PV/VG from scratch - certainly
> faster than removing e.g. thousands of LVs individually one-by-one, which
> is what will happen with the lvremove/vgremove command ATM.

I tried to follow your suggestion for accelerated removal - did I interpret you correctly? I still hit the stuck cache sync issue. Please clarify what needs to change below - I see a cache flush still happens while removing the VG.

myhome$ sudo vgcreate pxtest /dev/sdc /dev/nvme0n1
  Volume group "pxtest" successfully created
myhome$
myhome$ sudo lvcreate -n cache --type cache-pool -l 100%pvs pxtest /dev/nvme0n1
  Logical volume "cache" created.
myhome$ sudo lvcreate -n pool --type cache --cachepool pxtest/cache -l 100%pvs pxtest /dev/sdc
  Logical volume "pool" created.
Myhome$

myhome$ sudo lvs pxtest
  LV   VG     Attr       LSize  Pool    Origin       Data%  Meta%  Move Log Cpy%Sync Convert
  pool pxtest Cwi---C--- 10.00g [cache] [pool_corig]
myhome$

myhome$ sudo vgchange -an pxtest
  0 logical volume(s) in volume group "pxtest" now active
myhome$ sudo vgremove -ff pxtest
  4096 blocks must still be flushed.
  4096 blocks must still be flushed.
  4096 blocks must still be flushed.
  4096 blocks must still be flushed.
^C
Myhome$

myhome$ sudo dmsetup status pxtest-pool
0 20963328 cache 8 40/2048 2048 4096/10220 28 58 0 0 0 0 4096 1 writethrough 2 migration_threshold 2048 cleaner 0 rw -
myhome$
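For reference, reading that status line against the dm-cache status format documented in the kernel tree (Documentation/device-mapper/cache.txt; the field mapping below is an interpretation, not part of the original message):

  # 8            metadata block size (sectors)
  # 40/2048      used/total metadata blocks
  # 2048         cache block size (sectors, i.e. 1 MiB)
  # 4096/10220   used/total cache blocks
  # 28 58        read hits / read misses
  # 0 0          write hits / write misses
  # 0 0          demotions / promotions
  # 4096         dirty blocks  <- the count vgremove keeps trying to flush
  # 1 writethrough               feature count + features
  # 2 migration_threshold 2048   core argument count + arguments
  # cleaner 0 rw -               policy, policy arg count, metadata mode, needs_check flag

Note that the features field reports writethrough: the cache-pool above was created with the default cache mode, so reproducing the writeback scenario from the original question would presumably need --cachemode writeback on the lvcreate --type cache-pool step.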

myhome$ uname -r
4.4.0-131-generic
myhome$ sudo lvm version
  LVM version:     2.02.133(2) (2015-10-30)
  Library version: 1.02.110 (2015-10-30)
  Driver version:  4.34.0
myhome$

Regards
LN
Sent from Mail for Windows 10


^ permalink raw reply	[flat|nested] 8+ messages in thread

* lvmcache lv destroy with no flush
  2019-08-02 13:45   ` Lakshmi Narasimhan Sundararajan
@ 2019-08-02 13:50     ` Zdenek Kabelac
  2019-08-02 14:24       ` Lakshmi Narasimhan Sundararajan
  0 siblings, 1 reply; 8+ messages in thread
From: Zdenek Kabelac @ 2019-08-02 13:50 UTC (permalink / raw)
  To: lvm-devel

On 02. 08. 19 at 15:45, Lakshmi Narasimhan Sundararajan wrote:
> Hi Zdenek,
> 
> Thank you for your email.
> 
>   * If you know you are going to destroy the whole VG - you can simply make sure
>   * there is no running LV - and just recreate the PV/VG from scratch - certainly
>   * faster than removing e.g. thousands of LVs individually one-by-one, which
>   * is what will happen with the lvremove/vgremove command ATM.
> 
> I tried to follow your suggestion for accelerated removal - did I interpret you
> correctly? I still hit the stuck cache sync issue. Please clarify what needs to
> change below - I see a cache flush still happens while removing the VG.
> 
> myhome$ sudo vgcreate pxtest /dev/sdc /dev/nvme0n1
> 
>   Volume group "pxtest" successfully created
> 
> myhome$
> 
> myhome$ sudo lvcreate -n cache --type cache-pool -l 100%pvs pxtest /dev/nvme0n1
> 
>  ? Logical volume "cache" created.
> 
> myhome$ sudo lvcreate -n pool --type cache --cachepool pxtest/cache -l 100%pvs 
> pxtest /dev/sdc
> 
>  ? Logical volume "pool" created.
> 
> Myhome$
> 
> myhome$ sudo lvs pxtest
> 
>   LV   VG     Attr       LSize  Pool    Origin       Data%  Meta%  Move Log Cpy%Sync Convert
>   pool pxtest Cwi---C--- 10.00g [cache] [pool_corig]
> 
> myhome$
> 
> myhome$ sudo vgchange -an pxtest
> 
>   0 logical volume(s) in volume group "pxtest" now active
> 
> myhome$ sudo vgremove -ff pxtest
> 
>   4096 blocks must still be flushed.
>   4096 blocks must still be flushed.
>   4096 blocks must still be flushed.
>   4096 blocks must still be flushed.
> 
> ^C
> 

1.) remove devices from DM table
dmsetup remove_all
(or just some selected device - whatever fits...)

2.) remove disk signatures of the VG
wipefs -a /dev/sdc
wipefs -a /dev/nvme0n1
(or pvremove -ff /dev/sdc /dev/nvme0n1)

3.) recreate empty VG from scratch
vgcreate pxtest /dev/sdc /dev/nvme0n1


Although I'm not quite sure this is what you really want :) - it's more or
less an idea for quicker testing - not something for preserving data.
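Pulled together as one sequence (device names as in the transcript; note that dmsetup remove_all tears down every device-mapper device on the host, not just this VG's, so on a machine with other DM devices the selective removal shown later in the thread is safer):

  dmsetup remove_all                      # 1.) drop the DM mappings so the PVs are no longer held open
  wipefs -a /dev/sdc /dev/nvme0n1         # 2.) wipe the LVM labels/signatures on the PVs
  vgcreate pxtest /dev/sdc /dev/nvme0n1   # 3.) recreate an empty VG from scratch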


Regards

Zdenek



^ permalink raw reply	[flat|nested] 8+ messages in thread

* lvmcache lv destroy with no flush
  2019-08-02 13:50     ` Zdenek Kabelac
@ 2019-08-02 14:24       ` Lakshmi Narasimhan Sundararajan
  2019-08-05  8:23         ` Zdenek Kabelac
  0 siblings, 1 reply; 8+ messages in thread
From: Lakshmi Narasimhan Sundararajan @ 2019-08-02 14:24 UTC (permalink / raw)
  To: lvm-devel


> 1.) remove devices from DM table
> dmsetup remove_all
> (or just some selected device - whatever fits...)
> 
> 2.) remove disk signatures of the VG
> wipefs -a /dev/sdc
> wipefs -a /dev/nvme0n1
> (or pvremove -ff /dev/sdc /dev/nvme0n1)
> 
> 3.) recreate empty VG from scratch
> vgcreate pxtest /dev/sdc /dev/nvme0n1


myhome$ sudo dmsetup status --target cache
pxtest-pool: 0 20963328 cache 8 40/2048 2048 4096/10220 28 58 0 0 0 0 4096 1 writethrough 2 migration_threshold 2048 cleaner 0 rw -
myhome$ sudo dmsetup remove pxtest-pool
myhome$
myhome$ sudo vgchange -an pxtest
  0 logical volume(s) in volume group "pxtest" now active
myhome$ sudo pvremove -ff /dev/sdc /dev/nvme0n1
Really WIPE LABELS from physical volume "/dev/sdc" of volume group "pxtest" [y/n]? y
  WARNING: Wiping physical volume label from /dev/sdc of volume group "pxtest"
  Can't open /dev/sdc exclusively - not removing. Mounted filesystem?
Really WIPE LABELS from physical volume "/dev/nvme0n1" of volume group "pxtest" [y/n]? y
  WARNING: Wiping physical volume label from /dev/nvme0n1 of volume group "pxtest"
  Can't open /dev/nvme0n1 exclusively - not removing. Mounted filesystem?
myhome$
myhome$ sudo wipefs -a /dev/sdc /dev/nvme0n1
wipefs: error: /dev/sdc: probing initialization failed: Device or resource busy
myhome$


Doesn't seem to work, there are still exclusive references on the drive held by lvm!
Any other way out?

LN

Sent from Mail for Windows 10


^ permalink raw reply	[flat|nested] 8+ messages in thread

* lvmcache lv destroy with no flush
  2019-08-02 14:24       ` Lakshmi Narasimhan Sundararajan
@ 2019-08-05  8:23         ` Zdenek Kabelac
  2019-08-05  8:45           ` Lakshmi Narasimhan Sundararajan
  0 siblings, 1 reply; 8+ messages in thread
From: Zdenek Kabelac @ 2019-08-05  8:23 UTC (permalink / raw)
  To: lvm-devel

On 02. 08. 19 at 16:24, Lakshmi Narasimhan Sundararajan wrote:
>   * 1.) remove devices from DM table
>   * dmsetup remove_all
>   * (or just some selected device - whatever fits...)
>   *
>   * 2.) remove disk signatures of the VG
>   * wipefs -a /dev/sdc
>   * wipefs -a /dev/nvme0n1
>   * (or pvremove -ff /dev/sdc /dev/nvme0n1)
>   *
>   * 3.) recreate empty VG from scratch
>   * vgcreate pxtest /dev/sdc /dev/nvme0n1
> 
> myhome$ sudo dmsetup status --target cache
> 
> pxtest-pool: 0 20963328 cache 8 40/2048 2048 4096/10220 28 58 0 0 0 0 4096 1 
> writethrough 2 migration_threshold 2048 cleaner 0 rw -
> 
> myhome$ sudo dmsetup remove pxtest-pool


Unfortunately you must remove ALL related devices.


>   0 logical volume(s) in volume group "pxtest" now active
> 
> myhome$ sudo pvremove -ff /dev/sdc /dev/nvme0n1
> 
> Really WIPE LABELS from physical volume "/dev/sdc" of volume group "pxtest" 
> [y/n]? y
> 
>   WARNING: Wiping physical volume label from /dev/sdc of volume group "pxtest"
> 
>   Can't open /dev/sdc exclusively - not removing. Mounted filesystem?

As you can see - you still have some device holding sdc open.

As said originally - all users of your SDC & NVME devices must be removed - so
the devices are 'free'.

You can't be killing the VG while DM devices are still running in memory.

> 
> Really WIPE LABELS from physical volume "/dev/nvme0n1" of volume group 
> "pxtest" [y/n]? y
> 
>   WARNING: Wiping physical volume label from /dev/nvme0n1 of volume group "pxtest"
> 
>   Can't open /dev/nvme0n1 exclusively - not removing. Mounted filesystem?
> 
> myhome$
> 
> myhome$ sudo wipefs -a /dev/sdc /dev/nvme0n1
> 
> wipefs: error: /dev/sdc: probing initialization failed: Device or resource busy
> 
> myhome$
> 
> Doesn't seem to work, there are still exclusive references on the drive held
> by lvm!


Note - lvm2 never holds ANY reference - lvm2 is a pure tool for manipulating
DM devices - aka you could create those DM devices yourself without any lvm2 in
place - it's just way more work.

So back to the question of who keeps the devices open - you can easily get this
info from commands like these:


dmsetup table

dmsetup ls --tree

lsblk
...


Before you start any device wiping for VG metadata - there must be no running
device holding those devices open - and as you are basically bypassing lvm2
when you run 'drastic' commands like 'dmsetup' or 'wipefs' yourself - you
can't blame lvm2 for not being cooperative with such 'violent' usage :)
As was originally said - the advice is a serious HACK in the lvm2 workflow...
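A sketch of that selective cleanup, assuming device-mapper names every LV (and hidden sub-LV) of the VG with a pxtest- prefix, which is lvm2's usual naming scheme:

  sudo dmsetup ls --tree             # pxtest-pool stacked on hidden sub-LVs (likely pxtest-pool_corig, pxtest-cache_cdata, pxtest-cache_cmeta)
  sudo lsblk /dev/sdc /dev/nvme0n1   # shows which mappings still hold the PVs open
  # remove every pxtest-* mapping; rerun if a lower device is still held by an upper one
  sudo dmsetup ls | awk '/^pxtest-/ {print $1}' | xargs -r -n1 sudo dmsetup remove
  sudo wipefs -a /dev/sdc /dev/nvme0n1
  sudo vgcreate pxtest /dev/sdc /dev/nvme0n1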

Regards

Zdenek



^ permalink raw reply	[flat|nested] 8+ messages in thread

* lvmcache lv destroy with no flush
  2019-08-05  8:23         ` Zdenek Kabelac
@ 2019-08-05  8:45           ` Lakshmi Narasimhan Sundararajan
  2019-08-05 10:43             ` Zdenek Kabelac
  0 siblings, 1 reply; 8+ messages in thread
From: Lakshmi Narasimhan Sundararajan @ 2019-08-05  8:45 UTC (permalink / raw)
  To: lvm-devel

Thanks Zdenek, for your follow up email clarifying my questions.
I will have to check further and shall report back.

But I also wonder: on a writeback cache, even if I submit blkdiscard to the whole device, why do the dirty blocks not fall to zero?
Does blkdiscard on an lvmcache device not work?

> myhome$ sudo dmsetup status --target cache
> pxtest-pool: 0 20963328 cache 8 40/2048 2048 4096/10220 28 58 0 0 0 0 4096 1 writethrough 2 migration_threshold 2048 cleaner 0 rw -
> myhome$
> myhome$ sudo blockdev --getsize64 /dev/pxtest/pool
<devsize>
> myhome$ sudo blkdiscard -o 0 -l ROUND_DISCARD_ALIGN(devsize) /dev/pxtest/pool

Even after the above discard, the lvmcache device in writeback mode still holds dirty blocks and has to be flushed. Can you please help explain the behavior here?
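For what it's worth, blkdiscard with no -o/-l arguments discards the whole block device, so the manual size arithmetic above should not be needed (device and DM names are from the transcript; whether dm-cache on this kernel actually clears the dirty bits is exactly the open question):

  sudo blkdiscard /dev/pxtest/pool   # discard the entire cached LV
  sudo dmsetup status pxtest-pool    # re-check the dirty-block count afterwards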

Regards
LN 
Sent from Mail for Windows 10


^ permalink raw reply	[flat|nested] 8+ messages in thread

* lvmcache lv destroy with no flush
  2019-08-05  8:45           ` Lakshmi Narasimhan Sundararajan
@ 2019-08-05 10:43             ` Zdenek Kabelac
  0 siblings, 0 replies; 8+ messages in thread
From: Zdenek Kabelac @ 2019-08-05 10:43 UTC (permalink / raw)
  To: lvm-devel

On 05. 08. 19 at 10:45, Lakshmi Narasimhan Sundararajan wrote:
> Thanks Zdenek, for your follow up email clarifying my questions.
> 
> I will have to check further and shall report back.
> 
> But I also wonder: on a writeback cache, even if I submit blkdiscard to the
> whole device, why do the dirty blocks not fall to zero?
> 
> Does blkdiscard on an lvmcache device not work?
> 
>  > myhome$ sudo dmsetup status --target cache
> 
>  > pxtest-pool: 0 20963328 cache 8 40/2048 2048 4096/10220 28 58 0 0 0 0 4096 
> 1 writethrough 2 migration_threshold 2048 cleaner 0 rw -
> 
>  > myhome$
> 
>  > myhome$ sudo blockdev --getsize64 /dev/pxtest/pool
> 
> <devsize>
> 
>  > myhome$ sudo blkdiscard -o 0 -l ROUND_DISCARD_ALIGN(devsize) /dev/pxtest/pool
> 
> Even after the above discard, the lvmcache device in writeback mode still holds
> dirty blocks and has to be flushed. Can you please help explain the behavior
> here?
> 


Are you using the latest kernels?

The original release of the cache target was (if I remember correctly) not
supporting the discard operation on such devices.

On the latest kernels all should work...
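A quick way to check whether the running kernel exposes discard support on the activated cache LV (device path from the thread; a reported discard granularity of 0 means discards will not be passed through):

  sudo lsblk -D /dev/pxtest/pool     # DISC-GRAN / DISC-MAX columns
  cat /sys/block/$(basename $(readlink -f /dev/pxtest/pool))/queue/discard_granularity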


Regards

Zdenek



^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2019-08-05 10:43 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-08-01  4:50 lvmcache lv destroy with no flush Lakshmi Narasimhan Sundararajan
2019-08-02 12:44 ` Zdenek Kabelac
2019-08-02 13:45   ` Lakshmi Narasimhan Sundararajan
2019-08-02 13:50     ` Zdenek Kabelac
2019-08-02 14:24       ` Lakshmi Narasimhan Sundararajan
2019-08-05  8:23         ` Zdenek Kabelac
2019-08-05  8:45           ` Lakshmi Narasimhan Sundararajan
2019-08-05 10:43             ` Zdenek Kabelac
