* Batch/queue LVM operations
@ 2017-07-28 19:22 Eric Wheeler
  2017-07-28 19:43 ` Zdenek Kabelac
  2017-07-31  9:01 ` Germano Percossi
  0 siblings, 2 replies; 15+ messages in thread
From: Eric Wheeler @ 2017-07-28 19:22 UTC (permalink / raw)
  To: lvm-devel

Hello,

Is there an option to batch LVM operations?  

For example, I would like to delete 100 thin snapshots without updating 
the vgmeta 100 times.

Same thing with lvcreate, lvresize, etc.  

It would be neat if `lvm`'s readline prompt could take multiple commands 
via stdin and commit them once, even at the expense of longer lock times.

Is there already support for this in some form?


--
Eric Wheeler




* Batch/queue LVM operations
  2017-07-28 19:22 Batch/queue LVM operations Eric Wheeler
@ 2017-07-28 19:43 ` Zdenek Kabelac
  2017-07-28 21:55   ` Eric Wheeler
                     ` (2 more replies)
  2017-07-31  9:01 ` Germano Percossi
  1 sibling, 3 replies; 15+ messages in thread
From: Zdenek Kabelac @ 2017-07-28 19:43 UTC (permalink / raw)
  To: lvm-devel

On 28.7.2017 at 21:22, Eric Wheeler wrote:
> Hello,
> 
> Is there an option to batch LVM operations?
> 
> For example, I would like to delete 100 thin snapshots without updating
> the vgmeta 100 times.

You could possibly use the --select feature to handle all removals with
just one lvremove command.
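
For example, assuming the snapshots share a name prefix (the VG name 'data'
and the regex below are placeholders for this sketch; echo keeps it a dry
run):

```shell
# Build a single-command removal of all matching thin snapshots.
# echo keeps this a dry run; execute for real with: eval "$cmd"  (needs root)
cmd='lvremove -y --select lv_name=~"^2017-07-28_.*_manual_foobar" data'
echo "$cmd"
```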

> 
> Same thing with lvcreate, lvresize, etc.
> 
> It would be neat if `lvm`'s readline prompt could take multiple commands
> via stdin and commit them once, even at the expense of longer lock times.
> 
> Is there already support for this in some form?

At this moment we do not have an optimization to do more than one removal at a
time - however, if LVs are 'inactive' before lvremove, it should be relatively
quick.

However, 'lvm2' was not designed for ultra-fast manipulation - i.e. you are
typically not creating/removing LVs so quickly.

Removal of individual thin LVs also takes time, since kernel metadata needs to
be updated. So I'd not expect millisecond timings.


Regards

Zdenek




* Batch/queue LVM operations
  2017-07-28 19:43 ` Zdenek Kabelac
@ 2017-07-28 21:55   ` Eric Wheeler
  2017-07-31 12:16   ` Bryn M. Reeves
  2017-08-01 17:24   ` Eric Wheeler
  2 siblings, 0 replies; 15+ messages in thread
From: Eric Wheeler @ 2017-07-28 21:55 UTC (permalink / raw)
  To: lvm-devel

On Fri, 28 Jul 2017, Zdenek Kabelac wrote:

> On 28.7.2017 at 21:22, Eric Wheeler wrote:
> > Hello,
> > 
> > Is there an option to batch LVM operations?
> > 
> > For example, I would like to delete 100 thin snapshots without updating
> > the vgmeta 100 times.
> 
> You could possibly use --select feature to handle all removals with
> just one lvremove command.

Interesting.  Does it do a single vgmeta operation?

> > Same thing with lvcreate, lvresize, etc.
> > 
> > It would be neat if `lvm`'s readline prompt could take multiple commands
> > via stdin and commit them once, even at the expense of longer lock times.
> > 
> > Is there already support for this in some form?
> 
> At this moment we do not have an optimization to do more than one removal at
> a time - however, if LVs are 'inactive' before lvremove, it should be
> relatively quick.
> 
> However, 'lvm2' was not designed for ultra-fast manipulation - i.e. you are
> typically not creating/removing LVs so quickly.
> 
> Removal of individual thin LVs also takes time, since kernel metadata needs
> to be updated. So I'd not expect millisecond timings.

In this example, you can see the difference between using LVM and driving 
dm-thin directly:

## These are the devid's from vgcfgbackup:

/dev/data/2017-07-28_13-53-57_manual_foobar_1		11276 
/dev/data/2017-07-28_13-54-14_manual_foobar_1		11277

## Both volumes are first deactivated:

]# lvchange -an /dev/data/2017-07-28_13-54-14_manual_foobar_1 /dev/data/2017-07-28_13-53-57_manual_foobar_1

## Direct removal via dmsetup:

]# time dmsetup message data-pool0-tpool 0 delete 11276
real	0m0.009s
user	0m0.001s
sys	0m0.001s

## Removal by lvremove:
]# time lvremove /dev/data/2017-07-28_13-54-14_manual_foobar_1
  Logical volume "2017-07-28_13-54-14_manual_foobar_1" successfully removed

real	0m1.606s
user	0m1.120s
sys	0m0.120s


## Quite a few volumes on this VG, so metadata manipulation is the 
## bottleneck, I think:

]# lvs -o lv_name --noheading data | wc -l
4953

--
Eric Wheeler



> 
> 
> Regards
> 
> Zdenek
> 
> 




* Batch/queue LVM operations
  2017-07-28 19:22 Batch/queue LVM operations Eric Wheeler
  2017-07-28 19:43 ` Zdenek Kabelac
@ 2017-07-31  9:01 ` Germano Percossi
  2017-07-31  9:19   ` Zdenek Kabelac
  1 sibling, 1 reply; 15+ messages in thread
From: Germano Percossi @ 2017-07-31  9:01 UTC (permalink / raw)
  To: lvm-devel

Hi,

Not sure if it is an option for you, but the Python bindings
allow you to instantiate a VG object that will hold the
lock; you can then issue all the commands you want before
flushing the metadata at the very end.

Cheers,
Germano

On 07/28/2017 08:22 PM, Eric Wheeler wrote:
> Hello,
> 
> Is there an option to batch LVM operations?  
> 
> For example, I would like to delete 100 thin snapshots without updating 
> the vgmeta 100 times.
> 
> Same thing with lvcreate, lvresize, etc.  
> 
> It would be neat if `lvm`'s readline prompt could take multiple commands
> via stdin and commit them once, even at the expense of longer lock times.
> 
> Is there already support for this in some form?
> 
> 
> --
> Eric Wheeler
> 
> --
> lvm-devel mailing list
> lvm-devel at redhat.com
> https://www.redhat.com/mailman/listinfo/lvm-devel
> 




* Batch/queue LVM operations
  2017-07-31  9:01 ` Germano Percossi
@ 2017-07-31  9:19   ` Zdenek Kabelac
  2017-07-31  9:22     ` Germano Percossi
  2017-08-01 17:20     ` Eric Wheeler
  0 siblings, 2 replies; 15+ messages in thread
From: Zdenek Kabelac @ 2017-07-31  9:19 UTC (permalink / raw)
  To: lvm-devel

On 31.7.2017 at 11:01, Germano Percossi wrote:
> Hi,
> 
> Not sure if it is an option for you, but the Python bindings
> allow you to instantiate a VG object that will hold the
> lock and then issue all the commands you want before
> flushing the metadata at the very end.


Hi

Please avoid using these Python bindings or lvm2app.
Those two things are no longer being developed and are mostly left
there purely for backward-compatibility reasons. No new app should ever use
them - and old apps will simply face lots of trouble...

It's in general unsupportable for lvm2 this way (the API was badly designed,
and we even figured out later that we really can't support it).
And it's even less usable for working around locking across a bunch of
commands...

What is now being tested is D-Bus integration - but this still has a long way
to go. So whenever you can, please use the lvm2 commands, which are the
documented and well-tested API.


Regards

Zdenek




* Batch/queue LVM operations
  2017-07-31  9:19   ` Zdenek Kabelac
@ 2017-07-31  9:22     ` Germano Percossi
  2017-08-01 17:20     ` Eric Wheeler
  1 sibling, 0 replies; 15+ messages in thread
From: Germano Percossi @ 2017-07-31  9:22 UTC (permalink / raw)
  To: lvm-devel

OK, good to know.

It was in non-production code, anyway.

Thanks,
Germano

On 07/31/2017 10:19 AM, Zdenek Kabelac wrote:
> On 31.7.2017 at 11:01, Germano Percossi wrote:
>> Hi,
>>
>> Not sure if it is an option for you, but the Python bindings
>> allow you to instantiate a VG object that will hold the
>> lock and then issue all the commands you want before
>> flushing the metadata at the very end.
> 
> 
> Hi
> 
> Please avoid using these Python bindings or lvm2app.
> Those two things are no longer being developed and are mostly left
> there purely for backward-compatibility reasons. No new app should
> ever use them - and old apps will simply face lots of trouble...
> 
> It's in general unsupportable for lvm2 this way (the API was badly designed,
> and we even figured out later that we really can't support it).
> And it's even less usable for working around locking across a bunch of
> commands...
> 
> What is now being tested is D-Bus integration - but this still has a long
> way to go. So whenever you can, please use the lvm2 commands, which are
> the documented and well-tested API.
> 
> 
> Regards
> 
> Zdenek




* Batch/queue LVM operations
  2017-07-28 19:43 ` Zdenek Kabelac
  2017-07-28 21:55   ` Eric Wheeler
@ 2017-07-31 12:16   ` Bryn M. Reeves
  2017-08-01 17:24   ` Eric Wheeler
  2 siblings, 0 replies; 15+ messages in thread
From: Bryn M. Reeves @ 2017-07-31 12:16 UTC (permalink / raw)
  To: lvm-devel

On Fri, Jul 28, 2017 at 09:43:50PM +0200, Zdenek Kabelac wrote:
> On 28.7.2017 at 21:22, Eric Wheeler wrote:
> > Hello,
> > 
> > Is there an option to batch LVM operations?
> > 
> > For example, I would like to delete 100 thin snapshots without updating
> > the vgmeta 100 times.
> 
> You could possibly use --select feature to handle all removals with
> just one lvremove command.

Couldn't you also feed a sequence of commands to the lvm shell ('lvm')?

IIRC that also does (or did) some caching of state between subsequent
invocations in the same shell?
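
For example (a dry run - printf just shows the batch here, and the LV names
are placeholders; in practice you would pipe the same text into 'lvm'):

```shell
# A batch of commands for the interactive 'lvm' shell, printed as a dry run.
# To execute for real (needs root): printf '%s\n' "$batch" | lvm
batch='lvremove -f data/snap_a
lvremove -f data/snap_b
quit'
printf '%s\n' "$batch"
```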

Regards,
Bryn.
 




* Batch/queue LVM operations
  2017-07-31  9:19   ` Zdenek Kabelac
  2017-07-31  9:22     ` Germano Percossi
@ 2017-08-01 17:20     ` Eric Wheeler
  1 sibling, 0 replies; 15+ messages in thread
From: Eric Wheeler @ 2017-08-01 17:20 UTC (permalink / raw)
  To: lvm-devel

On Mon, 31 Jul 2017, Zdenek Kabelac wrote:

> On 31.7.2017 at 11:01, Germano Percossi wrote:
> > Hi,
> > 
> > Not sure if it is an option for you, but the Python bindings
> > allow you to instantiate a VG object that will hold the
> > lock and then issue all the commands you want before
> > flushing the metadata at the very end.
> 
> 
> Hi
> 
> Please avoid using these Python bindings or lvm2app.
> Those two things are no longer being developed and are mostly left
> there purely for backward-compatibility reasons. No new app should ever use
> them - and old apps will simply face lots of trouble...
> 
> It's in general unsupportable for lvm2 this way (the API was badly designed,
> and we even figured out later that we really can't support it).
> And it's even less usable for working around locking across a bunch of
> commands...
> 
> What is now being tested is D-Bus integration - but this still has a long
> way to go. So whenever you can, please use the lvm2 commands, which are
> the documented and well-tested API.

Neat!

Will D-Bus give better batch-processing times?

--
Eric Wheeler

	

> 
> 
> Regards
> 
> Zdenek
> 
> 




* Batch/queue LVM operations
  2017-07-28 19:43 ` Zdenek Kabelac
  2017-07-28 21:55   ` Eric Wheeler
  2017-07-31 12:16   ` Bryn M. Reeves
@ 2017-08-01 17:24   ` Eric Wheeler
  2017-08-01 17:46     ` Zdenek Kabelac
  2 siblings, 1 reply; 15+ messages in thread
From: Eric Wheeler @ 2017-08-01 17:24 UTC (permalink / raw)
  To: lvm-devel

On Fri, 28 Jul 2017, Zdenek Kabelac wrote:

> On 28.7.2017 at 21:22, Eric Wheeler wrote:
> > Hello,
> > 
> > Is there an option to batch LVM operations?
> > 
> > For example, I would like to delete 100 thin snapshots without updating
> > the vgmeta 100 times.
> 
> You could possibly use --select feature to handle all removals with
> just one lvremove command.

I tried --select, but there is a long pause between removals so it is 
clearly doing a metadata operation for each removal.

Is it possible to do something like this:

1. Hold a lock (how?)
2. vgcfgbackup -f /my-backup/file --ignorelockingfailure
3. Modify /my-backup/file (remove LVs)
4. dmsetup message pool 0 delete ID (for each ID)
5. vgcfgrestore --ignorelockingfailure
6. Release lock
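
Sketched as a dry run (the pool device, backup path, and devids are the ones
from earlier in the thread, used here as placeholders; 'run' echoes each
command instead of executing it):

```shell
# Dry-run sketch of the proposed batch-delete procedure.
# Swap the echo for "$@" in run() to actually execute (needs root).
run() { echo "$@"; }

run vgcfgbackup -f /my-backup/file --ignorelockingfailure data
# ... edit /my-backup/file here, removing the LV sections ...
for id in 11276 11277; do                 # devids of the removed LVs
    run dmsetup message data-pool0-tpool 0 delete "$id"
done
run vgcfgrestore -f /my-backup/file data
```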

--
Eric Wheeler



> 
> > 
> > Same thing with lvcreate, lvresize, etc.
> > 
> > It would be neat if `lvm`'s readline prompt could take multiple commands
> > via stdin and commit them once, even at the expense of longer lock times.
> > 
> > Is there already support for this in some form?
> 
> At this moment we do not have an optimization to do more than one removal at
> a time - however, if LVs are 'inactive' before lvremove, it should be
> relatively quick.
> 
> However, 'lvm2' was not designed for ultra-fast manipulation - i.e. you are
> typically not creating/removing LVs so quickly.
> 
> Removal of individual thin LVs also takes time, since kernel metadata needs
> to be updated. So I'd not expect millisecond timings.
> 
> 
> Regards
> 
> Zdenek
> 
> 




* Batch/queue LVM operations
  2017-08-01 17:24   ` Eric Wheeler
@ 2017-08-01 17:46     ` Zdenek Kabelac
  2017-08-01 18:39       ` Zdenek Kabelac
  2017-08-04  0:38       ` Eric Wheeler
  0 siblings, 2 replies; 15+ messages in thread
From: Zdenek Kabelac @ 2017-08-01 17:46 UTC (permalink / raw)
  To: lvm-devel

On 1.8.2017 at 19:24, Eric Wheeler wrote:
> On Fri, 28 Jul 2017, Zdenek Kabelac wrote:
> 
>> On 28.7.2017 at 21:22, Eric Wheeler wrote:
>>> Hello,
>>>
>>> Is there an option to batch LVM operations?
>>>
>>> For example, I would like to delete 100 thin snapshots without updating
>>> the vgmeta 100 times.
>>
>> You could possibly use --select feature to handle all removals with
>> just one lvremove command.
> 
> I tried --select, but there is a long pause between removals so it is
> clearly doing a metadata operation for each removal.
> 
> Is it possible to do something like this:
> 
> 1. Hold a lock (how?)
> 2. vgcfgbackup -f /my-backup/file --ignorelocking-failure
> 3. Modify /my-backup/file (remove LVs)
> 4. dmsetup message pool 0 delete ID (each ID)
> 5. vgcfgrestore --ignorelocking-failure
> 6. Release lock
> 
> --
> Eric Wheeler
> 

Hi

Unfortunately this optimization is currently not possible.

lvm2 strictly works on 1-by-1 logic - since resolving more complex
recovery paths is beyond the capabilities of this tool.

We do plan to 'group' the 'lvremove' operation in the future -
so i.e. if you pass multiple LVs on the cmdline,
sort them by VG and do a single VG commit to remove them all.

But with thin LVs this is a different level, as we would also need to join all
the delete thin-pool transactions together - and we do not have a good
interface to resolve 'what has been deleted and what has been left in case of
error'. So there is some space for improvement ATM - I can see some
possibilities, but it's still in the queue behind bigger fish to hunt...

What we could possibly improve more easily is removing ALL thin LVs from a
pool - such a 'pool reset' could be implemented in a simpler way.

So far the removal or creation of large numbers of LVs was not seen as a
time-critical operation - so we have rather focused on simpler code here.


Regards

Zdenek




* Batch/queue LVM operations
  2017-08-01 17:46     ` Zdenek Kabelac
@ 2017-08-01 18:39       ` Zdenek Kabelac
  2017-08-04  0:59         ` Eric Wheeler
  2017-08-04  0:38       ` Eric Wheeler
  1 sibling, 1 reply; 15+ messages in thread
From: Zdenek Kabelac @ 2017-08-01 18:39 UTC (permalink / raw)
  To: lvm-devel

On 1.8.2017 at 19:46, Zdenek Kabelac wrote:
> On 1.8.2017 at 19:24, Eric Wheeler wrote:
>> On Fri, 28 Jul 2017, Zdenek Kabelac wrote:
>>
>>> On 28.7.2017 at 21:22, Eric Wheeler wrote:
>>>> Hello,
>>>>
>>>> Is there an option to batch LVM operations?
>>>>
>>>> For example, I would like to delete 100 thin snapshots without updating
>>>> the vgmeta 100 times.
>>>
>>> You could possibly use --select feature to handle all removals with
>>> just one lvremove command.
>>
>> I tried --select, but there is a long pause between removals so it is
>> clearly doing a metadata operation for each removal.
>>
>> Is it possible to do something like this:
>>
>> 1. Hold a lock (how?)
>> 2. vgcfgbackup -f /my-backup/file --ignorelocking-failure
>> 3. Modify /my-backup/file (remove LVs)
>> 4. dmsetup message pool 0 delete ID (each ID)
>> 5. vgcfgrestore --ignorelocking-failure
>> 6. Release lock
>>


Also, what is probably a worthwhile exercise:

If you have thin LVs you want to 'remove':

Deactivate all such LVs in one 'lvchange -an' command upfront.
Then, as the next step, remove all such LVs with one 'lvremove' command.

Take a timed 'strace -ttt' and measure how much time is spent in actual
'ioctl' calls, and compare with the remaining time spent in lvm2 processing.

Then summarize the output and your time expectations - and possibly open an
RFE BZ for it.
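
The ioctl portion can be summed straight from the strace log. A sketch,
demonstrated on two sample lines (in practice, feed the real
'strace -Tttt ... 2>&1' output in instead):

```shell
# Each 'strace -T' line ends with the syscall duration in angle brackets,
# e.g. "... = 0 <0.000463>".  Sum those durations for the ioctl lines.
sum=$(printf '%s\n' \
    'ioctl(6, DM_TARGET_MSG, 0x55cfc11040b0) = 0 <0.001845>' \
    'ioctl(6, DM_TARGET_MSG, 0x55cfc11040b0) = 0 <0.000463>' |
    awk -F'<' '/ioctl/ { sub(/>.*/, "", $2); total += $2 }
               END { printf "%.6f", total }')
echo "total ioctl time: ${sum}s"   # -> total ioctl time: 0.002308s
```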

Regards

Zdenek




* Batch/queue LVM operations
  2017-08-01 17:46     ` Zdenek Kabelac
  2017-08-01 18:39       ` Zdenek Kabelac
@ 2017-08-04  0:38       ` Eric Wheeler
  2017-08-14 12:29         ` Zdenek Kabelac
  1 sibling, 1 reply; 15+ messages in thread
From: Eric Wheeler @ 2017-08-04  0:38 UTC (permalink / raw)
  To: lvm-devel

On Tue, 1 Aug 2017, Zdenek Kabelac wrote:

> On 1.8.2017 at 19:24, Eric Wheeler wrote:
> > On Fri, 28 Jul 2017, Zdenek Kabelac wrote:
> > 
> > > On 28.7.2017 at 21:22, Eric Wheeler wrote:
> > > > Hello,
> > > >
> > > > Is there an option to batch LVM operations?
> > > >
> > > > For example, I would like to delete 100 thin snapshots without updating
> > > > the vgmeta 100 times.
> > >
> > > You could possibly use --select feature to handle all removals with
> > > just one lvremove command.
> > 
> > I tried --select, but there is a long pause between removals so it is
> > clearly doing a metadata operation for each removal.
> > 
> > Is it possible to do something like this:
> > 
> > 1. Hold a lock (how?)
> > 2. vgcfgbackup -f /my-backup/file --ignorelocking-failure
> > 3. Modify /my-backup/file (remove LVs)
> > 4. dmsetup message pool 0 delete ID (each ID)
> > 5. vgcfgrestore --ignorelocking-failure
> > 6. Release lock
> > 
> > --
> > Eric Wheeler
> > 
> 
> Hi
> 
> Unfortunately this optimization is currently not possible.

Well, perhaps not by LVM at the moment, but is the procedure sound if I 
were to do this myself and accept any errors while deleting thin volumes?

Can vgcfgrestore be done hot with existing active volumes?

Similarly, could I do this for mass renames with the backup/restore?

--
Eric Wheeler

> 
> lvm2 strictly works on 1-by-1 logic - since resolving more complex
> recovery paths is beyond the capabilities of this tool.
> 
> We do plan to 'group' the 'lvremove' operation in the future -
> so i.e. if you pass multiple LVs on the cmdline,
> sort them by VG and do a single VG commit to remove them all.
> 
> But with thin LVs this is a different level, as we would also need to join
> all the delete thin-pool transactions together - and we do not have a good
> interface to resolve 'what has been deleted and what has been left in case
> of error'. So there is some space for improvement ATM - I can see some
> possibilities, but it's still in the queue behind bigger fish to hunt...
> 
> What we could possibly improve more easily is removing ALL thin LVs
> from a pool - such a 'pool reset' could be implemented in a simpler way.
> 
> So far the removal or creation of large numbers of LVs was not seen as a
> time-critical operation - so we have rather focused on simpler code here.
> 
> 
> Regards
> 
> Zdenek
> 
> 
> 




* Batch/queue LVM operations
  2017-08-01 18:39       ` Zdenek Kabelac
@ 2017-08-04  0:59         ` Eric Wheeler
  2017-08-14 12:36           ` Zdenek Kabelac
  0 siblings, 1 reply; 15+ messages in thread
From: Eric Wheeler @ 2017-08-04  0:59 UTC (permalink / raw)
  To: lvm-devel

On Tue, 1 Aug 2017, Zdenek Kabelac wrote:

> On 1.8.2017 at 19:46, Zdenek Kabelac wrote:
> > On 1.8.2017 at 19:24, Eric Wheeler wrote:
> > > On Fri, 28 Jul 2017, Zdenek Kabelac wrote:
> > >
> > > > On 28.7.2017 at 21:22, Eric Wheeler wrote:
> > > > > Hello,
> > > > >
> > > > > Is there an option to batch LVM operations?
> > > > >
> > > > > For example, I would like to delete 100 thin snapshots without
> > > > > updating
> > > > > the vgmeta 100 times.
> > > >
> > > > You could possibly use --select feature to handle all removals with
> > > > just one lvremove command.
> > >
> > > I tried --select, but there is a long pause between removals so it is
> > > clearly doing a metadata operation for each removal.
> > >
> > > Is it possible to do something like this:
> > >
> > > 1. Hold a lock (how?)
> > > 2. vgcfgbackup -f /my-backup/file --ignorelocking-failure
> > > 3. Modify /my-backup/file (remove LVs)
> > > 4. dmsetup message pool 0 delete ID (each ID)
> > > 5. vgcfgrestore --ignorelocking-failure
> > > 6. Release lock
> > >
> 
> 
> Also, what is probably a worthwhile exercise:
> 
> If you have thin LVs you want to 'remove':
> 
> Deactivate all such LVs in one 'lvchange -an' command upfront.
> Then, as the next step, remove all such LVs with one 'lvremove' command.
> 
> Take a timed 'strace -ttt' and measure how much time is spent in actual
> 'ioctl' calls, and compare with the remaining time spent in lvm2 processing.
> 
> Then summarize the output and your time expectations - and possibly open an
> RFE BZ for it.

Details below, but to summarize:

Total ioctl time:  0.018542 seconds
Total runtime:    11.922 seconds

Is this what you were interested in seeing?  The volumes were deactivated 
first.

-Eric

]# time strace -Tttt lvremove -f \
/dev/data/2017-08-03_17-52-54_manual_ertz_0 \
/dev/data/2017-08-03_17-52-56_manual_ertz_0 \
/dev/data/2017-08-03_17-52-57_manual_ertz_0 \
/dev/data/2017-08-03_17-52-59_manual_ertz_0 \
/dev/data/2017-08-03_17-53-00_manual_ertz_0 \
/dev/data/2017-08-03_17-53-02_manual_ertz_0 \
/dev/data/2017-08-03_17-53-03_manual_ertz_0 2>&1|grep ioctl | tee t.t

1501808046.555959 ioctl(4, BLKGETSIZE64, 107370508288) = 0 <0.000007>
1501808046.922878 ioctl(6, DM_VERSION, 0x55cfc11000a0) = 0 <0.000008>
1501808046.922912 ioctl(6, DM_DEV_STATUS, 0x55cfc11000a0) = -1 ENXIO (No such device or address) <0.000011>
1501808046.923887 ioctl(6, DM_DEV_STATUS, 0x55cfc11000a0) = -1 ENXIO (No such device or address) <0.000021>
1501808047.098661 ioctl(4, BLKBSZGET, 4096) = 0 <0.000005>
1501808047.098677 ioctl(4, BLKPBSZGET, 512) = 0 <0.000004>
1501808047.807246 ioctl(6, DM_DEV_STATUS, 0x55cfc11000a0) = 0 <0.000013>
1501808047.807287 ioctl(6, DM_DEV_STATUS, 0x55cfc11000a0) = 0 <0.000012>
1501808047.807329 ioctl(4, BLKRAGET, 8192) = 0 <0.000005>
1501808047.807662 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = -1 ENXIO (No such device or address) <0.000012>
1501808047.807698 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = 0 <0.000016>
1501808047.807738 ioctl(6, DM_VERSION, 0x55cfc110c750) = 0 <0.000005>
1501808047.807769 ioctl(6, DM_TABLE_DEPS, 0x55cfc110c750) = 0 <0.000011>
1501808047.807807 ioctl(6, DM_TABLE_DEPS, 0x55cfc1110760) = 0 <0.000017>
1501808047.807847 ioctl(6, DM_TABLE_DEPS, 0x55cfc1110760) = 0 <0.000007>
1501808047.808105 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = 0 <0.000012>
1501808047.808139 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = -1 ENXIO (No such device or address) <0.000009>
1501808047.808172 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = -1 ENXIO (No such device or address) <0.000010>
1501808047.808195 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = 0 <0.000009>
1501808047.808223 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = -1 ENXIO (No such device or address) <0.000009>
1501808047.808245 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = -1 ENXIO (No such device or address) <0.000006>
1501808047.808279 ioctl(6, DM_LIST_VERSIONS, 0x55cfc110c750) = 0 <0.000009>
1501808047.808309 ioctl(6, DM_LIST_VERSIONS, 0x55cfc110c750) = 0 <0.000028>
1501808047.808362 ioctl(6, DM_LIST_VERSIONS, 0x55cfc110c750) = 0 <0.000007>
1501808047.808389 ioctl(6, DM_LIST_VERSIONS, 0x55cfc110c750) = 0 <0.000006>
1501808047.808560 ioctl(6, DM_TABLE_STATUS, 0x55cfc110c750) = 0 <0.000011>
1501808047.808606 ioctl(6, DM_TABLE_STATUS, 0x55cfc110c750) = 0 <0.000018>
1501808047.808644 ioctl(6, DM_TABLE_STATUS, 0x55cfc110c750) = 0 <0.000010>
1501808047.814204 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = -1 ENXIO (No such device or address) <0.000013>
1501808047.814245 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = 0 <0.000013>
1501808047.814280 ioctl(6, DM_TABLE_DEPS, 0x55cfc11040b0) = 0 <0.000009>
1501808047.814311 ioctl(6, DM_TABLE_DEPS, 0x55cfc1114760) = 0 <0.000010>
1501808047.814349 ioctl(6, DM_TABLE_DEPS, 0x55cfc1114760) = 0 <0.000008>
1501808047.814734 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = 0 <0.000013>
1501808047.814771 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = -1 ENXIO (No such device or address) <0.000018>
1501808047.814809 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = -1 ENXIO (No such device or address) <0.000009>
1501808047.814840 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = 0 <0.000009>
1501808047.814861 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = -1 ENXIO (No such device or address) <0.000010>
1501808047.814892 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = -1 ENXIO (No such device or address) <0.000007>
1501808047.814918 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = 0 <0.000007>
1501808047.814951 ioctl(6, DM_TABLE_STATUS, 0x55cfc11040b0) = 0 <0.000016>
1501808047.815030 ioctl(6, DM_TARGET_MSG, 0x55cfc11040b0) = 0 <0.001845>
1501808047.816897 ioctl(6, DM_TARGET_MSG, 0x55cfc11040b0) = 0 <0.000463>
1501808047.817379 ioctl(6, DM_TABLE_STATUS, 0x55cfc11040b0) = 0 <0.000010>
1501808047.817435 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = 0 <0.000010>
1501808048.635303 ioctl(6, DM_DEV_STATUS, 0x55cfc11000a0) = -1 ENXIO (No such device or address) <0.000020>
1501808048.636528 ioctl(6, DM_DEV_STATUS, 0x55cfc11000a0) = -1 ENXIO (No such device or address) <0.000009>
1501808049.212219 ioctl(6, DM_DEV_STATUS, 0x55cfc11000a0) = 0 <0.000013>
1501808049.212259 ioctl(6, DM_DEV_STATUS, 0x55cfc11000a0) = 0 <0.000011>
1501808049.212305 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = -1 ENXIO (No such device or address) <0.000021>
1501808049.212349 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = 0 <0.000006>
1501808049.212373 ioctl(6, DM_TABLE_DEPS, 0x55cfc110c750) = 0 <0.000010>
1501808049.212429 ioctl(6, DM_TABLE_DEPS, 0x55cfc1110760) = 0 <0.000019>
1501808049.212482 ioctl(6, DM_TABLE_DEPS, 0x55cfc1110760) = 0 <0.000009>
1501808049.212676 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = 0 <0.000014>
1501808049.212730 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = -1 ENXIO (No such device or address) <0.000011>
1501808049.212766 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = -1 ENXIO (No such device or address) <0.000014>
1501808049.212830 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = 0 <0.000010>
1501808049.212864 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = -1 ENXIO (No such device or address) <0.000014>
1501808049.212927 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = -1 ENXIO (No such device or address) <0.000008>
1501808049.213128 ioctl(6, DM_TABLE_STATUS, 0x55cfc110c750) = 0 <0.000011>
1501808049.213181 ioctl(6, DM_TABLE_STATUS, 0x55cfc110c750) = 0 <0.000010>
1501808049.213225 ioctl(6, DM_TABLE_STATUS, 0x55cfc110c750) = 0 <0.000011>
1501808049.218054 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = -1 ENXIO (No such device or address) <0.000011>
1501808049.218090 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = 0 <0.000019>
1501808049.218123 ioctl(6, DM_TABLE_DEPS, 0x55cfc11040b0) = 0 <0.000009>
1501808049.218151 ioctl(6, DM_TABLE_DEPS, 0x55cfc1cd9a70) = 0 <0.000008>
1501808049.218176 ioctl(6, DM_TABLE_DEPS, 0x55cfc1cd9a70) = 0 <0.000007>
1501808049.218474 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = 0 <0.000016>
1501808049.218532 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = -1 ENXIO (No such device or address) <0.000016>
1501808049.218568 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = -1 ENXIO (No such device or address) <0.000008>
1501808049.218606 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = 0 <0.000018>
1501808049.218641 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = -1 ENXIO (No such device or address) <0.000009>
1501808049.218677 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = -1 ENXIO (No such device or address) <0.000019>
1501808049.218730 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = 0 <0.000005>
1501808049.218752 ioctl(6, DM_TABLE_STATUS, 0x55cfc11040b0) = 0 <0.000023>
1501808049.218807 ioctl(6, DM_TARGET_MSG, 0x55cfc11040b0) = 0 <0.001636>
1501808049.220475 ioctl(6, DM_TARGET_MSG, 0x55cfc11040b0) = 0 <0.000388>
1501808049.220885 ioctl(6, DM_TABLE_STATUS, 0x55cfc11040b0) = 0 <0.000012>
1501808049.220927 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = 0 <0.000013>
1501808050.083300 ioctl(6, DM_DEV_STATUS, 0x55cfc11000a0) = -1 ENXIO (No such device or address) <0.000035>
1501808050.084510 ioctl(6, DM_DEV_STATUS, 0x55cfc11000a0) = -1 ENXIO (No such device or address) <0.000010>
1501808050.712333 ioctl(6, DM_DEV_STATUS, 0x55cfc11000a0) = 0 <0.000014>
1501808050.712401 ioctl(6, DM_DEV_STATUS, 0x55cfc11000a0) = 0 <0.000021>
1501808050.712460 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = -1 ENXIO (No such device or address) <0.000013>
1501808050.712517 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = 0 <0.000009>
1501808050.712584 ioctl(6, DM_TABLE_DEPS, 0x55cfc110c750) = 0 <0.000021>
1501808050.712659 ioctl(6, DM_TABLE_DEPS, 0x55cfc1110760) = 0 <0.000024>
1501808050.712730 ioctl(6, DM_TABLE_DEPS, 0x55cfc1110760) = 0 <0.000019>
1501808050.713024 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = 0 <0.000023>
1501808050.713086 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = -1 ENXIO (No such device or address) <0.000012>
1501808050.713134 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = -1 ENXIO (No such device or address) <0.000011>
1501808050.713186 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = 0 <0.000010>
1501808050.713224 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = -1 ENXIO (No such device or address) <0.000012>
1501808050.713266 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = -1 ENXIO (No such device or address) <0.000008>
1501808050.713461 ioctl(6, DM_TABLE_STATUS, 0x55cfc110c750) = 0 <0.000021>
1501808050.713527 ioctl(6, DM_TABLE_STATUS, 0x55cfc110c750) = 0 <0.000010>
1501808050.713588 ioctl(6, DM_TABLE_STATUS, 0x55cfc110c750) = 0 <0.000021>
1501808050.718627 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = -1 ENXIO (No such device or address) <0.000014>
1501808050.718691 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = 0 <0.000015>
1501808050.718729 ioctl(6, DM_TABLE_DEPS, 0x55cfc11040b0) = 0 <0.000010>
1501808050.718775 ioctl(6, DM_TABLE_DEPS, 0x55cfc1114760) = 0 <0.000010>
1501808050.718823 ioctl(6, DM_TABLE_DEPS, 0x55cfc1114760) = 0 <0.000009>
1501808050.719186 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = 0 <0.000026>
1501808050.719266 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = -1 ENXIO (No such device or address) <0.000010>
1501808050.719301 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = -1 ENXIO (No such device or address) <0.000030>
1501808050.719373 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = 0 <0.000010>
1501808050.719431 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = -1 ENXIO (No such device or address) <0.000012>
1501808050.719504 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = -1 ENXIO (No such device or address) <0.000008>
1501808050.719539 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = 0 <0.000007>
1501808050.719598 ioctl(6, DM_TABLE_STATUS, 0x55cfc11040b0) = 0 <0.000017>
1501808050.719646 ioctl(6, DM_TARGET_MSG, 0x55cfc11040b0) = 0 <0.001759>
1501808050.721446 ioctl(6, DM_TARGET_MSG, 0x55cfc11040b0) = 0 <0.000378>
1501808050.721855 ioctl(6, DM_TABLE_STATUS, 0x55cfc11040b0) = 0 <0.000012>
1501808050.721909 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = 0 <0.000009>
1501808051.469219 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000011>
1501808051.470353 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000009>
1501808052.018442 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = 0 <0.000032>
1501808052.018509 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = 0 <0.000022>
1501808052.018556 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000021>
1501808052.018607 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = 0 <0.000015>
1501808052.018644 ioctl(6, DM_TABLE_DEPS, 0x55cfc11020b0) = 0 <0.000019>
1501808052.018678 ioctl(6, DM_TABLE_DEPS, 0x55cfc1114760) = 0 <0.000008>
1501808052.018706 ioctl(6, DM_TABLE_DEPS, 0x55cfc1114760) = 0 <0.000017>
1501808052.018931 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = 0 <0.000012>
1501808052.018956 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000008>
1501808052.018987 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000010>
1501808052.019011 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = 0 <0.000008>
1501808052.019031 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000009>
1501808052.019053 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000006>
1501808052.019181 ioctl(6, DM_TABLE_STATUS, 0x55cfc11020b0) = 0 <0.000011>
1501808052.019226 ioctl(6, DM_TABLE_STATUS, 0x55cfc11020b0) = 0 <0.000007>
1501808052.019257 ioctl(6, DM_TABLE_STATUS, 0x55cfc11020b0) = 0 <0.000009>
1501808052.025155 ioctl(6, DM_DEV_STATUS, 0x55cfc1114760) = -1 ENXIO (No such device or address) <0.000014>
1501808052.025229 ioctl(6, DM_DEV_STATUS, 0x55cfc1114760) = 0 <0.000015>
1501808052.025269 ioctl(6, DM_TABLE_DEPS, 0x55cfc1114760) = 0 <0.000012>
1501808052.025306 ioctl(6, DM_TABLE_DEPS, 0x55cfc1118770) = 0 <0.000036>
1501808052.025395 ioctl(6, DM_TABLE_DEPS, 0x55cfc1118770) = 0 <0.000010>
1501808052.025767 ioctl(6, DM_DEV_STATUS, 0x55cfc1114760) = 0 <0.000013>
1501808052.025821 ioctl(6, DM_DEV_STATUS, 0x55cfc1114760) = -1 ENXIO (No such device or address) <0.000010>
1501808052.025859 ioctl(6, DM_DEV_STATUS, 0x55cfc1114760) = -1 ENXIO (No such device or address) <0.000010>
1501808052.025919 ioctl(6, DM_DEV_STATUS, 0x55cfc1114760) = 0 <0.000010>
1501808052.025952 ioctl(6, DM_DEV_STATUS, 0x55cfc1114760) = -1 ENXIO (No such device or address) <0.000011>
1501808052.025993 ioctl(6, DM_DEV_STATUS, 0x55cfc1114760) = -1 ENXIO (No such device or address) <0.000008>
1501808052.026054 ioctl(6, DM_DEV_STATUS, 0x55cfc1114760) = 0 <0.000007>
1501808052.026084 ioctl(6, DM_TABLE_STATUS, 0x55cfc1114760) = 0 <0.000015>
1501808052.026152 ioctl(6, DM_TARGET_MSG, 0x55cfc1114760) = 0 <0.001656>
1501808052.027845 ioctl(6, DM_TARGET_MSG, 0x55cfc1114760) = 0 <0.000475>
1501808052.028351 ioctl(6, DM_TABLE_STATUS, 0x55cfc1114760) = 0 <0.000011>
1501808052.028408 ioctl(6, DM_DEV_STATUS, 0x55cfc110c750) = 0 <0.000011>
1501808052.767823 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000014>
1501808052.768984 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000009>
1501808053.390845 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = 0 <0.000014>
1501808053.390912 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = 0 <0.000013>
1501808053.390960 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000012>
1501808053.391015 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = 0 <0.000008>
1501808053.391061 ioctl(6, DM_TABLE_DEPS, 0x55cfc11020b0) = 0 <0.000023>
1501808053.391128 ioctl(6, DM_TABLE_DEPS, 0x55cfc1114760) = 0 <0.000009>
1501808053.391177 ioctl(6, DM_TABLE_DEPS, 0x55cfc1114760) = 0 <0.000009>
1501808053.391377 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = 0 <0.000013>
1501808053.391467 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000020>
1501808053.391515 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000012>
1501808053.391556 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = 0 <0.000010>
1501808053.391593 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000010>
1501808053.391633 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000008>
1501808053.391830 ioctl(6, DM_TABLE_STATUS, 0x55cfc11020b0) = 0 <0.000012>
1501808053.391883 ioctl(6, DM_TABLE_STATUS, 0x55cfc11020b0) = 0 <0.000010>
1501808053.391931 ioctl(6, DM_TABLE_STATUS, 0x55cfc11020b0) = 0 <0.000011>
1501808053.396991 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = -1 ENXIO (No such device or address) <0.000014>
1501808053.397059 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = 0 <0.000014>
1501808053.397098 ioctl(6, DM_TABLE_DEPS, 0x55cfc14d9a60) = 0 <0.000011>
1501808053.397134 ioctl(6, DM_TABLE_DEPS, 0x55cfc14dda70) = 0 <0.000010>
1501808053.397177 ioctl(6, DM_TABLE_DEPS, 0x55cfc14dda70) = 0 <0.000018>
1501808053.397604 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = 0 <0.000014>
1501808053.397650 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = -1 ENXIO (No such device or address) <0.000009>
1501808053.397695 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = -1 ENXIO (No such device or address) <0.000020>
1501808053.397744 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = 0 <0.000009>
1501808053.397785 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = -1 ENXIO (No such device or address) <0.000011>
1501808053.397851 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = -1 ENXIO (No such device or address) <0.000008>
1501808053.397895 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = 0 <0.000008>
1501808053.397952 ioctl(6, DM_TABLE_STATUS, 0x55cfc14d9a60) = 0 <0.000025>
1501808053.398050 ioctl(6, DM_TARGET_MSG, 0x55cfc14d9a60) = 0 <0.001691>
1501808053.399794 ioctl(6, DM_TARGET_MSG, 0x55cfc14d9a60) = 0 <0.000418>
1501808053.400253 ioctl(6, DM_TABLE_STATUS, 0x55cfc14d9a60) = 0 <0.000012>
1501808053.400337 ioctl(6, DM_DEV_STATUS, 0x55cfc14d1a50) = 0 <0.000020>
1501808054.199028 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000023>
1501808054.200182 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000011>
1501808054.804640 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = 0 <0.000014>
1501808054.804693 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = 0 <0.000012>
1501808054.804748 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000023>
1501808054.804806 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = 0 <0.000018>
1501808054.804846 ioctl(6, DM_TABLE_DEPS, 0x55cfc11020b0) = 0 <0.000021>
1501808054.804925 ioctl(6, DM_TABLE_DEPS, 0x55cfc1114760) = 0 <0.000009>
1501808054.804962 ioctl(6, DM_TABLE_DEPS, 0x55cfc1114760) = 0 <0.000009>
1501808054.805173 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = 0 <0.000013>
1501808054.805235 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000009>
1501808054.805278 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000010>
1501808054.805360 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = 0 <0.000019>
1501808054.805429 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000020>
1501808054.805477 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000007>
1501808054.805639 ioctl(6, DM_TABLE_STATUS, 0x55cfc11020b0) = 0 <0.000017>
1501808054.805689 ioctl(6, DM_TABLE_STATUS, 0x55cfc11020b0) = 0 <0.000009>
1501808054.805730 ioctl(6, DM_TABLE_STATUS, 0x55cfc11020b0) = 0 <0.000016>
1501808054.811447 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = -1 ENXIO (No such device or address) <0.000014>
1501808054.811483 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = 0 <0.000012>
1501808054.811508 ioctl(6, DM_TABLE_DEPS, 0x55cfc14d9a60) = 0 <0.000010>
1501808054.811543 ioctl(6, DM_TABLE_DEPS, 0x55cfc14dda70) = 0 <0.000008>
1501808054.811564 ioctl(6, DM_TABLE_DEPS, 0x55cfc14dda70) = 0 <0.000009>
1501808054.811948 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = 0 <0.000012>
1501808054.811984 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = -1 ENXIO (No such device or address) <0.000018>
1501808054.812022 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = -1 ENXIO (No such device or address) <0.000009>
1501808054.812051 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = 0 <0.000009>
1501808054.812077 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = -1 ENXIO (No such device or address) <0.000013>
1501808054.812109 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = -1 ENXIO (No such device or address) <0.000006>
1501808054.812136 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = 0 <0.000006>
1501808054.812158 ioctl(6, DM_TABLE_STATUS, 0x55cfc14d9a60) = 0 <0.000017>
1501808054.812200 ioctl(6, DM_TARGET_MSG, 0x55cfc14d9a60) = 0 <0.001640>
1501808054.813863 ioctl(6, DM_TARGET_MSG, 0x55cfc14d9a60) = 0 <0.000491>
1501808054.814372 ioctl(6, DM_TABLE_STATUS, 0x55cfc14d9a60) = 0 <0.000010>
1501808054.814419 ioctl(6, DM_DEV_STATUS, 0x55cfc14d1a50) = 0 <0.000022>
1501808055.709981 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000011>
1501808055.711084 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000007>
1501808056.293720 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = 0 <0.000014>
1501808056.293796 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = 0 <0.000023>
1501808056.293854 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000013>
1501808056.293908 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = 0 <0.000008>
1501808056.293953 ioctl(6, DM_TABLE_DEPS, 0x55cfc11020b0) = 0 <0.000020>
1501808056.294020 ioctl(6, DM_TABLE_DEPS, 0x55cfc1114760) = 0 <0.000010>
1501808056.294075 ioctl(6, DM_TABLE_DEPS, 0x55cfc1114760) = 0 <0.000009>
1501808056.294261 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = 0 <0.000013>
1501808056.294333 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000011>
1501808056.294371 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000011>
1501808056.294438 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = 0 <0.000010>
1501808056.294477 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000011>
1501808056.294537 ioctl(6, DM_DEV_STATUS, 0x55cfc11020b0) = -1 ENXIO (No such device or address) <0.000007>
1501808056.294689 ioctl(6, DM_TABLE_STATUS, 0x55cfc11020b0) = 0 <0.000021>
1501808056.294755 ioctl(6, DM_TABLE_STATUS, 0x55cfc11020b0) = 0 <0.000009>
1501808056.294797 ioctl(6, DM_TABLE_STATUS, 0x55cfc11020b0) = 0 <0.000011>
1501808056.300074 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = -1 ENXIO (No such device or address) <0.000014>
1501808056.300148 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = 0 <0.000014>
1501808056.300187 ioctl(6, DM_TABLE_DEPS, 0x55cfc14d9a60) = 0 <0.000012>
1501808056.300226 ioctl(6, DM_TABLE_DEPS, 0x55cfc14dda70) = 0 <0.000010>
1501808056.300272 ioctl(6, DM_TABLE_DEPS, 0x55cfc14dda70) = 0 <0.000019>
1501808056.300772 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = 0 <0.000013>
1501808056.300817 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = -1 ENXIO (No such device or address) <0.000010>
1501808056.300862 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = -1 ENXIO (No such device or address) <0.000020>
1501808056.300911 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = 0 <0.000009>
1501808056.300952 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = -1 ENXIO (No such device or address) <0.000011>
1501808056.301018 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = -1 ENXIO (No such device or address) <0.000008>
1501808056.301062 ioctl(6, DM_DEV_STATUS, 0x55cfc14d9a60) = 0 <0.000007>
1501808056.301134 ioctl(6, DM_TABLE_STATUS, 0x55cfc14d9a60) = 0 <0.000026>
1501808056.301208 ioctl(6, DM_TARGET_MSG, 0x55cfc14d9a60) = 0 <0.001999>
1501808056.303254 ioctl(6, DM_TARGET_MSG, 0x55cfc14d9a60) = 0 <0.000636>
1501808056.303932 ioctl(6, DM_TABLE_STATUS, 0x55cfc14d9a60) = 0 <0.000015>
1501808056.303997 ioctl(6, DM_DEV_STATUS, 0x55cfc14d1a50) = 0 <0.000011>

real	0m11.922s
user	0m6.312s
sys	0m4.428s

]# perl -lne '/<([0-9.]+)>/ and $a += $1; END { print "Total: $a" }' < t.t
Total: 0.018542


--
Eric Wheeler



> 
> Regards
> 
> Zdenek
> 




* Batch/queue LVM operations
  2017-08-04  0:38       ` Eric Wheeler
@ 2017-08-14 12:29         ` Zdenek Kabelac
  0 siblings, 0 replies; 15+ messages in thread
From: Zdenek Kabelac @ 2017-08-14 12:29 UTC (permalink / raw)
  To: lvm-devel

Dne 4.8.2017 v 02:38 Eric Wheeler napsal(a):
> On Tue, 1 Aug 2017, Zdenek Kabelac wrote:
> 
>> Dne 1.8.2017 v 19:24 Eric Wheeler napsal(a):
>>> On Fri, 28 Jul 2017, Zdenek Kabelac wrote:
>>>
>>>> Dne 28.7.2017 v 21:22 Eric Wheeler napsal(a):
>>>>> Hello,
>>>>>
>>>>> Is there an option to batch LVM operations?
>>>>>
>>>>> For example, I would like to delete 100 thin snapshots without updating
>>>>> the vgmeta 100 times.
>>>>
>>>> You could possibly use --select feature to handle all removals with
>>>> just one lvremove command.
>>>
>>> I tried --select, but there is a long pause between removals so it is
>>> clearly doing a metadata operation for each removal.
>>>
>>> Is it possible to do something like this:
>>>
>>> 1. Hold a lock (how?)
>>> 2. vgcfgbackup -f /my-backup/file --ignorelocking-failure
>>> 3. Modify /my-backup/file (remove LVs)
>>> 4. dmsetup message pool 0 delete ID (each ID)
>>> 5. vgcfgrestore --ignorelocking-failure
>>> 6. Release lock
>>>
>>> --
>>> Eric Wheeler
>>>
>>
>> Hi
>>
>> Unfortunately this optimization is currently not possible.
> 
> Well, perhaps not by LVM at the moment, but is the procedure sound if I
> were to do this myself and accept any errors while deleting thin volumes?

Hi

Yes - surely you can write a direct sequence of commands to update lvm2 metadata
in a different way -   lvm2 does nothing else than 'read-lvm2-metadata',
'do-some-work'  and 'write-updated-lvm2-metadata' - although between those steps
there is a lot of validation checking which operations are allowed at each
individual step, to minimize the risk of any data loss.
lvm2 however targets 'universal' sequences - so it gets sub-optimal for 
certain workloads.
If you can ensure that no LVs get activated while your 'hacked' tooling is at 
work, then you can surely write a replacement for lvm2's work via some
'dmsetup' & 'awk/sed' combined with vgcfgbackup/vgcfgrestore.
But as said - you will lose a large chunk of the extra protection - as long as 
you know what price you pay here...

> Can vgcfgrestore be done hot with existing active volumes?
> 
> Similarly, could I do this for mass renames with the backup/restore?

Yep - rename is typically easier to do - here lvm2 is able to protect you on 
'vgcfgrestore' at least against bad names..
Just be aware that if you rename LVs to some 'existing' live/running names you 
can make such LVs inaccessible/unusable for lvm2.
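For illustration, a dry-run sketch of such a mass rename through the backup/restore path - the VG name `data` and the `snap2017_`/`archive_` prefixes are hypothetical, and `DRY_RUN=1` only prints the commands:

```shell
#!/bin/sh
# Dry-run sketch of a mass LV rename via vgcfgbackup/vgcfgrestore.
# The VG name and LV name prefixes are hypothetical placeholders.
DRY_RUN=1
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

run vgcfgbackup -f /tmp/data.vg data               # dump VG metadata
run sed -i 's/snap2017_/archive_/g' /tmp/data.vg   # rewrite matching LV names
run vgcfgrestore -f /tmp/data.vg data              # restore edited metadata
```

Per the warning above, check first that none of the new names collides with a live/running LV before letting this run for real.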

Regards

Zdenek




* Batch/queue LVM operations
  2017-08-04  0:59         ` Eric Wheeler
@ 2017-08-14 12:36           ` Zdenek Kabelac
  0 siblings, 0 replies; 15+ messages in thread
From: Zdenek Kabelac @ 2017-08-14 12:36 UTC (permalink / raw)
  To: lvm-devel

Dne 4.8.2017 v 02:59 Eric Wheeler napsal(a):
> On Tue, 1 Aug 2017, Zdenek Kabelac wrote:
> 
>> Dne 1.8.2017 v 19:46 Zdenek Kabelac napsal(a):
>> Also what is probably worthy exercise:
>>
>> If you have thinLVs you want to 'remove' -
>>
>> Deactivate all such LVs in one 'lvchange -an' command upfront.
>> Then as next step remove all such LVs with one  'lvremove' command.
>>
>> Take a timed 'strace -ttt' and measure how much time is spend in actual
>> 'ioctl' and compare with the remaining time spend in lvm2 processing.
>>
>> Then summarize  the output and your time-expectation  - possibly open RFE BZ
>> for it.
> 
> Details below, but to summarize:
> 
> Total ioctl time:  0.018542 seconds.
> Total runtime:    11.922s
> 
> Is this what you were interested in seeing?  Volumes were deactivated first.
> 
> -Eric
> 
> ]# time strace -Tttt lvremove -f \
> /dev/data/2017-08-03_17-52-54_manual_ertz_0 \
> /dev/data/2017-08-03_17-52-56_manual_ertz_0 \
> /dev/data/2017-08-03_17-52-57_manual_ertz_0 \
> /dev/data/2017-08-03_17-52-59_manual_ertz_0 \
> /dev/data/2017-08-03_17-53-00_manual_ertz_0 \
> /dev/data/2017-08-03_17-53-02_manual_ertz_0 \
> /dev/data/2017-08-03_17-53-03_manual_ertz_0 2>&1|grep ioctl | tee t.t
> 
> 1501808047.817379 ioctl(6, DM_TABLE_STATUS, 0x55cfc11040b0) = 0 <0.000010>
> 1501808047.817435 ioctl(6, DM_DEV_STATUS, 0x55cfc11040b0) = 0 <0.000010>
> 1501808048.635303 ioctl(6, DM_DEV_STATUS, 0x55cfc11000a0) = -1 ENXIO (No such device or address) <0.000020>
> 1501808048.636528 ioctl(6, DM_DEV_STATUS, 0x55cfc11000a0) = -1 ENXIO (No such device or address) <0.000009>

Looking at this trace - it seems to be stuck at a full commit point, since lvm2 
is using a flushing status - this has been (I assume) mostly fixed with recent
lvm2 releases (unless I've overlooked something).

Can you please try to do a similar trace with lvm2 >= 2.02.172 - these versions 
should avoid using 'status' with flushing.
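One way to see the flushing cost directly is to compare a plain status query with a non-flushing one via dmsetup - assuming the pool's dm name is `data-pool` (a placeholder) and that your dmsetup build supports the `--noflush` option; `DRY_RUN=1` only echoes the commands:

```shell
#!/bin/sh
# Compare flushing vs. non-flushing thin-pool status queries.
# "data-pool" is a hypothetical device name; DRY_RUN=1 only echoes.
DRY_RUN=1
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

# A plain status on a thin pool may wait for outstanding I/O to be
# committed; --noflush returns immediately with possibly stale counters.
run dmsetup status data-pool
run dmsetup status --noflush data-pool
```

Wrapping each command in `time` (or in a `strace -T` run as above) should show the flushing variant dominating the wall-clock difference.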

Regards

Zdenek





end of thread, other threads:[~2017-08-14 12:36 UTC | newest]

Thread overview: 15+ messages
-- links below jump to the message on this page --
2017-07-28 19:22 Batch/queue LVM operations Eric Wheeler
2017-07-28 19:43 ` Zdenek Kabelac
2017-07-28 21:55   ` Eric Wheeler
2017-07-31 12:16   ` Bryn M. Reeves
2017-08-01 17:24   ` Eric Wheeler
2017-08-01 17:46     ` Zdenek Kabelac
2017-08-01 18:39       ` Zdenek Kabelac
2017-08-04  0:59         ` Eric Wheeler
2017-08-14 12:36           ` Zdenek Kabelac
2017-08-04  0:38       ` Eric Wheeler
2017-08-14 12:29         ` Zdenek Kabelac
2017-07-31  9:01 ` Germano Percossi
2017-07-31  9:19   ` Zdenek Kabelac
2017-07-31  9:22     ` Germano Percossi
2017-08-01 17:20     ` Eric Wheeler
