* [NOTES] thinp zeroing
@ 2012-02-22 14:14 Joe Thornber
  2012-02-22 14:25 ` [PATCH] [dm-thin] experimental erase-log patch Joe Thornber
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Joe Thornber @ 2012-02-22 14:14 UTC (permalink / raw)
  To: dm-devel

* Requirements

  There are two distinct requirements for zeroing applicable to the
  thin-provisioning target:

  - Avoid data leaks (DATA_LEAK)

    Consumers of thin devices may be using the same pool.  eg, two
    different vm guests provisioned from the same pool.  We must
    ensure that no data from one thin device appears on another in
    newly provisioned areas.

  - Provide guarantees about the presence/absence of sensitive data (ERASE)

    eg, when decommissioning a guest vm, the host (running the pool)
    wishes to guarantee that no data from that guest remains on the
    data device.
  
* Implementing DATA_LEAK

  Currently the DATA_LEAK requirement is enforced by zeroing every
  newly provisioned thin device block.  This zeroing can often be
  elided if the write io that triggers the provisioning completely
  covers the block in question.  The zeroing can be turned off at the
  pool level if data leaks are not a concern (eg, a desktop system).
  Already upstream.
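
  As a rough illustration only (not the actual dm-thin code; the names
  and the userland typedef are made up for this sketch), the elision
  decision boils down to something like:

      #include <stdbool.h>
      #include <stdint.h>

      typedef uint64_t sector_t;      /* userland stand-in for the kernel type */

      /*
       * Returns true when a newly provisioned block must be zeroed
       * before the triggering write is issued.
       */
      static bool must_zero_new_block(bool pool_zeroes_new_blocks,
                                      sector_t bio_sectors,
                                      sector_t block_sectors)
      {
              /* Pool-level switch: data leaks are not a concern here. */
              if (!pool_zeroes_new_blocks)
                      return false;

              /*
               * The triggering write covers the whole block, so any
               * stale data is about to be overwritten anyway.
               */
              if (bio_sectors == block_sectors)
                      return false;

              return true;
      }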

* Implementing ERASE

** Erase on deallocation

  The ERASE requirement is more difficult.  Zeroing data blocks when
  they are deallocated (ie. their ref count drops to zero after
  deleting the device) sounds like a good approach, but this
  introduces certain difficulties, mainly:

  - To retain our crash recovery properties, the zeroing cannot occur
    until after the next commit.  Extra on-disk metadata would need to
    be stored to keep track of the blocks that need zeroing.  A
    commit would trigger a storm of io; currently the cost of a
    copy-on-write exception is paid immediately by the io that
    triggers it.  Building up delayed work like this makes it very
    hard to give performance estimates.

** Erasing from userland

   Zeroing all unshared blocks when deleting a thin device will create
   a lot of io; I'd much rather this were managed from userland.  I
   really don't want message ioctls that take many minutes to
   complete.

   The 'held_root' facility allows userland to read a snapshot of the
   metadata in a live pool.  [Note: this is a snapshot of the
   metadata, not a snapshot of the data].  The following steps (see
   the sketch after the list) will implement ERASE in userland:

   - Deactivate all thin volumes that you wish to erase.

     Failure to deactivate would mean the mappings for the thins could
     be out of date by the time userland reads them.  There is no
     mechanism for enforcing this at the device-mapper level, but
     userland can easily do it (eg, lvm2 already has a comprehensive
     locking scheme that will handle this).  It should also be pointed
     out that if you try to erase a volume while you're still using it,
     you are an idiot.

   - Grab a 'held_root'

   - Read the mappings for all of the thins you wish to erase.

   - Work out which data blocks are used exclusively by this subset of
     thins.

   - Write zeroes across these blocks.

   - Send a thin-delete message to the pool for each thin.
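
   A minimal userland sketch of the zeroing step (reading the mappings
   from the held_root is not shown; the mapping array and helper names
   are hypothetical, not part of any existing tool):

       #include <stdint.h>
       #include <stdlib.h>
       #include <unistd.h>

       /* One mapping extracted from the held_root metadata snapshot. */
       struct mapping {
               uint64_t data_block;
               int owner_is_victim;    /* owner is one of the thins being erased */
       };

       /*
        * A data block may be zeroed only if every mapping that
        * references it belongs to a thin that is being erased.
        */
       static int exclusively_owned(const struct mapping *maps,
                                    size_t nr_maps, uint64_t block)
       {
               size_t i;

               for (i = 0; i < nr_maps; i++)
                       if (maps[i].data_block == block &&
                           !maps[i].owner_is_victim)
                               return 0;
               return 1;
       }

       /* Write zeroes across one data block on the pool's data device. */
       static int zero_block(int data_fd, uint64_t block,
                             unsigned block_sectors)
       {
               static char zeroes[1 << 16];
               uint64_t offset = block * block_sectors * 512ull;
               uint64_t remaining = (uint64_t)block_sectors * 512ull;

               while (remaining) {
                       size_t n = remaining < sizeof(zeroes) ?
                               remaining : sizeof(zeroes);
                       ssize_t w = pwrite(data_fd, zeroes, n, offset);

                       if (w < 0)
                               return -1;
                       offset += w;
                       remaining -= w;
               }
               return 0;
       }

   Once the exclusive blocks have been zeroed (and synced), the
   thin-delete messages can be sent to the pool.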

** Crash recovery

   If we crash during a copy-on-write or provision operation, the
   recovery process needs to zero those new, but not yet committed,
   blocks.  This requires the introduction of an 'erase log' to the
   metadata format.  This log would need to be committed *before* the
   copy/overwrite operation could proceed.

   I've implemented such an erase log [see patch] to get an idea of
   the performance overhead.  Testing in ideal conditions (ie. large
   writes that trigger many provision/copy operations, so costs can
   be amortised), we see a 25% slowdown in the throughput of
   provision/copy operations.  Better than I feared.

   We can improve the performance significantly by observing that it's
   harmless to do too much zeroing of unprovisioned blocks on
   recovery.  This suggests a scheme similar to a mirror log, where we
   mark regions of the data volume that have pending provision/copy
   operations.  When we recover we just zero *all* unallocated blocks
   in these marked regions.  This will result in fewer commits, since
   newly allocated blocks will commonly come from the same region and
   so avoid the need for a commit.  [TODO: get a proof of concept
   patch together.]
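
   To make the idea concrete, a rough sketch of the region bitmap (the
   region size and the structure are assumptions for illustration; a
   real version would live in dm-thin-metadata and be committed through
   the transaction manager):

       #include <stdbool.h>
       #include <stdint.h>

       #define BLOCKS_PER_REGION 1024          /* assumption: one bit per 1024 data blocks */
       #define BITS_PER_WORD (8 * sizeof(unsigned long))

       struct erase_regions {
               unsigned long *bits;            /* one bit per region of the data device */
       };

       /*
        * Mark the region containing 'data_block' as having pending
        * provision/copy work.  Returns true if the bit was newly set,
        * i.e. a metadata commit is needed before the io can proceed;
        * further allocations from the same region need no commit.
        */
       static bool mark_region_for_block(struct erase_regions *er,
                                         uint64_t data_block)
       {
               uint64_t region = data_block / BLOCKS_PER_REGION;
               unsigned long mask = 1ul << (region % BITS_PER_WORD);
               unsigned long *word = &er->bits[region / BITS_PER_WORD];

               if (*word & mask)
                       return false;
               *word |= mask;
               return true;
       }

   On recovery, every unallocated block in a marked region would be
   zeroed and the bitmap cleared.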

** Discards

   DISCARDs *must* result in data being zeroed.  Some devices set the
   discard_zeroes_data flag, but this is not good enough; you cannot
   use that flag as a guarantee that the data no longer exists on the
   disk.  So real zeroing must occur.  I suggest we write a separate
   target that zeroes data just before discarding it, and stack it
   under the thin-pool.  The performance impact of this will be
   significant, to the point that we may wish to turn off discard
   within the fs and instead do periodic tidy-ups.
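
   For reference, the semantics such a target would provide are what
   this userland fragment does by hand (real syscalls, but only an
   illustration of "zero first, discard second"; it is not the
   proposed target):

       #include <linux/fs.h>
       #include <stdint.h>
       #include <sys/ioctl.h>
       #include <unistd.h>

       static int zero_then_discard(int fd, uint64_t offset, uint64_t len)
       {
               static char zeroes[1 << 16];
               uint64_t range[2] = { offset, len };
               uint64_t done = 0;

               /*
                * Physically overwrite the range first, so the guarantee
                * does not rest on discard_zeroes_data.
                */
               while (done < len) {
                       size_t n = (len - done) < sizeof(zeroes) ?
                               (len - done) : sizeof(zeroes);
                       ssize_t w = pwrite(fd, zeroes, n, offset + done);

                       if (w < 0)
                               return -1;
                       done += w;
               }
               if (fsync(fd) < 0)
                       return -1;

               /* Now the discard is purely a hint to the device below. */
               return ioctl(fd, BLKDISCARD, range);
       }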

** Avoid redundant copying

   The calculation that says whether a block is shared or not (and thus
   liable to suffer a copy-on-write exception) is an approximation.
   It sometimes says something is shared when it isn't, which causes
   us a problem wrt ERASE.  To avoid leaving orphaned copies of data,
   we must either tighten up the sharing detection [patch in the
   works], or zero the old block (via discard).

** Summary of work items [0/5]

   Too much for the Linux 3.4 timeframe.

   - [ ] Change the shared block detection [1 day, worth doing anyway]

   - [ ] Bitmap based erase log [1 week]

   - [ ] Recovery tool that zeroes unallocated blocks in dirty regions [1 week]

   - [ ] Implement the discard-really-zeroes target [1 month]

   - [ ] Write thin_erase userland tool [1 week]

   - [ ] Update lvm2 tools [3 months]


* [PATCH] [dm-thin] experimental erase-log patch.
  2012-02-22 14:14 [NOTES] thinp zeroing Joe Thornber
@ 2012-02-22 14:25 ` Joe Thornber
  2012-02-22 16:30 ` [NOTES] thinp zeroing Spelic
  2012-02-23  2:49 ` Mike Snitzer
  2 siblings, 0 replies; 7+ messages in thread
From: Joe Thornber @ 2012-02-22 14:25 UTC (permalink / raw)
  To: dm-devel; +Cc: Joe Thornber

This is just to let me get an idea of the costs involved with
implementing an erase log.
---
 drivers/md/dm-thin-metadata.c |   56 +++++++++++++++++++++++++++++++++++++++++
 drivers/md/dm-thin-metadata.h |    6 ++++
 drivers/md/dm-thin.c          |   38 ++++++++++++++++++++-------
 3 files changed, 90 insertions(+), 10 deletions(-)

diff --git a/drivers/md/dm-thin-metadata.c b/drivers/md/dm-thin-metadata.c
index f3ba61d..c392068 100644
--- a/drivers/md/dm-thin-metadata.c
+++ b/drivers/md/dm-thin-metadata.c
@@ -124,6 +124,11 @@ struct thin_disk_superblock {
 	__le32 compat_flags;
 	__le32 compat_ro_flags;
 	__le32 incompat_flags;
+
+	/*
+	 * Holds blocks that will need to be zeroed as part of recovery from a crash.
+	 */
+	__le64 erase_root;
 } __packed;
 
 struct disk_device_details {
@@ -170,11 +175,17 @@ struct dm_pool_metadata {
 	 */
 	struct dm_btree_info details_info;
 
+	/*
+	 * Blocks that need erasing on recovery.
+	 */
+	struct dm_btree_info erase_info;
+
 	struct rw_semaphore root_lock;
 	uint32_t time;
 	int need_commit;
 	dm_block_t root;
 	dm_block_t details_root;
+	dm_block_t erase_root;
 	struct list_head thin_devices;
 	uint64_t trans_id;
 	unsigned long flags;
@@ -465,6 +476,14 @@ static int init_pmd(struct dm_pool_metadata *pmd,
 	pmd->details_info.value_type.dec = NULL;
 	pmd->details_info.value_type.equal = NULL;
 
+	pmd->erase_info.tm = tm;
+	pmd->erase_info.levels = 1;
+	pmd->erase_info.value_type.context = NULL;
+	pmd->erase_info.value_type.size = sizeof(__le64);
+	pmd->erase_info.value_type.inc = NULL;
+	pmd->erase_info.value_type.dec = NULL;
+	pmd->erase_info.value_type.equal = NULL;
+
 	pmd->root = 0;
 
 	init_rwsem(&pmd->root_lock);
@@ -735,6 +754,12 @@ struct dm_pool_metadata *dm_pool_metadata_open(struct block_device *bdev,
 		goto bad;
 	}
 
+	r = dm_btree_empty(&pmd->erase_info, &pmd->erase_root);
+	if (r < 0) {
+		DMERR("couldn't create erase journal");
+		goto bad;
+	}
+
 	pmd->flags = 0;
 	pmd->need_commit = 1;
 	r = dm_pool_commit_metadata(pmd);
@@ -1332,6 +1357,37 @@ int dm_pool_alloc_data_block(struct dm_pool_metadata *pmd, dm_block_t *result)
 	return r;
 }
 
+int dm_pool_mark_erase(struct dm_pool_metadata *pmd, dm_block_t b)
+{
+	int r;
+	uint64_t key = b;
+	__le64 value = cpu_to_le64(pmd->time);
+
+	down_write(&pmd->root_lock);
+	r = dm_btree_insert(&pmd->erase_info, pmd->erase_root,
+			    &key, &value, &pmd->erase_root);
+	if (!r)
+		pmd->need_commit = 1;
+	up_write(&pmd->root_lock);
+
+	return r;
+}
+
+int dm_pool_clear_erase(struct dm_pool_metadata *pmd, dm_block_t b)
+{
+	int r;
+	uint64_t key = b;
+
+	down_write(&pmd->root_lock);
+	r = dm_btree_remove(&pmd->erase_info, pmd->erase_root,
+			    &key, &pmd->erase_root);
+	if (!r)
+		pmd->need_commit = 1;
+	up_write(&pmd->root_lock);
+
+	return r;
+}
+
 int dm_pool_commit_metadata(struct dm_pool_metadata *pmd)
 {
 	int r;
diff --git a/drivers/md/dm-thin-metadata.h b/drivers/md/dm-thin-metadata.h
index cfc7d0b..42a4268 100644
--- a/drivers/md/dm-thin-metadata.h
+++ b/drivers/md/dm-thin-metadata.h
@@ -126,6 +126,12 @@ int dm_thin_insert_block(struct dm_thin_device *td, dm_block_t block,
 int dm_thin_remove_block(struct dm_thin_device *td, dm_block_t block);
 
 /*
+ * Erase log
+ */
+int dm_pool_mark_erase(struct dm_pool_metadata *pmd, dm_block_t b);
+int dm_pool_clear_erase(struct dm_pool_metadata *pmd, dm_block_t b);
+
+/*
  * Queries.
  */
 int dm_thin_get_highest_mapped_block(struct dm_thin_device *td,
diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
index 0da0db2..7536db1 100644
--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -539,6 +539,7 @@ struct pool {
 	struct bio_list deferred_flush_bios;
 	struct list_head prepared_mappings;
 	struct list_head prepared_discards;
+	struct list_head copy_jobs;
 
 	struct bio_list retry_on_resume_list;
 
@@ -809,14 +810,6 @@ static void overwrite_endio(struct bio *bio, int err)
 /*----------------------------------------------------------------*/
 
 /*
- * Workqueue.
- */
-
-/*
- * Prepared mapping jobs.
- */
-
-/*
  * This sends the bios in the cell back to the deferred_bios list.
  */
 static void cell_defer(struct thin_c *tc, struct cell *cell,
@@ -878,6 +871,13 @@ static void process_prepared_mapping(struct new_mapping *m)
 		return;
 	}
 
+	r = dm_pool_clear_erase(tc->pool->pmd, m->data_block);
+	if (r) {
+		DMERR("dm_pool_clear_erase() failed");
+		cell_error(m->cell);
+		return;
+	}
+
 	/*
 	 * Release any bios held while the block was being provisioned.
 	 * If we are processing a write bio that completely covers the block,
@@ -996,6 +996,13 @@ static void schedule_copy(struct thin_c *tc, dm_block_t virt_block,
 	if (!ds_add_work(&pool->shared_read_ds, &m->list))
 		m->quiesced = 1;
 
+	r = dm_pool_mark_erase(pool->pmd, data_dest);
+	if (r) {
+		mempool_free(m, pool->mapping_pool);
+		DMERR("dm_pool_mark_erase() failed");
+		cell_error(cell);
+	}
+
 	/*
 	 * IO to pool_dev remaps to the pool target's data_dev.
 	 *
@@ -1007,8 +1014,15 @@ static void schedule_copy(struct thin_c *tc, dm_block_t virt_block,
 		h->overwrite_mapping = m;
 		m->bio = bio;
 		save_and_set_endio(bio, &m->saved_bi_end_io, overwrite_endio);
-		remap_and_issue(tc, bio, data_dest);
+		remap(tc, bio, data_dest);
+		bio_list_add(&pool->deferred_flush_bios, bio);
+
 	} else {
+		/*
+		 * FIXME: this shouldn't be done until after the commit of
+		 * the erase state change.  No point doing it now, for this
+		 * little experiment.  Just use small block sizes.
+		 */
 		struct dm_io_region from, to;
 
 		from.bdev = origin->bdev;
@@ -1062,6 +1076,8 @@ static void schedule_zero(struct thin_c *tc, dm_block_t virt_block,
 	m->err = 0;
 	m->bio = NULL;
 
+	dm_pool_mark_erase(pool->pmd, data_block);
+
 	/*
 	 * If the whole block of data is being overwritten or we are not
 	 * zeroing pre-existing data, we can issue the bio immediately.
@@ -1075,7 +1091,8 @@ static void schedule_zero(struct thin_c *tc, dm_block_t virt_block,
 		h->overwrite_mapping = m;
 		m->bio = bio;
 		save_and_set_endio(bio, &m->saved_bi_end_io, overwrite_endio);
-		remap_and_issue(tc, bio, data_block);
+		remap(tc, bio, data_block);
+		bio_list_add(&pool->deferred_flush_bios, bio);
 
 	} else {
 		int r;
@@ -1087,6 +1104,7 @@ static void schedule_zero(struct thin_c *tc, dm_block_t virt_block,
 
 		r = dm_kcopyd_zero(pool->copier, 1, &to, 0, copy_complete, m);
 		if (r < 0) {
+			dm_pool_clear_erase(pool->pmd, data_block);
 			mempool_free(m, pool->mapping_pool);
 			DMERR("dm_kcopyd_zero() failed");
 			cell_error(cell);
-- 
1.7.5.4


* Re: [NOTES] thinp zeroing
  2012-02-22 14:14 [NOTES] thinp zeroing Joe Thornber
  2012-02-22 14:25 ` [PATCH] [dm-thin] experimental erase-log patch Joe Thornber
@ 2012-02-22 16:30 ` Spelic
  2012-02-22 18:04   ` Zdenek Kabelac
  2012-02-23 15:43   ` Joe Thornber
  2012-02-23  2:49 ` Mike Snitzer
  2 siblings, 2 replies; 7+ messages in thread
From: Spelic @ 2012-02-22 16:30 UTC (permalink / raw)
  To: dm-devel, thornber

On 02/22/12 15:14, Joe Thornber wrote:
> * Requirements
>
>    There are two distinct requirements for zeroing applicable to the
>    thin-provisioning target:
>
>    - Avoid data leaks (DATA_LEAK)
>
> * Implementing DATA_LEAK
>
>
> * Implementing ERASE
>
> ** Erase on deallocation
>
>
> ** Erasing from userland.
>
> ** Crash recovery
>
> ** Discards
>
>

Hello,
thanks for all your hard work regarding thinp.

I was thinking: why don't you implement a bitmap that takes care of 
emulating the discard functionality?

This would take care of all your issues above, and also be great for a 
lot of use cases even outside thinp (*).

Every read would first hit the bitmap; if the bitmap says that the 
region has been discarded, thinp would return zeroes to the requestor.

When a discard comes, you first set the bits in the
discard-emulation bitmap, and then also pass the discard to the layers
below.  Passing the discard below has no user-visible effect (because
discard is already implemented in thinp); however, it is still
advantageous to pass it to lower layers, because there might be SSDs
below thinp that can benefit from the discard.

I suggest a bitmap granularity of 4 kbytes per bit.  If a discard comes
that is not 4K aligned (that would be a mistake of the layers above, at
least a "performance" mistake), you set only the bits that are
completely covered by the discard.  You are then left with at most two
misaligned edges, one at the beginning and one at the end of the
discarded region, and for those you will need to write zeroes to the
layers below.  So in the worst case you need to set a few bits and then
perform two small writes of zeroes, but in most cases you just set a
few bits.
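
Just to illustrate the edge handling I mean (a sketch assuming a 4K
chunk size; nothing here exists in thinp today):

    #include <stdint.h>

    #define CHUNK 4096ull

    struct discard_split {
            int have_chunks;                        /* any fully covered chunks? */
            uint64_t first_chunk, last_chunk;       /* inclusive: bits to set */
            uint64_t head_zero_len, tail_zero_len;  /* edges needing zero writes */
    };

    /* Split a byte-range discard into bitmap bits plus misaligned edges. */
    static struct discard_split split_discard(uint64_t offset, uint64_t len)
    {
            struct discard_split s = {0};
            uint64_t end = offset + len;
            uint64_t aligned_start = ((offset + CHUNK - 1) / CHUNK) * CHUNK;
            uint64_t aligned_end = (end / CHUNK) * CHUNK;

            if (aligned_start < aligned_end) {
                    s.have_chunks = 1;
                    s.first_chunk = aligned_start / CHUNK;
                    s.last_chunk = aligned_end / CHUNK - 1;
                    s.head_zero_len = aligned_start - offset;
                    s.tail_zero_len = end - aligned_end;
            } else {
                    /* Too small/misaligned to cover a chunk: zero it all. */
                    s.head_zero_len = len;
            }
            return s;
    }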

(*) Remember that most MD RAID levels do not pass discards below, so we
-raid users- cannot really see zeroes where a discard has been issued.
That's a problem when we want to back up a virtual machine disk image
(a DM volume) from the outside: non-zeroes don't compress well; it's as
if we back up the deleted files every time.


* Re: [NOTES] thinp zeroing
  2012-02-22 16:30 ` [NOTES] thinp zeroing Spelic
@ 2012-02-22 18:04   ` Zdenek Kabelac
  2012-02-23 15:43   ` Joe Thornber
  1 sibling, 0 replies; 7+ messages in thread
From: Zdenek Kabelac @ 2012-02-22 18:04 UTC (permalink / raw)
  To: dm-devel

On 22.2.2012 17:30, Spelic wrote:
> On 02/22/12 15:14, Joe Thornber wrote:
>> * Requirements
>>
>>    There are two distinct requirements for zeroing applicable to the
>>    thin-provisioning target:
>>
>>    - Avoid data leaks (DATA_LEAK)
>>
>> * Implementing DATA_LEAK
>>
>>
>> * Implementing ERASE
>>
>> ** Erase on deallocation
>>
>>
>> ** Erasing from userland.
>>
>> ** Crash recovery
>>
>> ** Discards
>>
>>
> 
> Hello
> thanks for all your hard work regarding thinp
> 
> I was thinking: why don't you implement a bitmap that takes care of emulating
> the discard functionality?
> 
> This would take care of all your issues above, and also be great for a lot of
> use cases even outside thinp (*).
> 
> Every read would first hit the bitmap; if the bitmap says that the region has
> been discarded, thinp would return zeroes to the requestor.
> 
> When a discard comes, you first set the bits in the discard-emulation-bitmap,
> and then also pass the discard to layers below. Passing the discard below has
> no user-visible effects (because discard is already implemented in thinp)
> however it is still advantageous to pass it to lower layers because there
> might be SSDs below thinp which can benefit from the discard.
> 
> I suggest a bitmap of 4kbytes / bit, and then if a discard comes that is not
> 4K aligned (that would be a mistake of the above layers, at least a
> "performance" mistake), you set the bitmaps only for the bits which are
> completely covered by the discard, and then you are left with at most two
> misaligned edges one at the beginning and one at the end of the discard
> region, and for those you will need to write zeroes to the layers below. So in
> the worst case you need to set a few bits and then perform two small writes of
> zeroes, but in most cases you just set a few bits.
> 
> (*) remember that most MD Raid levels do not pass discards below, so we -raid
> users- cannot really see zeroes where discard has been triggered. That's a
> problem when we want to backup a virtual machine disk image (DM volume) from
> the outside: non-zeroes don't compress well; it's like we backup deleted files
> everytime.
> 

For backups there will be a much better solution, one which will be able
to get the list of provisioned blocks for a device (in the case of a
snapshot, the diffs).

IMHO bitmaps are expensive, as you may observe with certain extX
operations.

Zdenek


* Re: thinp zeroing
  2012-02-22 14:14 [NOTES] thinp zeroing Joe Thornber
  2012-02-22 14:25 ` [PATCH] [dm-thin] experimental erase-log patch Joe Thornber
  2012-02-22 16:30 ` [NOTES] thinp zeroing Spelic
@ 2012-02-23  2:49 ` Mike Snitzer
  2012-02-23 15:47   ` Joe Thornber
  2 siblings, 1 reply; 7+ messages in thread
From: Mike Snitzer @ 2012-02-23  2:49 UTC (permalink / raw)
  To: Joe Thornber; +Cc: dm-devel

Nice write-up.  It is concerning that we have to go to such lengths, but
I don't see a way around it without limiting who can consume thinp.

On Wed, Feb 22 2012 at  9:14am -0500,
Joe Thornber <thornber@redhat.com> wrote:

> ** Discards
> 
>    DISCARDs *must* result in data being zeroed.  Some devices set the
>    discard_zeroes_data flag.  This is not good enough; you cannot use
>    this flag as a guarantee that the data no longer exists on the
>    disk.  So real zeroing must occur.  I suggest we write a separate
>    target that zeroes data just before discarding it, and stack it
>    under the thin-pool.  The performance impact of this will be
>    significant; to the point that we may wish to turn discard within
>    the fs off; instead doing periodic tidy-ups.

...

> ** Summary of work items [0/5]
> 
>    - [ ] Implement the discard-really-zeroes target [1 month]

I don't think it'll take a month.  Probably a focused week to 2 weeks.

I can develop this target before jumping into the HSM target (unless
you'd rather I start in on HSM asap).


* Re: [NOTES] thinp zeroing
  2012-02-22 16:30 ` [NOTES] thinp zeroing Spelic
  2012-02-22 18:04   ` Zdenek Kabelac
@ 2012-02-23 15:43   ` Joe Thornber
  1 sibling, 0 replies; 7+ messages in thread
From: Joe Thornber @ 2012-02-23 15:43 UTC (permalink / raw)
  To: Spelic; +Cc: dm-devel

On Wed, Feb 22, 2012 at 05:30:04PM +0100, Spelic wrote:
> On 02/22/12 15:14, Joe Thornber wrote:
> >* Requirements
> >
> >   There are two distinct requirements for zeroing applicable to the
> >   thin-provisioning target:
> >
> >   - Avoid data leaks (DATA_LEAK)
> >
> >* Implementing DATA_LEAK
> >
> >
> >* Implementing ERASE
> >
> >** Erase on deallocation
> >
> >
> >** Erasing from userland.
> >
> >** Crash recovery
> >
> >** Discards
> >
> >
> 
> Hello
> thanks for all your hard work regarding thinp
> 
> I was thinking: why don't you implement a bitmap that takes care of
> emulating the discard functionality?
> 
> This would take care of all your issues above, and also be great for
> a lot of use cases even outside thinp (*).
> 
> Every read would first hit the bitmap; if the bitmap says that the
> region has been discarded, thinp would return zeroes to the
> requestor.

Already done: the first thing a discard bio does is remove the mappings
from the btree.  It's then (optionally) handed down to the underlying
device.

> I suggest a bitmap of 4kbytes / bit, and then if a discard comes
> that is not 4K aligned (that would be a mistake of the above layers,
> at least a "performance" mistake), you set the bitmaps only for the
> bits which are completely covered by the discard, and then you are
> left with at most two misaligned edges one at the beginning and one
> at the end of the discard region, and for those you will need to
> write zeroes to the layers below. So in the worst case you need to
> set a few bits and then perform two small writes of zeroes, but in
> most cases you just set a few bits.

Things like SSDs that set the discard_zeroes_data flag are only saying
that they'll return zeroes if you read from this area.  This is
different from promising the data has been overwritten with zeroes on
the disk.  Hence the need in the ERASE case for real writes across the
discarded area.

- Joe


* Re: thinp zeroing
  2012-02-23  2:49 ` Mike Snitzer
@ 2012-02-23 15:47   ` Joe Thornber
  0 siblings, 0 replies; 7+ messages in thread
From: Joe Thornber @ 2012-02-23 15:47 UTC (permalink / raw)
  To: Mike Snitzer; +Cc: dm-devel

On Wed, Feb 22, 2012 at 09:49:18PM -0500, Mike Snitzer wrote:
> Nice write-up.  It is concerning that we have to go to such lengths but
> I don't see a way around it without limiting who can consume thinp.
> 
> On Wed, Feb 22 2012 at  9:14am -0500,
> Joe Thornber <thornber@redhat.com> wrote:
> 
> > ** Discards
> > 
> >    DISCARDs *must* result in data being zeroed.  Some devices set the
> >    discard_zeroes_data flag.  This is not good enough; you cannot use
> >    this flag as a guarantee that the data no longer exists on the
> >    disk.  So real zeroing must occur.  I suggest we write a separate
> >    target that zeroes data just before discarding it, and stack it
> >    under the thin-pool.  The performance impact of this will be
> >    significant; to the point that we may wish to turn discard within
> >    the fs off; instead doing periodic tidy-ups.
> 
> ...
> 
> > ** Summary of work items [0/5]
> > 
> >    - [ ] Implement the discard-really-zeroes target [1 month]
> 
> I don't think it'll take a month.  Probably a focused week to 2 weeks.

By the time you include getting it through agk, I think a month is
highly optimistic.

> I can develop this target before jumping in to the HSM target (unless
> you'd rather I start in on HSM asap).

HSM is the priority, please.  ERASE can wait until later.  Plus, given
the development effort and performance impact, I think there are other
alternatives we should consider (such as using dm-crypt on each thin
and throwing away the keys when you delete it).

- Joe

