* Re: [PATCH] dm-raid: add RAID discard support
2014-10-01 2:56 ` NeilBrown
@ 2014-10-01 11:13 ` Heinz Mauelshagen
2014-10-03 1:12 ` Martin K. Petersen
2014-10-01 13:32 ` Mike Snitzer
` (3 subsequent siblings)
4 siblings, 1 reply; 28+ messages in thread
From: Heinz Mauelshagen @ 2014-10-01 11:13 UTC (permalink / raw)
To: device-mapper development; +Cc: Shaohua Li, Martin K. Petersen
On 10/01/2014 04:56 AM, NeilBrown wrote:
> On Wed, 24 Sep 2014 13:02:28 +0200 Heinz Mauelshagen <heinzm@redhat.com>
> wrote:
>
>> Martin,
>>
>> thanks for the good explanation of the state of the discard union.
>> Do you have an ETA for the 'zeroout, deallocate' ... support you mentioned?
>>
>> I was planning to have a followup patch for dm-raid supporting a dm-raid
>> table
>> line argument to prohibit discard passdown.
>>
>> In view of the fuzzy field situation wrt SSD firmware and
>> discard_zeroes_data support related to RAID4/5/6, we need that
>> upstream together with the initial patch.
>>
>> That 'no_discard_passdown' table line can be added to dm-raid RAID4/5/6
>> table
>> lines to avoid possible data corruption but can be avoided on RAID1/10
>> table lines,
>> because the latter are not suffering from any discard_zeroes_data flaw.
>>
>>
>> Neil,
>>
>> are you going to disable discards in RAID4/5/6 shortly
>> or rather go with your bitmap solution?
> Can I just close my eyes and hope it goes away?
I'd think that it actually would.
Sane deployments are unlikely to base MD RAID4/5/6 mappings on
ancient/unreliable SSDs.
Your "md_mod.willing_to_risk_discard=Y" mentioned below is the proper
safeguard against the remaining insane odds.
BTW:
it seems we're only referring to SSD discard_zeroes_data issues so far?
Can we be sure that SCSI unmap with the TPRZ bit set will always return
zeroes on reads for arbitrary storage devices?
>
> The idea of a bitmap of uninitialised areas is not a short-term solution.
> But I'm not really keen on simply disabling discard for RAID4/5/6 either. It
> would mean that people with good sensible hardware wouldn't be able to use
> it properly.
>
> I would really rather that discard_zeroes_data were only set on devices where
> it was actually true. Then it wouldn't be my problem any more.
>
> Maybe I could do a loud warning
> "Not enabling DISCARD on RAID5 because we cannot trust committees.
> Set "md_mod.willing_to_risk_discard=Y" if your devices read discarded
> sectors as zeros"
Yes, we are thinking the same for dm-raid (like a "force_discard"
parameter).
>
> and add an appropriate module parameter......
>
> While we are on the topic, maybe I should write down my thoughts about the
> bitmap thing in case someone wants to contribute.
>
> There are 3 states that a 'region' can be in:
> 1- known to be in-sync
> 2- possibly not in sync, but it should be
> 3- probably not in sync, contains no valuable data.
>
> A read from '3' should return zeroes.
> A write to '3' should change the region to be '2'. It could either
> write zeros before allowing the write to start, or it could just start
> a normal resync.
>
> Here is a question: if a region has been discarded, are we guaranteed that
> reads are at least stable. i.e. if I read twice will I definitely get the
> same value?
We can never be sure, given the state of the standards and the
implementation hassles involved.
Thus RAID system OEMs have whitelists for supported drives.
>
> It would be simplest to store all the states in the one bitmap. i.e. have a
> new version of the current bitmap (which distinguishes between '1' and '2')
> which allows the 3rd state. Probably just 2 bits per region, though we could
> squeeze state for 20 regions into 32 bits if we really wanted :-)
>
> Then we set the bitmap to all '3's when it is created and set regions to
> '3' when discarding.
>
> One problem with this approach is that it forces the same region size.
> Regions for the current bitmap can conveniently be 10s of Megabytes. For
> state '3', we ideally want one region per stripe.
> For 4TB drives with a 512K chunk size (and smaller is not uncommon) that is
> 8 million regions which is a megabyte of bits.
>
> I guess that isn't all that much. One problem with small regions for the
> 1/2 distinction is that we end up updating the bitmap more often, but
> that could be avoided by setting multiple bits at once.
> i.e. keep the internal counters with a larger granularity, and when a
> counter (of outstanding writes) becomes non-zero, set quite a few bits in
> the bitmap.
How close would the bitmap update patterns of that new approach come
to what we have with the current one?
Will the difference be negligible, or will it impact performance?
>
> So that is probably what I would do:
> - new version for bitmap which has 2 bits per region and encodes 3 states
> - bitmap granularity matches chunk size (by default)
> - decouple region size of bitmap from region size for internal 'dirty'
> accounting
> - write to a 'state 3' region sets it to 'state 2' and kicks off resync
> - 'discard' sets state to '3'.
Makes sense, but we'd rather avoid it for now in favor of the
aforementioned "willing" option.
Heinz
>
> NeilBrown
>
>
> --
> dm-devel mailing list
> dm-devel@redhat.com
> https://www.redhat.com/mailman/listinfo/dm-devel
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH] dm-raid: add RAID discard support
2014-10-01 11:13 ` Heinz Mauelshagen
@ 2014-10-03 1:12 ` Martin K. Petersen
From: Martin K. Petersen @ 2014-10-03 1:12 UTC (permalink / raw)
To: Heinz Mauelshagen
Cc: device-mapper development, Shaohua Li, Martin K. Petersen
>>>>> "Heinz" == Heinz Mauelshagen <heinzm@redhat.com> writes:
Heinz> BTW: we're referring to SSD discard_zeroes_data issues only so
Heinz> far it seems? Can we be sure that SCSI unmap with the TPRZ bit
Heinz> set will always return zeroes on reads for arbitrary storage
Heinz> devices?
Only if we used WRITE SAME to discard.
--
Martin K. Petersen Oracle Linux Engineering
* Re: dm-raid: add RAID discard support
2014-10-01 2:56 ` NeilBrown
2014-10-01 11:13 ` Heinz Mauelshagen
@ 2014-10-01 13:32 ` Mike Snitzer
2014-10-01 23:34 ` NeilBrown
2014-10-01 16:00 ` [PATCH] " Andrey Kuzmin
From: Mike Snitzer @ 2014-10-01 13:32 UTC (permalink / raw)
To: NeilBrown
Cc: Heinz Mauelshagen, device-mapper development, Shaohua Li,
Martin K. Petersen
On Tue, Sep 30 2014 at 10:56pm -0400,
NeilBrown <neilb@suse.de> wrote:
> On Wed, 24 Sep 2014 13:02:28 +0200 Heinz Mauelshagen <heinzm@redhat.com>
> wrote:
>
> >
> > Martin,
> >
> > thanks for the good explanation of the state of the discard union.
> > Do you have an ETA for the 'zeroout, deallocate' ... support you mentioned?
> >
> > I was planning to have a followup patch for dm-raid supporting a dm-raid
> > table
> > line argument to prohibit discard passdown.
> >
> > In view of the fuzzy field situation wrt SSD firmware and
> > discard_zeroes_data support related to RAID4/5/6, we need that
> > upstream together with the initial patch.
> >
> > That 'no_discard_passdown' table line can be added to dm-raid RAID4/5/6
> > table
> > lines to avoid possible data corruption but can be avoided on RAID1/10
> > table lines,
> > because the latter are not suffering from any discard_zeroes_data flaw.
> >
> >
> > Neil,
> >
> > are you going to disable discards in RAID4/5/6 shortly
> > or rather go with your bitmap solution?
>
> Can I just close my eyes and hope it goes away?
>
> The idea of a bitmap of uninitialised areas is not a short-term solution.
> But I'm not really keen on simply disabling discard for RAID4/5/6 either. It
> would mean that people with good sensible hardware wouldn't be able to use
> it properly.
>
> I would really rather that discard_zeroes_data were only set on devices where
> it was actually true. Then it wouldn't be my problem any more.
>
> Maybe I could do a loud warning
> "Not enabling DISCARD on RAID5 because we cannot trust committees.
> Set "md_mod.willing_to_risk_discard=Y" if your devices read discarded
> sectors as zeros"
>
> and add an appropriate module parameter......
I had the same thought and would be happy with this too. I was going to
update Heinz's patch to have the same default off but allow user to
enable:
https://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=for-next&id=8e0cff64f35971135a6de7907bbc8c3a010aff8f
But I'd love to just follow what you arrive at with MD (using the same
name for the module param in dm-raid).
I'm open to getting this done now and included in 3.18 if you are.
Mike
* Re: dm-raid: add RAID discard support
2014-10-01 13:32 ` Mike Snitzer
@ 2014-10-01 23:34 ` NeilBrown
2014-10-02 1:31 ` Mike Snitzer
From: NeilBrown @ 2014-10-01 23:34 UTC (permalink / raw)
To: Mike Snitzer
Cc: Heinz Mauelshagen, device-mapper development, Shaohua Li,
Martin K. Petersen
On Wed, 1 Oct 2014 09:32:37 -0400 Mike Snitzer <snitzer@redhat.com> wrote:
> On Tue, Sep 30 2014 at 10:56pm -0400,
> NeilBrown <neilb@suse.de> wrote:
>
> > On Wed, 24 Sep 2014 13:02:28 +0200 Heinz Mauelshagen <heinzm@redhat.com>
> > wrote:
> >
> > >
> > > Martin,
> > >
> > > thanks for the good explanation of the state of the discard union.
> > > Do you have an ETA for the 'zeroout, deallocate' ... support you mentioned?
> > >
> > > I was planning to have a followup patch for dm-raid supporting a dm-raid
> > > table
> > > line argument to prohibit discard passdown.
> > >
> > > In view of the fuzzy field situation wrt SSD firmware and
> > > discard_zeroes_data support related to RAID4/5/6, we need that
> > > upstream together with the initial patch.
> > >
> > > That 'no_discard_passdown' table line can be added to dm-raid RAID4/5/6
> > > table
> > > lines to avoid possible data corruption but can be avoided on RAID1/10
> > > table lines,
> > > because the latter are not suffering from any discard_zeroes_data flaw.
> > >
> > >
> > > Neil,
> > >
> > > are you going to disable discards in RAID4/5/6 shortly
> > > or rather go with your bitmap solution?
> >
> > Can I just close my eyes and hope it goes away?
> >
> > The idea of a bitmap of uninitialised areas is not a short-term solution.
> > But I'm not really keen on simply disabling discard for RAID4/5/6 either. It
> > would mean that people with good sensible hardware wouldn't be able to use
> > it properly.
> >
> > I would really rather that discard_zeroes_data were only set on devices where
> > it was actually true. Then it wouldn't be my problem any more.
> >
> > Maybe I could do a loud warning
> > "Not enabling DISCARD on RAID5 because we cannot trust committees.
> > Set "md_mod.willing_to_risk_discard=Y" if your devices read discarded
> > sectors as zeros"
> >
> > and add an appropriate module parameter......
>
> I had the same thought and would be happy with this too. I was going to
> update Heinz's patch to have the same default off but allow user to
> enable:
> https://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=for-next&id=8e0cff64f35971135a6de7907bbc8c3a010aff8f
>
> But I'd love to just follow what you arrive at with MD (using the same
> name for the module param in dm-raid).
>
> I'm open to getting this done now and included in 3.18 if you are.
>
> Mike
How about something like this?
I want to keep it well away from regular API stuff as I hope it is just a
temporary hack until a more general solution can be found and implemented.
Thanks,
NeilBrown
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 183588b11fc1..3ed668c5378c 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -64,6 +64,10 @@
#define cpu_to_group(cpu) cpu_to_node(cpu)
#define ANY_GROUP NUMA_NO_NODE
+static bool devices_handle_discard_safely = false;
+module_param(devices_handle_discard_safely, bool, false);
+MODULE_PARM_DESC(devices_handle_discard_safely,
+ "Set to Y if all devices in array reliably return zeroes on reads from discarded regions");
static struct workqueue_struct *raid5_wq;
/*
* Stripe cache
@@ -6208,7 +6212,7 @@ static int run(struct mddev *mddev)
mddev->queue->limits.discard_granularity = stripe;
/*
* unaligned part of discard request will be ignored, so can't
- * guarantee discard_zerors_data
+ * guarantee discard_zeroes_data
*/
mddev->queue->limits.discard_zeroes_data = 0;
@@ -6233,6 +6237,18 @@ static int run(struct mddev *mddev)
!bdev_get_queue(rdev->bdev)->
limits.discard_zeroes_data)
discard_supported = false;
+ /* Unfortunately, discard_zeroes_data is not currently
+ * a guarantee - just a hint. So we only allow DISCARD
+ * if the sysadmin has confirmed that only safe devices
+ * are in use but setting a module parameter.
+ */
+ if (!devices_handle_discard_safely) {
+ if (discard_supported) {
+ pr_info("md/raid456: discard support disabled due to uncertainty.\n");
+ pr_info("Set raid456.devices_handle_discard_safely=Y to override.\n");
+ }
+ discard_supported = false;
+ }
}
if (discard_supported &&
* Re: dm-raid: add RAID discard support
2014-10-01 23:34 ` NeilBrown
@ 2014-10-02 1:31 ` Mike Snitzer
2014-10-02 2:00 ` NeilBrown
From: Mike Snitzer @ 2014-10-02 1:31 UTC (permalink / raw)
To: NeilBrown
Cc: Heinz Mauelshagen, device-mapper development, Shaohua Li,
Martin K. Petersen
Hi,
On Wed, Oct 01 2014 at 7:34pm -0400,
NeilBrown <neilb@suse.de> wrote:
> On Wed, 1 Oct 2014 09:32:37 -0400 Mike Snitzer <snitzer@redhat.com> wrote:
>
> >
> > I had the same thought and would be happy with this too. I was going to
> > update Heinz's patch to have the same default off but allow user to
> > enable:
> > https://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=for-next&id=8e0cff64f35971135a6de7907bbc8c3a010aff8f
> >
> > But I'd love to just follow what you arrive at with MD (using the same
> > name for the module param in dm-raid).
> >
> > I'm open to getting this done now and included in 3.18 if you are.
> >
> > Mike
>
> How about something like this?
> I want to keep it well away from regular API stuff as I hope it is just a
> temporary hack until a more general solution can be found and implemented.
>
> Thanks,
> NeilBrown
>
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index 183588b11fc1..3ed668c5378c 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -64,6 +64,10 @@
> #define cpu_to_group(cpu) cpu_to_node(cpu)
> #define ANY_GROUP NUMA_NO_NODE
>
> +static bool devices_handle_discard_safely = false;
> +module_param(devices_handle_discard_safely, bool, false);
> +MODULE_PARM_DESC(devices_handle_discard_safely,
> + "Set to Y if all devices in array reliably return zeroes on reads from discarded regions");
> static struct workqueue_struct *raid5_wq;
> /*
> * Stripe cache
> @@ -6208,7 +6212,7 @@ static int run(struct mddev *mddev)
> mddev->queue->limits.discard_granularity = stripe;
> /*
> * unaligned part of discard request will be ignored, so can't
> - * guarantee discard_zerors_data
> + * guarantee discard_zeroes_data
> */
> mddev->queue->limits.discard_zeroes_data = 0;
>
> @@ -6233,6 +6237,18 @@ static int run(struct mddev *mddev)
> !bdev_get_queue(rdev->bdev)->
> limits.discard_zeroes_data)
> discard_supported = false;
> + /* Unfortunately, discard_zeroes_data is not currently
> + * a guarantee - just a hint. So we only allow DISCARD
> + * if the sysadmin has confirmed that only safe devices
> + * are in use but setting a module parameter.
> + */
> + if (!devices_handle_discard_safely) {
> + if (discard_supported) {
> + pr_info("md/raid456: discard support disabled due to uncertainty.\n");
> + pr_info("Set raid456.devices_handle_discard_safely=Y to override.\n");
> + }
> + discard_supported = false;
> + }
> }
>
> if (discard_supported &&
There is a typo in the new block comment above: "are in use but setting
a module parameter". s/but/by/
But thinking further: should this be a per array override? E.g. for DM
this could easily be a dm-raid table parameter. But I know the MD
implementation would likely be more cumbersome (superblock update?).
Though given the (hopefully) temporary nature of this, maybe it isn't
worth it for MD. Maybe be a bit more precise in the MODULE_PARM_DESC
with:
"Set to Y if all devices in each array reliably returns zeroes on reads
from discarded regions" ?
* Re: dm-raid: add RAID discard support
2014-10-02 1:31 ` Mike Snitzer
@ 2014-10-02 2:00 ` NeilBrown
2014-10-02 4:04 ` [dm-devel] " NeilBrown
From: NeilBrown @ 2014-10-02 2:00 UTC (permalink / raw)
To: Mike Snitzer
Cc: Heinz Mauelshagen, device-mapper development, Shaohua Li,
Martin K. Petersen
On Wed, 1 Oct 2014 21:31:36 -0400 Mike Snitzer <snitzer@redhat.com> wrote:
> Hi,
>
> On Wed, Oct 01 2014 at 7:34pm -0400,
> NeilBrown <neilb@suse.de> wrote:
>
> > On Wed, 1 Oct 2014 09:32:37 -0400 Mike Snitzer <snitzer@redhat.com> wrote:
> >
> > >
> > > I had the same thought and would be happy with this too. I was going to
> > > update Heinz's patch to have the same default off but allow user to
> > > enable:
> > > https://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=for-next&id=8e0cff64f35971135a6de7907bbc8c3a010aff8f
> > >
> > > But I'd love to just follow what you arrive at with MD (using the same
> > > name for the module param in dm-raid).
> > >
> > > I'm open to getting this done now and included in 3.18 if you are.
> > >
> > > Mike
> >
> > How about something like this?
> > I want to keep it well away from regular API stuff as I hope it is just a
> > temporary hack until a more general solution can be found and implemented.
> >
> > Thanks,
> > NeilBrown
> >
> > diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> > index 183588b11fc1..3ed668c5378c 100644
> > --- a/drivers/md/raid5.c
> > +++ b/drivers/md/raid5.c
> > @@ -64,6 +64,10 @@
> > #define cpu_to_group(cpu) cpu_to_node(cpu)
> > #define ANY_GROUP NUMA_NO_NODE
> >
> > +static bool devices_handle_discard_safely = false;
> > +module_param(devices_handle_discard_safely, bool, false);
> > +MODULE_PARM_DESC(devices_handle_discard_safely,
> > + "Set to Y if all devices in array reliably return zeroes on reads from discarded regions");
> > static struct workqueue_struct *raid5_wq;
> > /*
> > * Stripe cache
> > @@ -6208,7 +6212,7 @@ static int run(struct mddev *mddev)
> > mddev->queue->limits.discard_granularity = stripe;
> > /*
> > * unaligned part of discard request will be ignored, so can't
> > - * guarantee discard_zerors_data
> > + * guarantee discard_zeroes_data
> > */
> > mddev->queue->limits.discard_zeroes_data = 0;
> >
> > @@ -6233,6 +6237,18 @@ static int run(struct mddev *mddev)
> > !bdev_get_queue(rdev->bdev)->
> > limits.discard_zeroes_data)
> > discard_supported = false;
> > + /* Unfortunately, discard_zeroes_data is not currently
> > + * a guarantee - just a hint. So we only allow DISCARD
> > + * if the sysadmin has confirmed that only safe devices
> > + * are in use but setting a module parameter.
> > + */
> > + if (!devices_handle_discard_safely) {
> > + if (discard_supported) {
> > + pr_info("md/raid456: discard support disabled due to uncertainty.\n");
> > + pr_info("Set raid456.devices_handle_discard_safely=Y to override.\n");
> > + }
> > + discard_supported = false;
> > + }
> > }
> >
> > if (discard_supported &&
>
>
> There is a typo in the new block comment above: "are in use but setting
> a module parameter". s/but/by/
Fixed, thanks.
>
> But thinking further: should this be a per array override? E.g. for DM
> this could easily be a dm-raid table parameter. But I know the MD
> implementation would likely be more cumbersome (superblock update?).
If you want to use discard selectively on some arrays, you can mount
filesystems with appropriate options, or be careful in your use of 'fstrim'.
I see the module option as something to force people to think or ask before
"blindly" using DISCARD. I don't see it as a configuration tool.
>
> Though given the (hopefully) temporary nature of this, maybe it isn't
> worth it for MD. Maybe be a bit more precise in the MODULE_PARM_DESC
> with:
> "Set to Y if all devices in each array reliably returns zeroes on reads
> from discarded regions" ?
I added the 'each'. I don't agree with the 's' on 'returns' :-(
Thanks,
NeilBrown
* Re: [dm-devel] dm-raid: add RAID discard support
2014-10-02 2:00 ` NeilBrown
@ 2014-10-02 4:04 ` NeilBrown
2014-10-02 13:52 ` Mike Snitzer
2014-10-03 1:14 ` [dm-devel] " Martin K. Petersen
From: NeilBrown @ 2014-10-02 4:04 UTC (permalink / raw)
To: Mike Snitzer
Cc: Heinz Mauelshagen, device-mapper development, Shaohua Li,
Martin K. Petersen, linux RAID, lkml
I plan to submit this to Linus tomorrow, hopefully for 3.17, unless there are
complaints. It is in my for-next branch now.
Thanks,
NeilBrown
From aec6f821ed92fac5ae4f1db50279a3999de5872a Mon Sep 17 00:00:00 2001
From: NeilBrown <neilb@suse.de>
Date: Thu, 2 Oct 2014 13:45:00 +1000
Subject: [PATCH] md/raid5: disable 'DISCARD' by default due to safety
concerns.
It has come to my attention (thanks Martin) that 'discard_zeroes_data'
is only a hint. Some devices in some cases don't do what it
says on the label.
The use of DISCARD in RAID5 depends on reads from discarded regions
being predictably zero. If a write to a previously discarded region
performs a read-modify-write cycle it assumes that the parity block
was consistent with the data blocks. If all were zero, this would
be the case. If some are and some aren't this would not be the case.
This could lead to data corruption after a device failure when
data needs to be reconstructed from the parity.
As we cannot trust 'discard_zeroes_data', ignore it by default
and so disallow DISCARD on all raid4/5/6 arrays.
As many devices are trustworthy, and as there are benefits to using
DISCARD, add a module parameter to over-ride this caution and cause
DISCARD to work if discard_zeroes_data is set.
If a site wants to enable DISCARD on some arrays but not on others, they
should select DISCARD support at the filesystem level, and set the
raid456 module parameter.
raid456.devices_handle_discard_safely=Y
As this is a data-safety issue, I believe this patch is suitable for
-stable.
DISCARD support for RAID456 was added in 3.7
Cc: Shaohua Li <shli@kernel.org>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Mike Snitzer <snitzer@redhat.com>
Cc: Heinz Mauelshagen <heinzm@redhat.com>
Cc: stable@vger.kernel.org (3.7+)
Fixes: 620125f2bf8ff0c4969b79653b54d7bcc9d40637
Signed-off-by: NeilBrown <neilb@suse.de>
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 183588b11fc1..9f0fbecd1eb5 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -64,6 +64,10 @@
#define cpu_to_group(cpu) cpu_to_node(cpu)
#define ANY_GROUP NUMA_NO_NODE
+static bool devices_handle_discard_safely = false;
+module_param(devices_handle_discard_safely, bool, 0644);
+MODULE_PARM_DESC(devices_handle_discard_safely,
+ "Set to Y if all devices in each array reliably return zeroes on reads from discarded regions");
static struct workqueue_struct *raid5_wq;
/*
* Stripe cache
@@ -6208,7 +6212,7 @@ static int run(struct mddev *mddev)
mddev->queue->limits.discard_granularity = stripe;
/*
* unaligned part of discard request will be ignored, so can't
- * guarantee discard_zerors_data
+ * guarantee discard_zeroes_data
*/
mddev->queue->limits.discard_zeroes_data = 0;
@@ -6233,6 +6237,18 @@ static int run(struct mddev *mddev)
!bdev_get_queue(rdev->bdev)->
limits.discard_zeroes_data)
discard_supported = false;
+ /* Unfortunately, discard_zeroes_data is not currently
+ * a guarantee - just a hint. So we only allow DISCARD
+ * if the sysadmin has confirmed that only safe devices
+ * are in use by setting a module parameter.
+ */
+ if (!devices_handle_discard_safely) {
+ if (discard_supported) {
+ pr_info("md/raid456: discard support disabled due to uncertainty.\n");
+ pr_info("Set raid456.devices_handle_discard_safely=Y to override.\n");
+ }
+ discard_supported = false;
+ }
}
if (discard_supported &&
* Re: dm-raid: add RAID discard support
2014-10-02 4:04 ` [dm-devel] " NeilBrown
@ 2014-10-02 13:52 ` Mike Snitzer
2014-10-02 18:00 ` Mike Snitzer
2014-10-03 1:14 ` [dm-devel] " Martin K. Petersen
From: Mike Snitzer @ 2014-10-02 13:52 UTC (permalink / raw)
To: NeilBrown
Cc: Heinz Mauelshagen, device-mapper development, Shaohua Li,
Martin K. Petersen, linux RAID, lkml
On Thu, Oct 02 2014 at 12:04am -0400,
NeilBrown <neilb@suse.de> wrote:
>
> I plan to submit this to Linus tomorrow, hopefully for 3.17, unless there are
> complaints. It is in my for-next branch now.
>
> Thanks,
> NeilBrown
>
>
> From aec6f821ed92fac5ae4f1db50279a3999de5872a Mon Sep 17 00:00:00 2001
> From: NeilBrown <neilb@suse.de>
> Date: Thu, 2 Oct 2014 13:45:00 +1000
> Subject: [PATCH] md/raid5: disable 'DISCARD' by default due to safety
> concerns.
>
> It has come to my attention (thanks Martin) that 'discard_zeroes_data'
> is only a hint. Some devices in some cases don't do what it
> says on the label.
>
> The use of DISCARD in RAID5 depends on reads from discarded regions
> being predictably zero. If a write to a previously discarded region
> performs a read-modify-write cycle it assumes that the parity block
> was consistent with the data blocks. If all were zero, this would
> be the case. If some are and some aren't this would not be the case.
> This could lead to data corruption after a device failure when
> data needs to be reconstructed from the parity.
>
> As we cannot trust 'discard_zeroes_data', ignore it by default
> and so disallow DISCARD on all raid4/5/6 arrays.
>
> As many devices are trustworthy, and as there are benefits to using
> DISCARD, add a module parameter to over-ride this caution and cause
> DISCARD to work if discard_zeroes_data is set.
>
> If a site wants to enable DISCARD on some arrays but not on others, they
> should select DISCARD support at the filesystem level, and set the
> raid456 module parameter.
> raid456.devices_handle_discard_safely=Y
>
> As this is a data-safety issue, I believe this patch is suitable for
> -stable.
> DISCARD support for RAID456 was added in 3.7
>
> Cc: Shaohua Li <shli@kernel.org>
> Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
> Cc: Mike Snitzer <snitzer@redhat.com>
> Cc: Heinz Mauelshagen <heinzm@redhat.com>
> Cc: stable@vger.kernel.org (3.7+)
> Fixes: 620125f2bf8ff0c4969b79653b54d7bcc9d40637
> Signed-off-by: NeilBrown <neilb@suse.de>
Acked-by: Mike Snitzer <snitzer@redhat.com>
Thanks Neil, I'll get something comparable staged for dm-raid for 3.18
(less urgency given that dm-raid doesn't yet support discards).
* Re: dm-raid: add RAID discard support
2014-10-02 13:52 ` Mike Snitzer
@ 2014-10-02 18:00 ` Mike Snitzer
From: Mike Snitzer @ 2014-10-02 18:00 UTC (permalink / raw)
To: NeilBrown
Cc: Heinz Mauelshagen, device-mapper development, Shaohua Li,
Martin K. Petersen, linux RAID, lkml
On Thu, Oct 02 2014 at 9:52am -0400,
Mike Snitzer <snitzer@redhat.com> wrote:
> Thanks Neil, I'll get something comparable staged for dm-raid for 3.18
> (less urgency given that dm-raid doesn't yet support discards).
FYI, I've staged the following dm-raid change in linux-next for 3.18:
https://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=for-next&id=bfc3f335f9886b68d281240aab751b96259d8860
* Re: [dm-devel] dm-raid: add RAID discard support
2014-10-02 4:04 ` [dm-devel] " NeilBrown
2014-10-02 13:52 ` Mike Snitzer
@ 2014-10-03 1:14 ` Martin K. Petersen
From: Martin K. Petersen @ 2014-10-03 1:14 UTC (permalink / raw)
To: NeilBrown
Cc: Mike Snitzer, Heinz Mauelshagen, device-mapper development,
Shaohua Li, Martin K. Petersen, linux RAID, lkml
>>>>> "Neil" == NeilBrown <neilb@suse.de> writes:
Neil> I plan to submit this to Linus tomorrow, hopefully for 3.7, unless
Neil> there are complaints. It is in my for-next branch now.
Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
--
Martin K. Petersen Oracle Linux Engineering
* Re: [PATCH] dm-raid: add RAID discard support
2014-10-01 2:56 ` NeilBrown
2014-10-01 11:13 ` Heinz Mauelshagen
2014-10-01 13:32 ` Mike Snitzer
@ 2014-10-01 16:00 ` Andrey Kuzmin
2014-10-01 23:15 ` NeilBrown
2014-10-01 18:57 ` Brassow Jonathan
2014-10-03 1:09 ` Martin K. Petersen
From: Andrey Kuzmin @ 2014-10-01 16:00 UTC (permalink / raw)
To: device-mapper development
Cc: Heinz Mauelshagen, Shaohua Li, Martin K. Petersen
On Wed, Oct 1, 2014 at 6:56 AM, NeilBrown <neilb@suse.de> wrote:
> On Wed, 24 Sep 2014 13:02:28 +0200 Heinz Mauelshagen <heinzm@redhat.com>
> wrote:
>
>>
>> Martin,
>>
>> thanks for the good explanation of the state of the discard union.
>> Do you have an ETA for the 'zeroout, deallocate' ... support you mentioned?
>>
>> I was planning to have a followup patch for dm-raid supporting a dm-raid
>> table
>> line argument to prohibit discard passdown.
>>
>> In view of the fuzzy field situation wrt SSD firmware and
>> discard_zeroes_data support related to RAID4/5/6, we need that
>> upstream together with the initial patch.
>>
>> That 'no_discard_passdown' table line can be added to dm-raid RAID4/5/6
>> table
>> lines to avoid possible data corruption but can be avoided on RAID1/10
>> table lines,
>> because the latter are not suffering from any discard_zeroes_data flaw.
>>
>>
>> Neil,
>>
>> are you going to disable discards in RAID4/5/6 shortly
>> or rather go with your bitmap solution?
>
> Can I just close my eyes and hope it goes away?
>
> The idea of a bitmap of uninitialised areas is not a short-term solution.
> But I'm not really keen on simply disabling discard for RAID4/5/6 either. It
> would mean that people with good sensible hardware wouldn't be able to use
> it properly.
>
> I would really rather that discard_zeroes_data were only set on devices where
> it was actually true. Then it wouldn't be my problem any more.
>
> Maybe I could do a loud warning
> "Not enabling DISCARD on RAID5 because we cannot trust committees.
> Set "md_mod.willing_to_risk_discard=Y" if your devices read discarded
> sectors as zeros"
>
> and add an appropriate module parameter......
>
> While we are on the topic, maybe I should write down my thoughts about the
> bitmap thing in case someone wants to contribute.
>
> There are 3 states that a 'region' can be in:
> 1- known to be in-sync
> 2- possibly not in sync, but it should be
> 3- probably not in sync, contains no valuable data.
>
> A read from '3' should return zeroes.
> A write to '3' should change the region to be '2'. It could either
> write zeros before allowing the write to start, or it could just start
> a normal resync.
>
> Here is a question: if a region has been discarded, are we guaranteed that
> reads are at least stable. i.e. if I read twice will I definitely get the
> same value?
Not sure about other specs, but an NVMe-compliant SSD that supports
discard (a Dataset Management command with the Deallocate attribute, in
NVMe parlance) is, per spec, required to be deterministic when a
deallocated range is subsequently read. Here is what the spec (1.1) says:
The value read from a deallocated LBA shall be deterministic;
specifically, the value returned by subsequent reads of that LBA shall
be the same until a write occurs to that LBA. The values read from a
deallocated LBA and its metadata (excluding protection information)
shall be all zeros, all ones, or the last data written to the
associated LBA and its metadata. The values read from an unwritten or
deallocated LBA’s protection information field shall be all ones
(indicating the protection information shall not be checked).
Regards,
Andrey
>
> It would be simplest to store all the states in the one bitmap. i.e. have a
> new version of the current bitmap (which distinguishes between '1' and '2')
> which allows the 3rd state. Probably just 2 bits per region, though we could
> squeeze state for 20 regions into 32 bits if we really wanted :-)
>
> Then we set the bitmap to all '3's when it is created and set regions to
> '3' when discarding.
>
> One problem with this approach is that it forces the same region size.
> Regions for the current bitmap can conveniently be 10s of Megabytes. For
> state '3', we ideally want one region per stripe.
> For 4TB drives with a 512K chunk size (and smaller is not uncommon) that is
> 8 million regions which is a megabyte of bits.
>
> I guess that isn't all that much. One problem with small regions for the
> 1/2 distinction is that we end up updating the bitmap more often, but
> that could be avoided by setting multiple bits at once.
> i.e. keep the internal counters with a larger granularity, and when a
> counter (of outstanding writes) becomes non-zero, set quite a few bits in
> the bitmap.
>
> So that is probably what I would do:
> - new version for bitmap which has 2 bits per region and encodes 3 states
> - bitmap granularity matches chunk size (by default)
> - decouple region size of bitmap from region size for internal 'dirty'
> accounting
> - write to a 'state 3' region sets it to 'state 2' and kicks off resync
> - 'discard' sets state to '3'.
>
> NeilBrown
>
> --
> dm-devel mailing list
> dm-devel@redhat.com
> https://www.redhat.com/mailman/listinfo/dm-devel
* Re: [PATCH] dm-raid: add RAID discard support
2014-10-01 16:00 ` [PATCH] " Andrey Kuzmin
@ 2014-10-01 23:15 ` NeilBrown
0 siblings, 0 replies; 28+ messages in thread
From: NeilBrown @ 2014-10-01 23:15 UTC (permalink / raw)
To: Andrey Kuzmin
Cc: Heinz Mauelshagen, device-mapper development, Shaohua Li,
Martin K. Petersen
On Wed, 1 Oct 2014 20:00:45 +0400 Andrey Kuzmin <andrey.v.kuzmin@gmail.com>
wrote:
> On Wed, Oct 1, 2014 at 6:56 AM, NeilBrown <neilb@suse.de> wrote:
> > On Wed, 24 Sep 2014 13:02:28 +0200 Heinz Mauelshagen <heinzm@redhat.com>
> > wrote:
> >
> >>
> >> Martin,
> >>
> >> thanks for the good explanation of the state of the discard union.
> >> Do you have an ETA for the 'zeroout, deallocate' ... support you mentioned?
> >>
> >> I was planning to have a followup patch for dm-raid supporting a dm-raid
> >> table
> >> line argument to prohibit discard passdown.
> >>
> >> In lieu of the fuzzy field situation wrt SSD fw and discard_zeroes_data
> >> support
> >> related to RAID4/5/6, we need that in upstream together with the initial
> >> patch.
> >>
> >> That 'no_discard_passdown' table line can be added to dm-raid RAID4/5/6
> >> table
> >> lines to avoid possible data corruption but can be avoided on RAID1/10
> >> table lines,
> >> because the latter are not suffering from any discard_zeroes_data flaw.
> >>
> >>
> >> Neil,
> >>
> >> are you going to disable discards in RAID4/5/6 shortly
> >> or rather go with your bitmap solution?
> >
> > Can I just close my eyes and hope it goes away?
> >
> > The idea of a bitmap of uninitialised areas is not a short-term solution.
> > But I'm not really keen on simply disabling discard for RAID4/5/6 either. It
> > would mean that people with good sensible hardware wouldn't be able to use
> > it properly.
> >
> > I would really rather that discard_zeroes_data were only set on devices where
> > it was actually true. Then it wouldn't be my problem any more.
> >
> > Maybe I could do a loud warning
> > "Not enabling DISCARD on RAID5 because we cannot trust committees.
> > Set "md_mod.willing_to_risk_discard=Y" if your devices read discarded
> > sectors as zeros"
> >
> > and add an appropriate module parameter......
> >
> > While we are on the topic, maybe I should write down my thoughts about the
> > bitmap thing in case someone wants to contribute.
> >
> > There are 3 states that a 'region' can be in:
> > 1- known to be in-sync
> > 2- possibly not in sync, but it should be
> > 3- probably not in sync, contains no valuable data.
> >
> > A read from '3' should return zeroes.
> > A write to '3' should change the region to be '2'. It could either
> > write zeros before allowing the write to start, or it could just start
> > a normal resync.
> >
> > Here is a question: if a region has been discarded, are we guaranteed that
> > reads are at least stable. i.e. if I read twice will I definitely get the
> > same value?
>
> Not sure about other specs, but an NVMe-compliant SSD that supports
> discard (a Dataset Management command with the Deallocate attribute, in
> NVMe parlance) is, per spec, required to be deterministic when a
> deallocated range is subsequently read. Here is what the spec (1.1) says:
>
> The value read from a deallocated LBA shall be deterministic;
> specifically, the value returned by subsequent reads of that LBA shall
> be the same until a write occurs to that LBA. The values read from a
> deallocated LBA and its metadata (excluding protection information)
> shall be all zeros, all ones, or the last data written to the
> associated LBA and its metadata. The values read from an unwritten or
> deallocated LBA’s protection information field shall be all ones
> (indicating the protection information shall not be checked).
>
That's good to know - thanks.
NeilBrown
* Re: [PATCH] dm-raid: add RAID discard support
2014-10-01 2:56 ` NeilBrown
` (2 preceding siblings ...)
2014-10-01 16:00 ` [PATCH] " Andrey Kuzmin
@ 2014-10-01 18:57 ` Brassow Jonathan
2014-10-01 23:18 ` NeilBrown
2014-10-03 1:09 ` Martin K. Petersen
4 siblings, 1 reply; 28+ messages in thread
From: Brassow Jonathan @ 2014-10-01 18:57 UTC (permalink / raw)
To: NeilBrown
Cc: Heinz Mauelshagen, device-mapper development, Shaohua Li,
Martin K. Petersen
On Sep 30, 2014, at 9:56 PM, NeilBrown wrote:
> So that is probably what I would do:
> - new version for bitmap which has 2 bits per region and encodes 3 states
> - bitmap granularity matches chunk size (by default)
> - decouple region size of bitmap from region size for internal 'dirty'
> accounting
> - write to a 'state 3' region sets it to 'state 2' and kicks off resync
> - 'discard' sets state to '3'.
There are scenarios where we do not trust the bitmap and perform exhaustive searches (scrubbing). When we do this, we assume that the bitmap has been correctly handled, but the data has been corrupted somehow. When we scrub, we now could presumably use the bitmap to avoid those regions that have been discarded. However, are there problems to be considered when the corruption is the other way around - i.e. when the bitmap/superblock are corrupted instead of the data area? Do we consider problems like this unlikely because of the frequency of superblock writes? (The bitrot that I am familiar with usually happens in areas that are not touched very often - especially if they exist near areas that /are/ touched very often.)
brassow
* Re: [PATCH] dm-raid: add RAID discard support
2014-10-01 18:57 ` Brassow Jonathan
@ 2014-10-01 23:18 ` NeilBrown
0 siblings, 0 replies; 28+ messages in thread
From: NeilBrown @ 2014-10-01 23:18 UTC (permalink / raw)
To: Brassow Jonathan
Cc: Heinz Mauelshagen, device-mapper development, Shaohua Li,
Martin K. Petersen
On Wed, 1 Oct 2014 13:57:24 -0500 Brassow Jonathan <jbrassow@redhat.com>
wrote:
>
> On Sep 30, 2014, at 9:56 PM, NeilBrown wrote:
>
> > So that is probably what I would do:
> > - new version for bitmap which has 2 bits per region and encodes 3 states
> > - bitmap granularity matches chunk size (by default)
> > - decouple region size of bitmap from region size for internal 'dirty'
> > accounting
> > - write to a 'state 3' region sets it to 'state 2' and kicks off resync
> > - 'discard' sets state to '3'.
>
> There are scenarios where we do not trust the bitmap and perform exhaustive searches (scrubbing). When we do this, we assume that the bitmap has been correctly handled, but the data has been corrupted somehow. When we scrub, we now could presumably use the bitmap to avoid those regions that have been discarded. However, are there problems to be considered when the corruption is the other way around - i.e. when the bitmap/superblock are corrupted instead of the data area? Do we consider problems like this unlikely because of the frequency of superblock writes? (The bitrot that I am familiar with usually happens in areas that are not touched very often - especially if they exist near areas that /are/ touched very often.)
>
> brassow
Good question....
I suspect the frequent writes that you mention would reduce the chance of
bitrot - and would make it quickly disappear if it did happen.
Maybe we could compare bitmaps across all devices as the array is assembled
and take some action if there are differences(??).
NeilBrown
* Re: [PATCH] dm-raid: add RAID discard support
2014-10-01 2:56 ` NeilBrown
` (3 preceding siblings ...)
2014-10-01 18:57 ` Brassow Jonathan
@ 2014-10-03 1:09 ` Martin K. Petersen
4 siblings, 0 replies; 28+ messages in thread
From: Martin K. Petersen @ 2014-10-03 1:09 UTC (permalink / raw)
To: NeilBrown
Cc: Heinz Mauelshagen, device-mapper development, Shaohua Li,
Martin K. Petersen
>>>>> "Neil" == NeilBrown <neilb@suse.de> writes:
Neil> I would really rather that discard_zeroes_data were only set on
Neil> devices where it was actually true. Then it wouldn't be my
Neil> problem any more.
Yeah. It's too late for 3.18 but I'll try to get a whitelist going for
3.19.
Neil> Here is a question: if a region has been discarded, are we
Neil> guaranteed that reads are at least stable. i.e. if I read twice
Neil> will I definitely get the same value?
If discard_zeroes_data=1, then yes.
--
Martin K. Petersen Oracle Linux Engineering