* XFS and nobarrier with SSDs
@ 2015-12-12 10:24 Georg Schönberger
  2015-12-12 12:26 ` Martin Steigerwald
  2015-12-14 11:48 ` Emmanuel Florac
  0 siblings, 2 replies; 18+ messages in thread
From: Georg Schönberger @ 2015-12-12 10:24 UTC (permalink / raw)
  To: xfs

Hi folks!

We are using a lot of SSDs in our Ceph clusters with XFS. Our SSDs have Power Loss Protection
via capacitors, so is it safe in all cases to run XFS with nobarrier on them? Or is there indeed
a need for a specific I/O scheduler?

I have found a recent discussion on the Ceph mailing list, anyone from XFS that can help us?

*http://www.spinics.net/lists/ceph-users/msg22053.html
*https://bugzilla.redhat.com/show_bug.cgi?id=1104380

Thanks for your help,
Georg Schönberger

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: XFS and nobarrier with SSDs
  2015-12-12 10:24 XFS and nobarrier with SSDs Georg Schönberger
@ 2015-12-12 12:26 ` Martin Steigerwald
  2015-12-14  6:43   ` Georg Schönberger
  2015-12-14 11:48 ` Emmanuel Florac
  1 sibling, 1 reply; 18+ messages in thread
From: Martin Steigerwald @ 2015-12-12 12:26 UTC (permalink / raw)
  To: xfs; +Cc: Georg Schönberger

On Saturday, 12 December 2015, 10:24:25 CET, Georg Schönberger wrote:
> Hi folks!

Hi Georg.

> We are using a lot of SSDs in our Ceph clusters with XFS. Our SSDs have
> Power Loss Protection via capacitors, so is it safe in all cases to run XFS
> with nobarrier on them? Or is there indeed a need for a specific I/O
> scheduler?

I do think that using nobarrier would be safe with those SSDs as long as there 
is no other caching happening on the hardware side, for example inside the 
controller that talks to the SSDs.

I always thought barrier/nobarrier acts independently of the I/O scheduler 
thing, but I can understand the thought from the bug report you linked to 
below. As for I/O schedulers, with recent kernels and block multiqueue I see 
it being set to "none".
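
For what it's worth, both things can be checked from userspace. A minimal
sketch (assuming the SSD shows up as /dev/sda and hdparm is installed; adjust
device names for your systems):

  # does the drive itself advertise a volatile write cache?
  hdparm -W /dev/sda

  # which I/O scheduler is the block layer using for this device?
  cat /sys/block/sda/queue/scheduler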

> I have found a recent discussion on the Ceph mailing list, anyone from XFS
> that can help us?
> 
> *http://www.spinics.net/lists/ceph-users/msg22053.html

Also see:

http://xfs.org/index.php/XFS_FAQ#Q._Should_barriers_be_enabled_with_storage_which_has_a_persistent_write_cache.3F

> *https://bugzilla.redhat.com/show_bug.cgi?id=1104380

Interesting. Never thought of that one.

So would it be safe to interrupt the flow of data towards the SSD at any point 
in time with reordering I/O schedulers in place? And how about blk-mq, which 
has multiple software queues?

I like to think that they are still independent of the barrier thing and the 
last bug comment by Eric, where he quoted from Jeff, supports this:

> Eric Sandeen 2014-06-24 10:32:06 EDT
> 
> As Jeff Moyer says:
> > The file system will manually order dependent I/O.
> > What I mean by that is the file system will send down any I/O for the
> > transaction log, wait for that to complete, issue a barrier (which will
> > be a noop in the case of a battery-backed write cache), and then send
> > down the commit block along with another barrier.  As such, you cannot
> > have the I/O scheduler reorder the commit block and the log entry with
> > which it is associated.
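
As a rough userspace analogy of that ordering (just a sketch with made-up file 
names, not the actual kernel code path): the "log" data is written and flushed 
before the "commit" record is even issued, so no scheduler ever gets a chance 
to put the commit ahead of the log:

  # append the log record and flush it to stable storage first ...
  dd if=log-record.bin of=journal.img oflag=append conv=notrunc,fsync 2>/dev/null
  # ... only then append the commit record, again followed by a flush
  dd if=commit-record.bin of=journal.img oflag=append conv=notrunc,fsync 2>/dev/null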

Ciao,
-- 
Martin

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: XFS and nobarrier with SSDs
  2015-12-12 12:26 ` Martin Steigerwald
@ 2015-12-14  6:43   ` Georg Schönberger
  2015-12-14  8:38       ` Martin Steigerwald
  0 siblings, 1 reply; 18+ messages in thread
From: Georg Schönberger @ 2015-12-14  6:43 UTC (permalink / raw)
  To: Martin Steigerwald, xfs


On 2015-12-12 13:26, Martin Steigerwald wrote:
> On Saturday, 12 December 2015, 10:24:25 CET, Georg Schönberger wrote:
>> We are using a lot of SSDs in our Ceph clusters with XFS. Our SSDs have
>> Power Loss Protection via capacitors, so is it safe in all cases to run XFS
>> with nobarrier on them? Or is there indeed a need for a specific I/O
>> scheduler?
> I do think that using nobarrier would be safe with those SSDs as long as there
> is no other caching happening on the hardware side, for example inside the
> controller that talks to the SSDs.
Hi Martin, thanks for your response!

We are using HBAs and no RAID controller, therefore there is no other 
cache in the I/O stack.

>
> I always thought barrier/nobarrier acts independently of the I/O scheduler
> thing, but I can understand the thought from the bug report you linked to
> below. As for I/O schedulers, with recent kernels and block multiqueue I see
> it being set to "none".
What do you mean by "none" here? Do you think I will be more on the safe 
side with the noop scheduler?

>
>> I have found a recent discussion on the Ceph mailing list, anyone from XFS
>> that can help us?
>>
>> *http://www.spinics.net/lists/ceph-users/msg22053.html
> Also see:
>
> http://xfs.org/index.php/XFS_FAQ#Q._Should_barriers_be_enabled_with_storage_which_has_a_persistent_write_cache.3F
I've already read that XFS wiki entry before and also found some Intel 
presentations where they suggest using nobarrier with
their enterprise SSDs. But a confirmation from a block layer 
specialist would be a good thing!

>
>> *https://bugzilla.redhat.com/show_bug.cgi?id=1104380
> Interesting. Never thought of that one.
>
> So would it be safe to interrupt the flow of data towards the SSD at any point
> in time with reordering I/O schedulers in place? And how about blk-mq, which
> has multiple software queues?
Maybe we should ask the block layer mailing list about that?

>
> I like to think that they are still independent of the barrier thing and the
> last bug comment by Eric, where he quoted from Jeff, supports this:
>
>> Eric Sandeen 2014-06-24 10:32:06 EDT
>>
>> As Jeff Moyer says:
>>> The file system will manually order dependent I/O.
>>> What I mean by that is the file system will send down any I/O for the
>>> transaction log, wait for that to complete, issue a barrier (which will
>>> be a noop in the case of a battery-backed write cache), and then send
>>> down the commit block along with another barrier.  As such, you cannot
>>> have the I/O scheduler reorder the commit block and the log entry with
>>> which it is associated.
If it is truly that way then I do not see any problems using nobarrier 
with the SSDs and power loss protection.
I have already found some people saying that enterprise SSDs with PLP simply 
ignore the sync call. If that's the case
then using nobarrier would bring no performance improvement...
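
That should be easy to measure: if the drive really completes flushes for free, 
a sync-heavy write test should run at roughly the same speed on a barrier and a 
nobarrier mount. A minimal sketch using fio (assuming it is installed and 
/mnt/ssd sits on the filesystem under test; adjust paths):

  fio --name=synctest --directory=/mnt/ssd --rw=randwrite --bs=4k \
      --ioengine=libaio --iodepth=1 --fsync=1 --size=1g --runtime=30 --time_based

With --fsync=1 every write is followed by an fsync, which on a barrier mount 
turns into a cache flush sent to the device.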

Cheers, Georg


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: XFS and nobarrier with SSDs
  2015-12-14  6:43   ` Georg Schönberger
@ 2015-12-14  8:38       ` Martin Steigerwald
  0 siblings, 0 replies; 18+ messages in thread
From: Martin Steigerwald @ 2015-12-14  8:38 UTC (permalink / raw)
  To: Georg Schönberger, Jens Axboe, Jeff Moyer, Linux FS-Devel,
	Linux Block mailing list
  Cc: XFS mailing list

Hello Georg.

I am adding in some Ccs of kernel devs and mailing lists, as I think this is a 
more generic question. Still of course would be nice to hear something from 
XFS developers as well.

For the broader audience the question is:

Is it safe to use XFS (or any other filesystem) on enterprise SSDs with Power 
Loss Protection (PLP), i.e. some capacitor to provide for enough electricity 
to write out all data in DRAM to flash after a power loss, with a reordering 
I/O scheduler like CFQ?

According to this comment by Jeff Moyer on a report in Red Hat's Bugzilla, the 
I/O scheduler cannot reorder the commit block and the log entry, so it would be 
safe, I think (see below).

On Monday, 14 December 2015, 06:43:48 CET, Georg Schönberger wrote:
> On 2015-12-12 13:26, Martin Steigerwald wrote:
> > On Saturday, 12 December 2015, 10:24:25 CET, Georg Schönberger wrote:
> >> We are using a lot of SSDs in our Ceph clusters with XFS. Our SSDs have
> >> Power Loss Protection via capacitors, so is it safe in all cases to run
> >> XFS
> >> with nobarrier on them? Or is there indeed a need for a specific I/O
> >> scheduler?
> > 
> > I do think that using nobarrier would be safe with those SSDs as long as
> > there is no other caching happening on the hardware side, for example
> > inside the controller that talks to the SSDs.
[…]
> We are using HBAs and no RAID controller, therefore there is no other
> cache in the I/O stack.
> 
> > I always thought barrier/nobarrier acts independently of the I/O scheduler
> > thing, but I can understand the thought from the bug report you linked to
> > below. As for I/O schedulers, with recent kernels and block multiqueue I
> > see it being set to "none".
> 
> What do you mean by "none" here? Do you think I will be more on the safe
> side with the noop scheduler?

I mean that I get this on a 4.3 kernel with blk-mq enabled:

merkaba:/sys/block/sda/queue> grep . rotational scheduler 
rotational:0
scheduler:none

merkaba:/sys/block/sda/queue> echo "cfq" > scheduler 
merkaba:/sys/block/sda/queue> cat scheduler                              
none
merkaba:/sys/block/sda/queue> echo "noop" > scheduler
merkaba:/sys/block/sda/queue> cat scheduler          
none

So with blk-mq I do not get a choice of which scheduler to use anyway, which is 
what I expect. That's why there has been a discussion about blk-mq on 
rotational devices recently.
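
(Whether the SCSI stack is using blk-mq at all can be checked via the module 
parameter, assuming a kernel built with the scsi-mq code:

  cat /sys/module/scsi_mod/parameters/use_blk_mq

which prints Y or N.)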

> >> I have found a recent discussion on the Ceph mailing list, anyone from
> >> XFS
> >> that can help us?
> >> 
> >> *http://www.spinics.net/lists/ceph-users/msg22053.html
> > 
> > Also see:
> > 
> > http://xfs.org/index.php/XFS_FAQ#Q._Should_barriers_be_enabled_with_storage_which_has_a_persistent_write_cache.3F
> 
> I've already read that XFS wiki entry before and also found some Intel
> presentations where they suggest using nobarrier with
> their enterprise SSDs. But a confirmation from a block layer
> specialist would be a good thing!

I think Jens Axboe would be a good person to ask - but then, Jeff Moyer already 
replied to the bug report.

> >> *https://bugzilla.redhat.com/show_bug.cgi?id=1104380
> > 
> > Interesting. Never thought of that one.
> > 
> > So would it be safe to interrupt the flow of data towards the SSD at any
> > point if time with reordering I/O schedulers in place? And how about
> > blk-mq which has mutiple software queus?
> 
> Maybe we should ask the block layer mailing list about that?
> 
> > I like to think that they are still independent of the barrier thing and
> > the> 
> > last bug comment by Eric, where he quoted from Jeff, supports this:
> >> Eric Sandeen 2014-06-24 10:32:06 EDT
> >> 
> >> As Jeff Moyer says:
> >>> The file system will manually order dependent I/O.
> >>> What I mean by that is the file system will send down any I/O for the
> >>> transaction log, wait for that to complete, issue a barrier (which will
> >>> be a noop in the case of a battery-backed write cache), and then send
> >>> down the commit block along with another barrier.  As such, you cannot
> >>> have the I/O scheduler reorder the commit block and the log entry with
> >>> which it is associated.
> 
> If it is truly that way then I do not see any problems using nobarrier
> with the SSDs and power loss protection.
> I have already found some people saying that enterprise SSDs with PLP simply
> ignore the sync call. If that's the case
> then using nobarrier would bring no performance improvement...

Interesting. I think if they ignore it, though, they risk data loss, because I 
imagine there might be a slight chance that the data has been sent by the 
controller but not yet fully stored in the SSD's DRAM. Or is this operation atomic?

Thanks,
-- 
Martin

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: XFS and nobarrier with SSDs
  2015-12-14  8:38       ` Martin Steigerwald
@ 2015-12-14  9:58         ` Christoph Hellwig
  -1 siblings, 0 replies; 18+ messages in thread
From: Christoph Hellwig @ 2015-12-14  9:58 UTC (permalink / raw)
  To: Martin Steigerwald
  Cc: Georg Sch?nberger, Jens Axboe, Jeff Moyer, Linux FS-Devel,
	Linux Block mailing list, XFS mailing list

On Mon, Dec 14, 2015 at 09:38:56AM +0100, Martin Steigerwald wrote:
> Is it safe to use XFS (or any other filesystem) on enterprise SSDs with Power 
> Loss Protection (PLP), i.e. some capacitor to provide for enough electricity 
> to write out all data in DRAM to flash after a power loss, with a reordering 
> I/O scheduler like CFQ?

If the device does not need cache flushes it should not report requiring
flushes, in which case nobarrier will be a noop.  Or to phrase it
differently:  If nobarrier makes a difference skipping it is not safe.
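
A quick way to see what the kernel thinks about a given disk is to look at the 
probe messages (a sketch, assuming a SCSI/SATA disk):

  dmesg | grep -i 'write cache'

Kernels newer than the ones discussed here also expose the effective setting in 
sysfs, e.g. /sys/block/sda/queue/write_cache, which reads "write back" or 
"write through". If it is "write through", the flushes XFS issues are already 
no-ops and nobarrier cannot win anything.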

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: XFS and nobarrier with SSDs
  2015-12-14  9:58         ` Christoph Hellwig
@ 2015-12-14 10:18           ` Georg Schönberger
  -1 siblings, 0 replies; 18+ messages in thread
From: Georg Schönberger @ 2015-12-14 10:18 UTC (permalink / raw)
  To: Christoph Hellwig, Martin Steigerwald
  Cc: Jens Axboe, Jeff Moyer, Linux FS-Devel, Linux Block mailing list,
	XFS mailing list

On 2015-12-14 10:58, Christoph Hellwig wrote:
> On Mon, Dec 14, 2015 at 09:38:56AM +0100, Martin Steigerwald wrote:
>> Is it safe to use XFS (or any other filesystem) on enterprise SSDs with Power
>> Loss Protection (PLP), i.e. some capacitor to provide for enough electricity
>> to write out all data in DRAM to flash after a power loss, with a reordering
>> I/O scheduler like CFQ?
> If the device does not need cache flushes it should not report requiring
> flushes, in which case nobarrier will be a noop.
OK - that would also mean that mounting with nobarrier should not make a 
performance difference.
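
(For XFS the option can be flipped on a mounted filesystem, so a quick 
back-to-back comparison is possible; a sketch, assuming the filesystem is 
mounted at /mnt/ssd:

  mount -o remount,nobarrier /mnt/ssd
  # ... run the write test ...
  mount -o remount,barrier /mnt/ssd
  # ... run it again ...
)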

> Or to phrase it
> differently:  If nobarrier makes a difference skipping it is not safe.
I do not fully understand that sentence, what do you mean by "makes a 
difference" and "skipping is not safe"?

-Georg

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: XFS and nobarrier with SSDs
  2015-12-14 10:18           ` Georg Schönberger
@ 2015-12-14 10:27             ` Christoph Hellwig
  -1 siblings, 0 replies; 18+ messages in thread
From: Christoph Hellwig @ 2015-12-14 10:27 UTC (permalink / raw)
  To: Georg Schönberger
  Cc: Christoph Hellwig, Martin Steigerwald, Jens Axboe,
	Linux FS-Devel, Jeff Moyer, Linux Block mailing list,
	XFS mailing list

On Mon, Dec 14, 2015 at 10:18:54AM +0000, Georg Schönberger wrote:
> > Or to phrase it
> > differently:  If nobarrier makes a difference skipping it is not safe.
> I do not fully understand that sentence, what do you mean by "makes a 
> difference" and "skipping is not safe"?

The rule of thumb is: if nobarrier makes your workload run faster you
should not be using it, aka: don't use it. 

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: XFS and nobarrier with SSDs
  2015-12-14 10:27             ` Christoph Hellwig
@ 2015-12-14 10:34               ` Georg Schönberger
  -1 siblings, 0 replies; 18+ messages in thread
From: Georg Schönberger @ 2015-12-14 10:34 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Martin Steigerwald, Jens Axboe, Linux FS-Devel, Jeff Moyer,
	Linux Block mailing list, XFS mailing list

On 2015-12-14 11:27, Christoph Hellwig wrote:
> On Mon, Dec 14, 2015 at 10:18:54AM +0000, Georg Schönberger wrote:
>>> Or to phrase it
>>> differently:  If nobarrier makes a difference skipping it is not safe.
>> I do not fully understand that sentence, what do you mean by "makes a
>> difference" and "skipping is not safe"?
> The rule of thumb is: if nobarrier makes your workload run faster you
> should not be using it, aka: don't use it.
OK, thanks for clarification.
Should the XFS FAQ be updated?
*http://xfs.org/index.php/XFS_FAQ#Q._Should_barriers_be_enabled_with_storage_which_has_a_persistent_write_cache.3F

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: XFS and nobarrier with SSDs
  2015-12-12 10:24 XFS and nobarrier with SSDs Georg Schönberger
  2015-12-12 12:26 ` Martin Steigerwald
@ 2015-12-14 11:48 ` Emmanuel Florac
  1 sibling, 0 replies; 18+ messages in thread
From: Emmanuel Florac @ 2015-12-14 11:48 UTC (permalink / raw)
  To: Georg Schönberger; +Cc: xfs

On Sat, 12 Dec 2015 10:24:25 +0000,
Georg Schönberger <g.schoenberger@xortex.com> wrote:

> I have found a recent discussion on the Ceph mailing list, anyone
> from XFS that can help us?
> 

See this article from someone using many SSDs:
http://blog.nordeus.com/dev-ops/power-failure-testing-with-ssds.htm

If you want to know whether a particular SSD model is reliable, your best
option is to buy one, write a lot of data to it, pull the plug on
the machine repeatedly while it's writing, and see what happens
with and without barriers.
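
A minimal sketch of such a test (hypothetical paths and record format, just to 
show the idea): keep appending numbered records, flush each one to disk, and 
note the last record the drive acknowledged; after pulling the plug and 
rebooting, every acknowledged record must still be present in the file:

  i=0
  while true; do
      printf 'record %d\n' "$i" |
          dd of=/mnt/testdisk/log.txt oflag=append conv=notrunc,fsync 2>/dev/null
      echo "last durable record: $i"   # log this on another machine (serial/ssh)
      i=$((i+1))
  done

Any acknowledged record missing after the power cut, with barriers enabled, 
points at a drive (or stack) that lies about flushes.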

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: XFS and nobarrier with SSDs
  2015-12-14 10:34               ` Georg Schönberger
  (?)
@ 2015-12-14 13:36               ` Christoph Hellwig
  -1 siblings, 0 replies; 18+ messages in thread
From: Christoph Hellwig @ 2015-12-14 13:36 UTC (permalink / raw)
  To: Georg Schönberger
  Cc: Christoph Hellwig, Martin Steigerwald, Jens Axboe,
	Linux FS-Devel, Jeff Moyer, Linux Block mailing list,
	XFS mailing list

On Mon, Dec 14, 2015 at 10:34:30AM +0000, Georg Schönberger wrote:
> OK, thanks for clarification.
> Should the XFS FAQ be updated?
> *http://xfs.org/index.php/XFS_FAQ#Q._Should_barriers_be_enabled_with_storage_which_has_a_persistent_write_cache.3F

Probably.  The text sounds to me like it was written a long time ago,
when Linux actually used barriers that also prevented I/O reordering
instead of just issuing the required cache flushes.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: XFS and nobarrier with SSDs
  2015-12-14 10:34               ` Georg Schönberger
  (?)
  (?)
@ 2015-12-14 16:39               ` Eric Sandeen
  -1 siblings, 0 replies; 18+ messages in thread
From: Eric Sandeen @ 2015-12-14 16:39 UTC (permalink / raw)
  To: xfs



On 12/14/15 4:34 AM, Georg Schönberger wrote:
> On 2015-12-14 11:27, Christoph Hellwig wrote:
>> On Mon, Dec 14, 2015 at 10:18:54AM +0000, Georg Schönberger wrote:
>>>> Or to phrase it
>>>> differently:  If nobarrier makes a difference skipping it is not safe.
>>> I do not fully understand that sentence, what do you mean by "makes a
>>> difference" and "skipping is not safe"?
>> The rule of thumb is: if nobarrier makes your workload run faster you
>> should not be using it, aka: don't use it.
> OK, thanks for clarification.
> Should the XFS FAQ be updated?
> *http://xfs.org/index.php/XFS_FAQ#Q._Should_barriers_be_enabled_with_storage_which_has_a_persistent_write_cache.3F

Yes, it should.  I've made some edits, hopefully it's up to date and clear now.

Thanks,
-Eric


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: XFS and nobarrier with SSDs
  2015-12-14 10:27             ` Christoph Hellwig
@ 2015-12-26 23:44               ` Linda Walsh
  -1 siblings, 0 replies; 18+ messages in thread
From: Linda Walsh @ 2015-12-26 23:44 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Linux Block mailing list, XFS mailing list, Linux FS-Devel



Christoph Hellwig wrote:
> The rule of thumb is: if nobarrier makes your workload run faster you
> should not be using it, aka: don't use it. 
----
	So what is the purpose of the switch if it is to only
be used when it makes no difference?

I.e. my RAID controller does write-through if its internal
battery needs replacing; otherwise it does write-back.

On top of that my system is on a UPS that is good for an hour or more
of running.

So, I used to use nobarrier on "work" disks where there were likely
to be a lot of "writes".  Those disks are also backed up daily via
xfsdump/restore.  I figured those would benefit most, and at worst
I could restore to the previous morning's backup.

I eventually stopped using the option, as for the most part I couldn't
really measure any reliable difference in performance (which means
I should use it?!?).

Hmmm...

The only times I have experienced disk corruption on a single
disk were either back before I ever tried the option, or when
I had several months to a year where I tried to use software
RAID5 (several-10+ years ago, before it was possible to use
multiple cores for doing some RAID operations).

I doubt I'm going to try it again soon, but being told that
it's only "ok" to use an option when it makes no difference
in performance *sounds* more than a little confusing.

^ permalink raw reply	[flat|nested] 18+ messages in thread


Thread overview: 18+ messages
2015-12-12 10:24 XFS and nobarrier with SSDs Georg Schönberger
2015-12-12 12:26 ` Martin Steigerwald
2015-12-14  6:43   ` Georg Schönberger
2015-12-14  8:38     ` Martin Steigerwald
2015-12-14  9:58       ` Christoph Hellwig
2015-12-14 10:18         ` Georg Schönberger
2015-12-14 10:27           ` Christoph Hellwig
2015-12-14 10:34             ` Georg Schönberger
2015-12-14 13:36               ` Christoph Hellwig
2015-12-14 16:39               ` Eric Sandeen
2015-12-26 23:44             ` Linda Walsh
2015-12-14 11:48 ` Emmanuel Florac
