* SSD slowdown with 3.3.X?
@ 2012-04-19  2:15 Joe Ceklosky
  2012-04-19  3:13 ` Mark Lord
  0 siblings, 1 reply; 11+ messages in thread
From: Joe Ceklosky @ 2012-04-19  2:15 UTC
  To: linux-ide

All,

Has anyone reported slowness using SSDs on kernel 3.3.X compiled
as 32-bit PAE with 16 GB of memory?  (I know I need to move to 64-bit
already; I will with Fedora 17.)

I am seeing terrible read/write performance on an SSD under 3.3.2.  When I
boot the same machine and SSD back into 3.2.15, all is fine.

Joe Ceklosky


* Re: SSD slowdown with 3.3.X?
  2012-04-19  2:15 SSD slowdown with 3.3.X? Joe Ceklosky
@ 2012-04-19  3:13 ` Mark Lord
  2012-04-20 15:23   ` Jeff Moyer
       [not found]   ` <4F90C4CF.1010000@gmail.com>
  0 siblings, 2 replies; 11+ messages in thread
From: Mark Lord @ 2012-04-19  3:13 UTC
  To: Joe Ceklosky; +Cc: linux-ide

On 12-04-18 10:15 PM, Joe Ceklosky wrote:
> All,
> 
> Has anyone reported slowness using SSDs on kernel 3.3.X compiled
> as 32-bit PAE with 16 GB of memory?  (I know I need to move to 64-bit
> already; I will with Fedora 17.)
> 
> I am seeing terrible read/write performance on an SSD under 3.3.2.  When I
> boot the same machine and SSD back into 3.2.15, all is fine.


Double-check which I/O scheduler the kernel is choosing.
For SSDs, it is normally "noop", but I noticed "cfq"
being chosen instead for some reason.

   find /sys -name scheduler | grep '/sd[a-z]/' | xargs cat
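
The active scheduler is the one shown in brackets; on a machine with
two drives, the output will look something like this (illustrative):

   noop deadline [cfq]
   noop deadline [cfq]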

-ml


* Re: SSD slowdown with 3.3.X?
  2012-04-19  3:13 ` Mark Lord
@ 2012-04-20 15:23   ` Jeff Moyer
       [not found]   ` <4F90C4CF.1010000@gmail.com>
  1 sibling, 0 replies; 11+ messages in thread
From: Jeff Moyer @ 2012-04-20 15:23 UTC
  To: Mark Lord; +Cc: Joe Ceklosky, linux-ide

Mark Lord <kernel@teksavvy.com> writes:

> On 12-04-18 10:15 PM, Joe Ceklosky wrote:
>> All,
>> 
>> Has anyone reported slowness using SSDs on kernel 3.3.X compiled
>> as 32-bit PAE with 16 GB of memory?  (I know I need to move to 64-bit
>> already; I will with Fedora 17.)
>> 
>> I am seeing terrible read/write performance on an SSD under 3.3.2.  When I
>> boot the same machine and SSD back into 3.2.15, all is fine.
>
>
> Double-check which I/O scheduler the kernel is choosing.
> For SSDs, it is normally "noop", but I noticed "cfq"
> being chosen instead for some reason.

The default I/O scheduler is simply the compiled-in default; the kernel
does not switch it per device.  Drivers may override it (some high-end
PCIe SSD drivers do, and I think s390 block drivers do as well), but in
general the default is left alone.
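
If in doubt, the compiled-in default can be checked against the kernel
config (a sketch; the config file path varies by distro):

   # CONFIG_DEFAULT_IOSCHED names the built-in default scheduler
   grep DEFAULT_IOSCHED /boot/config-$(uname -r)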

Cheers,
Jeff


* Re: SSD slowdown with 3.3.X?
       [not found]   ` <4F90C4CF.1010000@gmail.com>
@ 2012-04-21  2:40     ` Mark Lord
  2012-04-21  2:43       ` Mark Lord
  0 siblings, 1 reply; 11+ messages in thread
From: Mark Lord @ 2012-04-21  2:40 UTC
  To: Joe Ceklosky, IDE/ATA development list <linux-ide@vger.kernel.org>

On 12-04-19 10:07 PM, Joe Ceklosky wrote:
> Mark,
> 
> 
> Thanks for the info, but nothing like that shows up:
> 
> 
> [jceklosk@neptune tmp]$ cat c-3.2.15
> noop deadline [cfq]
> noop deadline [cfq]
> 
> 
> [jceklosk@neptune tmp]$ cat c-3.3.2
> noop deadline [cfq]
> noop deadline [cfq]


Well, the stuff you posted (above) shows that cfq is being used
instead of noop.  For SSDs, noop is the more natural choice,
and used to be the default in the kernel for a while.
I wonder when that changed?

You can change it (after boot) by echoing "noop" into
those same sysfs entries, as in the sketch below.
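
For example (a sketch, assuming the SSD is /dev/sda; the change does
not persist across reboots):

   # switch the active scheduler at runtime (needs root)
   echo noop > /sys/block/sda/queue/scheduler

   # verify: the active scheduler is shown in brackets
   cat /sys/block/sda/queue/scheduler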

Cheers



* Re: SSD slowdown with 3.3.X?
  2012-04-21  2:40     ` Mark Lord
@ 2012-04-21  2:43       ` Mark Lord
  2012-04-21  3:53         ` Stan Hoeppner
  2012-04-23 14:11         ` Jeff Moyer
  0 siblings, 2 replies; 11+ messages in thread
From: Mark Lord @ 2012-04-21  2:43 UTC
  To: Joe Ceklosky; +Cc: IDE/ATA development list <linux-ide@vger.kernel.org>

On 12-04-20 10:40 PM, Mark Lord wrote:
> On 12-04-19 10:07 PM, Joe Ceklosky wrote:
>> Mark,
>>
>>
>> Thanks for the info, but nothing like that shows up:
>>
>>
>> [jceklosk@neptune tmp]$ cat c-3.2.15
>> noop deadline [cfq]
>> noop deadline [cfq]
>>
>>
>> [jceklosk@neptune tmp]$ cat c-3.3.2
>> noop deadline [cfq]
>> noop deadline [cfq]
> 
> 
> Well, the stuff you posted (above) shows that cfq is being used
> instead of noop.  For SSDs, noop is the more natural choice,
> and used to be the default in the kernel for a while.
> I wonder when that changed?

Looking into the block layer now, I see that "cfq" at some point
became "SSD aware", which is probably when the default I/O scheduler
for SSDs changed back to cfq from noop.

Not 100% sure, but that's how it appears now.
I still have my systems set to noop when an SSD is detected.
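
A quick way to check whether the kernel has flagged a drive as an SSD
(a sketch, assuming the drive is /dev/sda) is the non-rotational queue
attribute, which is what the SSD-aware heuristics key off:

   # 0 = non-rotational (SSD), 1 = rotational (spinning disk)
   cat /sys/block/sda/queue/rotational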



* Re: SSD slowdown with 3.3.X?
  2012-04-21  2:43       ` Mark Lord
@ 2012-04-21  3:53         ` Stan Hoeppner
  2012-04-21 11:45           ` cwillu
  2012-04-23 14:11         ` Jeff Moyer
  1 sibling, 1 reply; 11+ messages in thread
From: Stan Hoeppner @ 2012-04-21  3:53 UTC
  To: Mark Lord
  Cc: Joe Ceklosky, IDE/ATA development list <linux-ide@vger.kernel.org>

On 4/20/2012 9:43 PM, Mark Lord wrote:
> On 12-04-20 10:40 PM, Mark Lord wrote:
>> On 12-04-19 10:07 PM, Joe Ceklosky wrote:
>>> Mark,
>>>
>>>
>>> Thanks for the info, but nothing like that shows up:
>>>
>>>
>>> [jceklosk@neptune tmp]$ cat c-3.2.15
>>> noop deadline [cfq]
>>> noop deadline [cfq]
>>>
>>>
>>> [jceklosk@neptune tmp]$ cat c-3.3.2
>>> noop deadline [cfq]
>>> noop deadline [cfq]

Probably not relevant in this case but maybe worth mentioning to get the
word out:

"As of kernel 3.2.12, the default i/o scheduler, CFQ, will defeat much
of the parallelization in XFS."

http://www.xfs.org/index.php/XFS_FAQ

-- 
Stan



* Re: SSD slowdown with 3.3.X?
  2012-04-21  3:53         ` Stan Hoeppner
@ 2012-04-21 11:45           ` cwillu
  2012-04-21 18:30             ` Stan Hoeppner
  0 siblings, 1 reply; 11+ messages in thread
From: cwillu @ 2012-04-21 11:45 UTC
  To: stan
  Cc: Mark Lord, Joe Ceklosky, IDE/ATA development list <linux-ide@vger.kernel.org>

> Probably not relevant in this case but maybe worth mentioning to get the
> word out:
>
> "As of kernel 3.2.12, the default i/o scheduler, CFQ, will defeat much
> of the parallelization in XFS."
>
> http://www.xfs.org/index.php/XFS_FAQ

Not that it's terribly relevant to btrfs, but do you have a better
citation for that than a very recent one-line wiki change that only
cites the user's own anecdote?


* Re: SSD slowdown with 3.3.X?
  2012-04-21 11:45           ` cwillu
@ 2012-04-21 18:30             ` Stan Hoeppner
  2012-04-23 12:44               ` Mark Lord
  0 siblings, 1 reply; 11+ messages in thread
From: Stan Hoeppner @ 2012-04-21 18:30 UTC
  To: cwillu
  Cc: Mark Lord, Joe Ceklosky, IDE/ATA development list <linux-ide@vger.kernel.org>

On 4/21/2012 6:45 AM, cwillu wrote:
>> Probably not relevant in this case but maybe worth mentioning to get the
>> word out:
>>
>> "As of kernel 3.2.12, the default i/o scheduler, CFQ, will defeat much
>> of the parallelization in XFS."
>>
>> http://www.xfs.org/index.php/XFS_FAQ
> 
> Not that it's terribly relevant to btrfs, but do you have a better
> citation for that than a very recent one-line wiki change that only
> cites the user's own anecdote?

Apologies for the rather weak citation.  It was simply easier to quote
that wiki entry.

How about something directly from Dave's fingers:
http://www.spinics.net/lists/xfs/msg10824.html

The many issues with CFQ+XFS didn't start with 3.2.12; they began long before that.

-- 
Stan


* Re: SSD slowdown with 3.3.X?
  2012-04-21 18:30             ` Stan Hoeppner
@ 2012-04-23 12:44               ` Mark Lord
  2012-04-25  0:22                 ` Stan Hoeppner
  0 siblings, 1 reply; 11+ messages in thread
From: Mark Lord @ 2012-04-23 12:44 UTC
  To: stan
  Cc: cwillu, Joe Ceklosky, IDE/ATA development list <linux-ide@vger.kernel.org>

On 12-04-21 02:30 PM, Stan Hoeppner wrote:
> On 4/21/2012 6:45 AM, cwillu wrote:
>>> Probably not relevant in this case but maybe worth mentioning to get the
>>> word out:
>>>
>>> "As of kernel 3.2.12, the default i/o scheduler, CFQ, will defeat much
>>> of the parallelization in XFS."
>>>
>>> http://www.xfs.org/index.php/XFS_FAQ
>>
>> Not that it's terribly relevant to btrfs, but do you have a better
>> citation for that than a very recent one-line wiki change that only
>> cites the user's own anecdote?
> 
> Apologies for the rather weak citation.  It was simply easier to quote
> that wiki entry.
> 
> How about something directly from Dave's fingers:
> http://www.spinics.net/lists/xfs/msg10824.html
> 
> The many issues with CFQ+XFS didn't start with 3.2.12; they began long before that.


Thanks for the link.  That's handy to know.

The problems there are with XFS+RAID vs. CFQ, not XFS by itself.
Enterprise servers will normally have RAID under XFS,
though not all smaller systems do.

Cheers



* Re: SSD slowdown with 3.3.X?
  2012-04-21  2:43       ` Mark Lord
  2012-04-21  3:53         ` Stan Hoeppner
@ 2012-04-23 14:11         ` Jeff Moyer
  1 sibling, 0 replies; 11+ messages in thread
From: Jeff Moyer @ 2012-04-23 14:11 UTC
  To: Mark Lord
  Cc: Joe Ceklosky, IDE/ATA development list <linux-ide@vger.kernel.org>

Mark Lord <kernel@teksavvy.com> writes:

> On 12-04-20 10:40 PM, Mark Lord wrote:
>> On 12-04-19 10:07 PM, Joe Ceklosky wrote:
>>> Mark,
>>>
>>>
>>> Thanks for the info, but nothing like that shows up:
>>>
>>>
>>> [jceklosk@neptune tmp]$ cat c-3.2.15
>>> noop deadline [cfq]
>>> noop deadline [cfq]
>>>
>>>
>>> [jceklosk@neptune tmp]$ cat c-3.3.2
>>> noop deadline [cfq]
>>> noop deadline [cfq]
>> 
>> 
>> Well, the stuff you posted (above) shows that cfq is being used
>> instead of noop.  For SSDs, noop is the more natural choice,
>> and used to be the default in the kernel for a while.
>> I wonder when that changed?
>
> Looking into the block layer now, I see that "cfq" at some point
> became "SSD aware", which is probably when the default I/O scheduler
> for SSDs changed back to cfq from noop.

The block layer never changed the I/O scheduler based on whether or not
the underlying storage was an SSD.  Maybe your particular distro did
that for you?  I can't say for sure, and it really doesn't matter: the
original report here is that the SAME CONFIGURATION (cfq on both the
old and new kernels) now regresses in performance.  We should
concentrate on fixing *that*.
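
To put numbers on the regression, a direct-I/O dd run on each kernel is
a quick first pass (a sketch; the mount point and file are hypothetical):

   # sequential write, bypassing the page cache
   dd if=/dev/zero of=/mnt/ssd/testfile bs=1M count=1024 oflag=direct

   # sequential read of the same file back
   dd if=/mnt/ssd/testfile of=/dev/null bs=1M iflag=direct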

Cheers,
Jeff


* Re: SSD slowdown with 3.3.X?
  2012-04-23 12:44               ` Mark Lord
@ 2012-04-25  0:22                 ` Stan Hoeppner
  0 siblings, 0 replies; 11+ messages in thread
From: Stan Hoeppner @ 2012-04-25  0:22 UTC
  To: Mark Lord
  Cc: cwillu, Joe Ceklosky, IDE/ATA development list <linux-ide@vger.kernel.org>

On 4/23/2012 7:44 AM, Mark Lord wrote:
> On 12-04-21 02:30 PM, Stan Hoeppner wrote:
>> On 4/21/2012 6:45 AM, cwillu wrote:
>>>> Probably not relevant in this case but maybe worth mentioning to get the
>>>> word out:
>>>>
>>>> "As of kernel 3.2.12, the default i/o scheduler, CFQ, will defeat much
>>>> of the parallelization in XFS."
>>>>
>>>> http://www.xfs.org/index.php/XFS_FAQ
>>>
>>> Not that it's terribly relevant to btrfs, but do you have a better
>>> citation for that than a very recent one-line wiki change that only
>>> cites the user's own anecdote?
>>
>> Apologies for the rather weak citation.  It was simply easier to quote
>> that wiki entry.
>>
>> How about something directly from Dave's fingers:
>> http://www.spinics.net/lists/xfs/msg10824.html
>>
>> The many issues with CFQ+XFS didn't start with 3.2.12; they began long before that.
> 
> 
> Thanks for the link.  That's handy to know.
> 
> The problems there are with XFS+RAID vs. CFQ, not XFS by itself.
> Enterprise servers will normally have RAID under XFS,
> though not all smaller systems do.

While it's true there are single-disk XFS filesystems in the wild--I
have one--I'd have to make an educated guess that the vast majority of
XFS filesystems reside atop SAN, HBA, or md-based RAID.  For any
hardware RAID solution with write cache, noop is recommended, allowing
the hardware scheduler to order the writes since they're sitting in its
cache.  For md-based RAID I believe most are getting best results with
deadline.

I can't quote any numbers as I don't believe anyone has done such a poll
or research on this.  So it's best guess only.
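
For anyone wanting to apply those defaults automatically, a udev rule
can select the scheduler per device at boot (a hypothetical sketch; the
file name and match patterns are assumptions and may need adjusting):

   # /etc/udev/rules.d/60-iosched.rules
   # non-rotational devices (SSDs): noop
   ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"
   # rotational disks (e.g. md RAID member drives): deadline
   ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="deadline"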

-- 
Stan
