linux-kernel.vger.kernel.org archive mirror
* [PATCH] block: Make CFQ default to IOPS mode on SSDs
@ 2015-05-19 20:55 Tahsin Erdogan
  2015-05-27 20:14 ` Tahsin Erdogan
  2015-06-09 10:18 ` Romain Francoise
  0 siblings, 2 replies; 9+ messages in thread
From: Tahsin Erdogan @ 2015-05-19 20:55 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, Tahsin Erdogan

CFQ idling causes reduced IOPS throughput on non-rotational disks.
Since SSDs have no disk head to seek, idling in anticipation of
future nearby IO requests does not improve performance.

By turning off idling (and switching to IOPS mode), we allow other
processes to dispatch IO requests down to the driver and so increase IO
throughput.
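The same effect can be had by hand on a running system through the cfq
sysfs knobs; the device name below is only an example:

```shell
# Hypothetical device sdb; adjust for your system.
# slice_idle is in milliseconds; writing 0 puts CFQ in IOPS mode.
cat /sys/block/sdb/queue/rotational              # 0 = non-rotational (SSD)
echo 0 > /sys/block/sdb/queue/iosched/slice_idle
```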

The following FIO benchmark results were taken on a cloud SSD offering,
with idling on and off:

Idling     iops    avg-lat(ms)    stddev            bw
------------------------------------------------------
    On     7054    90.107         38.697     28217KB/s
   Off    29255    21.836         11.730    117022KB/s

fio --name=temp --size=100G --time_based --ioengine=libaio \
    --randrepeat=0 --direct=1 --invalidate=1 --verify=0 \
    --verify_fatal=0 --rw=randread --blocksize=4k --group_reporting=1 \
    --filename=/dev/sdb --runtime=10 --iodepth=64 --numjobs=10

And the following is from a local SSD run:

Idling     iops    avg-lat(ms)    stddev            bw
------------------------------------------------------
    On    19320    33.043         14.068     77281KB/s
   Off    21626    29.465         12.662     86507KB/s

fio --name=temp --size=5G --time_based --ioengine=libaio \
    --randrepeat=0 --direct=1 --invalidate=1 --verify=0 \
    --verify_fatal=0 --rw=randread --blocksize=4k --group_reporting=1 \
    --filename=/fio_data --runtime=10 --iodepth=64 --numjobs=10
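As a quick consistency check on the two result tables above, the reported
bandwidth tracks iops multiplied by the 4k block size:

```shell
# Each pair is iops:bw(KB/s) from the tables; with --blocksize=4k,
# bw should be roughly iops * 4, so require agreement within 1%.
for pair in 7054:28217 29255:117022 19320:77281 21626:86507; do
  iops=${pair%%:*}; bw=${pair##*:}
  calc=$((iops * 4))
  diff=$((bw - calc)); [ "$diff" -lt 0 ] && diff=$((-diff))
  [ "$diff" -le $((bw / 100)) ] && echo "$iops iops ~ ${bw}KB/s ok"
done
```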

Reviewed-by: Nauman Rafique <nauman@google.com>
Signed-off-by: Tahsin Erdogan <tahsin@google.com>
---
 block/cfq-iosched.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index 5da8e6e..402be01 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -4460,7 +4460,7 @@ static int cfq_init_queue(struct request_queue *q, struct elevator_type *e)
 	cfqd->cfq_slice[1] = cfq_slice_sync;
 	cfqd->cfq_target_latency = cfq_target_latency;
 	cfqd->cfq_slice_async_rq = cfq_slice_async_rq;
-	cfqd->cfq_slice_idle = cfq_slice_idle;
+	cfqd->cfq_slice_idle = blk_queue_nonrot(q) ? 0 : cfq_slice_idle;
 	cfqd->cfq_group_idle = cfq_group_idle;
 	cfqd->cfq_latency = 1;
 	cfqd->hw_tag = -1;
-- 
2.2.0.rc0.207.ga3a616c


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH] block: Make CFQ default to IOPS mode on SSDs
  2015-05-19 20:55 [PATCH] block: Make CFQ default to IOPS mode on SSDs Tahsin Erdogan
@ 2015-05-27 20:14 ` Tahsin Erdogan
  2015-06-05 22:58   ` Tahsin Erdogan
  2015-06-09 10:18 ` Romain Francoise
  1 sibling, 1 reply; 9+ messages in thread
From: Tahsin Erdogan @ 2015-05-27 20:14 UTC (permalink / raw)
  To: Jens Axboe, Vivek Goyal; +Cc: linux-kernel, Tahsin Erdogan

On Tue, May 19, 2015 at 1:55 PM, Tahsin Erdogan <tahsin@google.com> wrote:
> CFQ idling causes reduced IOPS throughput on non-rotational disks.
> [...]

Ping...


* Re: [PATCH] block: Make CFQ default to IOPS mode on SSDs
  2015-05-27 20:14 ` Tahsin Erdogan
@ 2015-06-05 22:58   ` Tahsin Erdogan
  2015-06-06  1:20     ` Jens Axboe
  0 siblings, 1 reply; 9+ messages in thread
From: Tahsin Erdogan @ 2015-06-05 22:58 UTC (permalink / raw)
  To: Jens Axboe, Vivek Goyal, tytso
  Cc: linux-kernel, Tahsin Erdogan, Nauman Rafique

On Wed, May 27, 2015 at 1:14 PM, Tahsin Erdogan <tahsin@google.com> wrote:
> On Tue, May 19, 2015 at 1:55 PM, Tahsin Erdogan <tahsin@google.com> wrote:
>> CFQ idling causes reduced IOPS throughput on non-rotational disks.
>> [...]
>
> Ping...

Trying once more..


* Re: [PATCH] block: Make CFQ default to IOPS mode on SSDs
  2015-06-05 22:58   ` Tahsin Erdogan
@ 2015-06-06  1:20     ` Jens Axboe
  0 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2015-06-06  1:20 UTC (permalink / raw)
  To: Tahsin Erdogan, Vivek Goyal, tytso; +Cc: linux-kernel, Nauman Rafique

On 06/05/2015 04:58 PM, Tahsin Erdogan wrote:
> On Wed, May 27, 2015 at 1:14 PM, Tahsin Erdogan <tahsin@google.com> wrote:
>> On Tue, May 19, 2015 at 1:55 PM, Tahsin Erdogan <tahsin@google.com> wrote:
>>> CFQ idling causes reduced IOPS throughput on non-rotational disks.
>>> [...]
>>
>> Ping...
>
> Trying once more..

This one worked :-). I agree, it's probably the sane thing to do; I'll 
apply this for 4.2.

-- 
Jens Axboe



* Re: [PATCH] block: Make CFQ default to IOPS mode on SSDs
  2015-05-19 20:55 [PATCH] block: Make CFQ default to IOPS mode on SSDs Tahsin Erdogan
  2015-05-27 20:14 ` Tahsin Erdogan
@ 2015-06-09 10:18 ` Romain Francoise
  2015-06-09 17:42   ` Jens Axboe
  1 sibling, 1 reply; 9+ messages in thread
From: Romain Francoise @ 2015-06-09 10:18 UTC (permalink / raw)
  To: Tahsin Erdogan; +Cc: Jens Axboe, linux-kernel

Hi,

On Tue, May 19, 2015 at 01:55:21PM -0700, Tahsin Erdogan wrote:
> --- a/block/cfq-iosched.c
> +++ b/block/cfq-iosched.c
> @@ -4460,7 +4460,7 @@ static int cfq_init_queue(struct request_queue *q, struct elevator_type *e)
>  	cfqd->cfq_slice[1] = cfq_slice_sync;
>  	cfqd->cfq_target_latency = cfq_target_latency;
>  	cfqd->cfq_slice_async_rq = cfq_slice_async_rq;
> -	cfqd->cfq_slice_idle = cfq_slice_idle;
> +	cfqd->cfq_slice_idle = blk_queue_nonrot(q) ? 0 : cfq_slice_idle;
>  	cfqd->cfq_group_idle = cfq_group_idle;
>  	cfqd->cfq_latency = 1;
>  	cfqd->hw_tag = -1;

Did you test this patch with regular AHCI SSD devices? Applying it on
top of v4.1-rc7 makes no difference, slice_idle is still initialized to
8 in my setup, while rotational is 0.
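For reference, the state described here can be checked directly (device
name is an example):

```shell
# On an affected setup, rotational reads 0 while slice_idle
# still shows the built-in default of 8.
cat /sys/block/sdb/queue/rotational
cat /sys/block/sdb/queue/iosched/slice_idle
```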

Isn't the elevator initialized long before the non-rotational flag is
actually set on the device (which probably happens after it's probed on
the scsi bus)?


* Re: [PATCH] block: Make CFQ default to IOPS mode on SSDs
  2015-06-09 10:18 ` Romain Francoise
@ 2015-06-09 17:42   ` Jens Axboe
  2015-06-09 21:54     ` Tahsin Erdogan
  2015-06-10  6:44     ` Romain Francoise
  0 siblings, 2 replies; 9+ messages in thread
From: Jens Axboe @ 2015-06-09 17:42 UTC (permalink / raw)
  To: Romain Francoise, Tahsin Erdogan; +Cc: linux-kernel

[-- Attachment #1: Type: text/plain, Size: 1523 bytes --]

On 06/09/2015 04:18 AM, Romain Francoise wrote:
> Hi,
>
> On Tue, May 19, 2015 at 01:55:21PM -0700, Tahsin Erdogan wrote:
>> --- a/block/cfq-iosched.c
>> +++ b/block/cfq-iosched.c
>> @@ -4460,7 +4460,7 @@ static int cfq_init_queue(struct request_queue *q, struct elevator_type *e)
>>   	cfqd->cfq_slice[1] = cfq_slice_sync;
>>   	cfqd->cfq_target_latency = cfq_target_latency;
>>   	cfqd->cfq_slice_async_rq = cfq_slice_async_rq;
>> -	cfqd->cfq_slice_idle = cfq_slice_idle;
>> +	cfqd->cfq_slice_idle = blk_queue_nonrot(q) ? 0 : cfq_slice_idle;
>>   	cfqd->cfq_group_idle = cfq_group_idle;
>>   	cfqd->cfq_latency = 1;
>>   	cfqd->hw_tag = -1;
>
> Did you test this patch with regular AHCI SSD devices? Applying it on
> top of v4.1-rc7 makes no difference, slice_idle is still initialized to
> 8 in my setup, while rotational is 0.
>
> Isn't the elevator initialized long before the non-rotational flag is
> actually set on the device (which probably happens after it's probed on
> the scsi bus)?

You are absolutely correct. What happens is that the queue is allocated 
and initialized, and cfq checks the flag. But the flag is set later in 
the process, when we have finished probing the device and checked 
whether it's rotational or not.

There are a few options to handle this. The attached might work, though 
it's not tested at all. Basically it adds an io sched registration hook 
that is called when we add the disk to the queue. Non-rotational 
detection will have been done by that point.

Does that work for you?

-- 
Jens Axboe


[-- Attachment #2: elv-register.patch --]
[-- Type: text/x-patch, Size: 2290 bytes --]

diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index c808ad87652d..af8918eb7cd5 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -4508,7 +4508,7 @@ static int cfq_init_queue(struct request_queue *q, struct elevator_type *e)
 	cfqd->cfq_slice[1] = cfq_slice_sync;
 	cfqd->cfq_target_latency = cfq_target_latency;
 	cfqd->cfq_slice_async_rq = cfq_slice_async_rq;
-	cfqd->cfq_slice_idle = blk_queue_nonrot(q) ? 0 : cfq_slice_idle;
+	cfqd->cfq_slice_idle = cfq_slice_idle;
 	cfqd->cfq_group_idle = cfq_group_idle;
 	cfqd->cfq_latency = 1;
 	cfqd->hw_tag = -1;
@@ -4525,6 +4525,15 @@ out_free:
 	return ret;
 }
 
+static void cfq_registered_queue(struct request_queue *q)
+{
+	struct elevator_queue *e = q->elevator;
+	struct cfq_data *cfqd = e->elevator_data;
+
+	if (blk_queue_nonrot(q))
+		cfqd->cfq_slice_idle = 0;
+}
+
 /*
  * sysfs parts below -->
  */
@@ -4640,6 +4649,7 @@ static struct elevator_type iosched_cfq = {
 		.elevator_may_queue_fn =	cfq_may_queue,
 		.elevator_init_fn =		cfq_init_queue,
 		.elevator_exit_fn =		cfq_exit_queue,
+		.elevator_registered_fn =	cfq_registered_queue,
 	},
 	.icq_size	=	sizeof(struct cfq_io_cq),
 	.icq_align	=	__alignof__(struct cfq_io_cq),
diff --git a/block/elevator.c b/block/elevator.c
index 59794d0d38e3..5f0452734a40 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -810,6 +810,8 @@ int elv_register_queue(struct request_queue *q)
 		}
 		kobject_uevent(&e->kobj, KOBJ_ADD);
 		e->registered = 1;
+		if (e->type->ops.elevator_registered_fn)
+			e->type->ops.elevator_registered_fn(q);
 	}
 	return error;
 }
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index 45a91474487d..638b324f0291 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -39,6 +39,7 @@ typedef void (elevator_deactivate_req_fn) (struct request_queue *, struct reques
 typedef int (elevator_init_fn) (struct request_queue *,
 				struct elevator_type *e);
 typedef void (elevator_exit_fn) (struct elevator_queue *);
+typedef void (elevator_registered_fn) (struct request_queue *);
 
 struct elevator_ops
 {
@@ -68,6 +69,7 @@ struct elevator_ops
 
 	elevator_init_fn *elevator_init_fn;
 	elevator_exit_fn *elevator_exit_fn;
+	elevator_registered_fn *elevator_registered_fn;
 };
 
 #define ELV_NAME_MAX	(16)


* Re: [PATCH] block: Make CFQ default to IOPS mode on SSDs
  2015-06-09 17:42   ` Jens Axboe
@ 2015-06-09 21:54     ` Tahsin Erdogan
  2015-06-10  6:44     ` Romain Francoise
  1 sibling, 0 replies; 9+ messages in thread
From: Tahsin Erdogan @ 2015-06-09 21:54 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Romain Francoise, linux-kernel

Thanks for catching this.

In my testing, I was switching to cfq through sysfs. Since disk
initialization happens earlier than manual switching, I didn't hit
this problem.
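For context, a manual switch of this kind looks as follows (device name
is an example); it runs long after device probing, so the non-rotational
flag is already set by the time cfq_init_queue is called:

```shell
# Hypothetical device: select cfq at runtime, then confirm.
echo cfq > /sys/block/sdb/queue/scheduler
cat /sys/block/sdb/queue/scheduler   # active scheduler shown in brackets
```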

On Tue, Jun 9, 2015 at 10:42 AM, Jens Axboe <axboe@kernel.dk> wrote:
> On 06/09/2015 04:18 AM, Romain Francoise wrote:
>>
>> Hi,
>>
>> On Tue, May 19, 2015 at 01:55:21PM -0700, Tahsin Erdogan wrote:
>>>
>>> [...]
>>
>>
>> Did you test this patch with regular AHCI SSD devices? Applying it on
>> top of v4.1-rc7 makes no difference, slice_idle is still initialized to
>> 8 in my setup, while rotational is 0.
>>
>> Isn't the elevator initialized long before the non-rotational flag is
>> actually set on the device (which probably happens after it's probed on
>> the scsi bus)?
>
>
> You are absolutely correct. What happens is that the queue is allocated and
> initialized, and cfq checks the flag. But the flag is set later in the
> process, when we have finished probing the device and checked whether it's
> rotational or not.
>
> There are a few options to handle this. The attached might work, though it's
> not tested at all. Basically it adds an io sched registration hook that is
> called when we add the disk to the queue. Non-rotational detection will have
> been done by that point.
>
> Does that work for you?
>
> --
> Jens Axboe
>


* Re: [PATCH] block: Make CFQ default to IOPS mode on SSDs
  2015-06-09 17:42   ` Jens Axboe
  2015-06-09 21:54     ` Tahsin Erdogan
@ 2015-06-10  6:44     ` Romain Francoise
  2015-06-10 14:03       ` Jens Axboe
  1 sibling, 1 reply; 9+ messages in thread
From: Romain Francoise @ 2015-06-10  6:44 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Tahsin Erdogan, linux-kernel

Hi,

On Tue, Jun 09, 2015 at 11:42:29AM -0600, Jens Axboe wrote:
> There are a few options to handle this. The attached might work, though
> it's not tested at all. Basically it adds an io sched registration hook
> that is called when we add the disk to the queue. Non-rotational
> detection will have been done by that point.
>
> Does that work for you?

Yep, that works perfectly in my (admittedly limited) testing; slice_idle
is correctly set to 0 on non-rotational devices and keeps its default
value of 8 otherwise. Feel free to add my Tested-by.

Thanks!


* Re: [PATCH] block: Make CFQ default to IOPS mode on SSDs
  2015-06-10  6:44     ` Romain Francoise
@ 2015-06-10 14:03       ` Jens Axboe
  0 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2015-06-10 14:03 UTC (permalink / raw)
  To: Romain Francoise; +Cc: Tahsin Erdogan, linux-kernel

On 06/10/2015 12:44 AM, Romain Francoise wrote:
> Hi,
>
> On Tue, Jun 09, 2015 at 11:42:29AM -0600, Jens Axboe wrote:
>> There are a few options to handle this. The attached might work, though
>> it's not tested at all. Basically it adds an io sched registration hook
>> that is called when we add the disk to the queue. Non-rotational
>> detection will have been done by that point.
>>
>> Does that work for you?
>
> Yep, that works perfectly in my (admittedly limited) testing; slice_idle
> is correctly set to 0 on non-rotational devices and keeps its default
> value of 8 otherwise. Feel free to add my Tested-by.
>
> Thanks!

Thanks for testing, it is now committed.

-- 
Jens Axboe



end of thread, other threads:[~2015-06-10 14:04 UTC | newest]

Thread overview: 9+ messages
2015-05-19 20:55 [PATCH] block: Make CFQ default to IOPS mode on SSDs Tahsin Erdogan
2015-05-27 20:14 ` Tahsin Erdogan
2015-06-05 22:58   ` Tahsin Erdogan
2015-06-06  1:20     ` Jens Axboe
2015-06-09 10:18 ` Romain Francoise
2015-06-09 17:42   ` Jens Axboe
2015-06-09 21:54     ` Tahsin Erdogan
2015-06-10  6:44     ` Romain Francoise
2015-06-10 14:03       ` Jens Axboe
