From: Sagi Grimberg <sagi@grimberg.me>
To: Rachit Agarwal <rach4x0r@gmail.com>
Cc: Jens Axboe <axboe@kernel.dk>, Christoph Hellwig <hch@lst.de>,
	linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-kernel@vger.kernel.org, Keith Busch <kbusch@kernel.org>,
	Ming Lei <ming.lei@redhat.com>,
	Jaehyun Hwang <jaehyun.hwang@cornell.edu>,
	Qizhe Cai <qc228@cornell.edu>,
	Midhul Vuppalapati <mvv25@cornell.edu>,
	Sagi Grimberg <sagi@lightbitslabs.com>,
	Rachit Agarwal <ragarwal@cornell.edu>
Subject: Re: [PATCH] iosched: Add i10 I/O Scheduler
Date: Mon, 30 Nov 2020 11:20:30 -0800	[thread overview]
Message-ID: <0cce60db-5a41-2e1b-ba5d-7905966bec25@grimberg.me> (raw)
In-Reply-To: <CAKeUqKKHg1wD19pnwJEd8whubnuGVic_ZhDjebaq3kKmY9TtsQ@mail.gmail.com>


> Dear all:
> 
> Thanks, again, for the very constructive discussions.
> 
> I am writing back with quite a few updates:
> 
> 1. We have now included a detailed comparison of the i10 scheduler 
> against Kyber over NVMe-over-TCP 
> (https://github.com/i10-kernel/upstream-linux/blob/master/i10-evaluation.pdf). 
> In a nutshell, when operating with NVMe-over-TCP, i10 demonstrates its 
> core tradeoff: higher latency, but also higher throughput.
> 
> 2. We have now implemented an adaptive version of the i10 I/O 
> scheduler that uses the number of outstanding requests at the time of 
> batch dispatch (and whether the dispatch was triggered by a timeout) 
> to adaptively set the batch size. The new results 
> (https://github.com/i10-kernel/upstream-linux/blob/master/i10-evaluation.pdf) 
> show that i10-adaptive further improves performance at low loads while 
> keeping the performance at high loads. IMO, there is still much to do 
> in designing improved adaptation algorithms.
> 
> 3. We have now updated the i10-evaluation document to include results 
> for local storage access. The core take-away is that, compared to 
> noop, i10-adaptive can achieve similar throughput and latency at high 
> loads, but still requires more work at lower loads. However, given 
> that the tradeoff exposed by the i10 scheduler is particularly useful 
> for remote storage devices (and, as Jens suggested, perhaps for 
> virtualized local storage access), I agree with Sagi -- I think we 
> should consider including it in the core, since it may be useful for 
> a broad range of new use cases.
> 
> We have also created a second version of the patch that includes these 
> updates: 
> https://github.com/i10-kernel/upstream-linux/blob/master/0002-iosched-Add-i10-I-O-Scheduler.patch
> 
> As always, thank you for the constructive discussion and I look forward 
> to working with you on this.

Thanks Rachit,

It would be good if you could send a formal patch for the adaptive
queueing so people can have a look.

One thing that was left on the table is whether this should be a
full-blown scheduler or core block-layer infrastructure that would
either be enabled on demand or by default.

I think Jens and Ming expressed that this is something we should place
in the block core, but I'd like to hear more; maybe Ming can elaborate
on his ideas for how to do this.

Thread overview: 29+ messages

2020-11-12 14:07 [PATCH] iosched: Add i10 I/O Scheduler Rachit Agarwal
2020-11-12 18:02 ` Jens Axboe
2020-11-13 20:34   ` Sagi Grimberg
2020-11-13 21:03     ` Jens Axboe
2020-11-13 21:23       ` Sagi Grimberg
2020-11-13 21:26         ` Jens Axboe
2020-11-13 21:36           ` Sagi Grimberg
2020-11-13 21:44             ` Jens Axboe
2020-11-13 21:56               ` Sagi Grimberg
     [not found]                 ` <CAKeUqKKHg1wD19pnwJEd8whubnuGVic_ZhDjebaq3kKmY9TtsQ@mail.gmail.com>
2020-11-30 19:20                   ` Sagi Grimberg [this message]
2020-12-28 14:28                   ` Rachit Agarwal
2021-01-11 18:15                     ` Rachit Agarwal
2020-11-16  8:41             ` Ming Lei
2020-11-13 14:59 ` Ming Lei
2020-11-13 20:58   ` Sagi Grimberg