All of lore.kernel.org
* [PATCH] nvmet: Use direct IO for writes
@ 2016-09-21 18:10 Sagi Grimberg
  2016-09-22 13:57 ` Christoph Hellwig
  0 siblings, 1 reply; 8+ messages in thread
From: Sagi Grimberg @ 2016-09-21 18:10 UTC (permalink / raw)


We're designed to work with high-end devices where
direct IO makes perfect sense. We noticed that without
REQ_SYNC set on writes, we pay a context switch by
scheduling kblockd instead of going directly to the device.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/target/io-cmd.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/nvme/target/io-cmd.c b/drivers/nvme/target/io-cmd.c
index 2cd069b691ae..4132b6b98182 100644
--- a/drivers/nvme/target/io-cmd.c
+++ b/drivers/nvme/target/io-cmd.c
@@ -58,6 +58,7 @@ static void nvmet_execute_rw(struct nvmet_req *req)
 
 	if (req->cmd->rw.opcode == nvme_cmd_write) {
 		op = REQ_OP_WRITE;
+		op_flags = WRITE_ODIRECT;
 		if (req->cmd->rw.control & cpu_to_le16(NVME_RW_FUA))
 			op_flags |= REQ_FUA;
 	} else {
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH] nvmet: Use direct IO for writes
  2016-09-21 18:10 [PATCH] nvmet: Use direct IO for writes Sagi Grimberg
@ 2016-09-22 13:57 ` Christoph Hellwig
  2016-09-22 14:09   ` Jens Axboe
  0 siblings, 1 reply; 8+ messages in thread
From: Christoph Hellwig @ 2016-09-22 13:57 UTC (permalink / raw)


On Wed, Sep 21, 2016 at 11:10:50AM -0700, Sagi Grimberg wrote:
> We're designed to work with high-end devices where
> direct IO makes perfect sense. We noticed that without
> REQ_SYNC set on writes, we pay a context switch by
> scheduling kblockd instead of going directly to the device.

This looks reasonable.  But I still wonder why we bother to inject a delay
for any fast blk-mq device (background: Sagi told me he is observing
issues without this on an NVMe PCIe card backend).


* [PATCH] nvmet: Use direct IO for writes
  2016-09-22 13:57 ` Christoph Hellwig
@ 2016-09-22 14:09   ` Jens Axboe
  2016-09-23 21:00     ` Sagi Grimberg
  0 siblings, 1 reply; 8+ messages in thread
From: Jens Axboe @ 2016-09-22 14:09 UTC (permalink / raw)


On 09/22/2016 07:57 AM, Christoph Hellwig wrote:
> On Wed, Sep 21, 2016 at 11:10:50AM -0700, Sagi Grimberg wrote:
>> We're designed to work with high-end devices where
>> direct IO makes perfect sense. We noticed that without
>> REQ_SYNC set on writes, we pay a context switch by
>> scheduling kblockd instead of going directly to the device.
>
> This looks reasonable.  But I still wonder why we bother to inject a delay
> for any fast blk-mq device (background: Sagi told me he is observing
> issues without this on an NVMe PCIe card backend).

Batching/merging. It's not like we're doing a calculated explicit delay,
it's just async running the queue.

-- 
Jens Axboe


* [PATCH] nvmet: Use direct IO for writes
  2016-09-22 14:09   ` Jens Axboe
@ 2016-09-23 21:00     ` Sagi Grimberg
  2016-09-23 21:01       ` Jens Axboe
  0 siblings, 1 reply; 8+ messages in thread
From: Sagi Grimberg @ 2016-09-23 21:00 UTC (permalink / raw)



>> This looks reasonable.  But I still wonder why we bother to inject a delay
>> for any fast blk-mq device (background: Sagi told me he is observing
>> issues without this on an NVMe PCIe card backend).
>
> Batching/merging. It's not like we're doing a calculated explicit delay,
> it's just async running the queue.

So are we OK with bypassing that in nvmet?


* [PATCH] nvmet: Use direct IO for writes
  2016-09-23 21:00     ` Sagi Grimberg
@ 2016-09-23 21:01       ` Jens Axboe
  2016-09-23 21:12         ` Sagi Grimberg
  0 siblings, 1 reply; 8+ messages in thread
From: Jens Axboe @ 2016-09-23 21:01 UTC (permalink / raw)


On 09/23/2016 03:00 PM, Sagi Grimberg wrote:
>
>>> This looks reasonable.  But I still wonder why we bother to inject a delay
>>> for any fast blk-mq device (background: Sagi told me he is observing
>>> issues without this on an NVMe PCIe card backend).
>>
>> Batching/merging. It's not like we're doing a calculated explicit delay,
>> it's just async running the queue.
>
> So are we OK with bypassing that in nvmet?

I'm OK with the patch as posted.

-- 
Jens Axboe


* [PATCH] nvmet: Use direct IO for writes
  2016-09-23 21:01       ` Jens Axboe
@ 2016-09-23 21:12         ` Sagi Grimberg
  2016-09-23 21:13           ` Jens Axboe
  2016-09-23 23:36           ` Christoph Hellwig
  0 siblings, 2 replies; 8+ messages in thread
From: Sagi Grimberg @ 2016-09-23 21:12 UTC (permalink / raw)



>> So are we OK with bypassing that in nvmet?
>
> I'm OK with the patch as posted.

Thanks, I'll queue it up for 4.9


* [PATCH] nvmet: Use direct IO for writes
  2016-09-23 21:12         ` Sagi Grimberg
@ 2016-09-23 21:13           ` Jens Axboe
  2016-09-23 23:36           ` Christoph Hellwig
  1 sibling, 0 replies; 8+ messages in thread
From: Jens Axboe @ 2016-09-23 21:13 UTC (permalink / raw)


On 09/23/2016 03:12 PM, Sagi Grimberg wrote:
>
>>> So are we OK with bypassing that in nvmet?
>>
>> I'm OK with the patch as posted.
>
> Thanks, I'll queue it up for 4.9

You can add my Reviewed-by, fwiw.

-- 
Jens Axboe


* [PATCH] nvmet: Use direct IO for writes
  2016-09-23 21:12         ` Sagi Grimberg
  2016-09-23 21:13           ` Jens Axboe
@ 2016-09-23 23:36           ` Christoph Hellwig
  1 sibling, 0 replies; 8+ messages in thread
From: Christoph Hellwig @ 2016-09-23 23:36 UTC (permalink / raw)


On Fri, Sep 23, 2016 at 02:12:38PM -0700, Sagi Grimberg wrote:
>
>>> So are we OK with bypassing that in nvmet?
>>
>> I'm OK with the patch as posted.
>
> Thanks, I'll queue it up for 4.9

Btw, it might be a good idea to send our for-4.9 branch to Jens
so it gets some linux-next exposure.


