From: Kanchan Joshi <joshi.k@samsung.com>
To: axboe@kernel.dk, hch@lst.de, sagi@grimberg.me, kbusch@kernel.org
Cc: io-uring@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-block@vger.kernel.org, gost.dev@samsung.com,
	anuj1072538@gmail.com, xiaoguang.wang@linux.alibaba.com,
	Kanchan Joshi <joshi.k@samsung.com>
Subject: [RFC PATCH 00/12] io_uring attached nvme queue
Date: Sat, 29 Apr 2023 15:09:13 +0530
Message-ID: <20230429093925.133327-1-joshi.k@samsung.com>
In-Reply-To: CGME20230429094228epcas5p4a80d8ed77433989fa804ecf449f83b0b@epcas5p4.samsung.com

This series shows one way to do what the title says.
It puts up a more direct/lean path that enables
 - submission from io_uring SQE to NVMe SQE
 - completion from NVMe CQE to io_uring CQE
essentially cutting out the hoops (request/bio construction) in the
nvme io path.

Also, the io_uring ring is not meant to be shared among application
threads; the application is responsible for building any sharing it
needs. This means the queue exclusively associated with a ring can do
away with some of the synchronization costs that a shared queue incurs.

The primary objective is to further amp up the efficiency of the kernel
io path (towards PCIe gen N, N+1 hardware), and we are seeing some asks
for it too [1].

Building-blocks
===============
At a high level, the series can be divided into the following parts -

1. The nvme driver starts exposing some queue-pairs (SQ+CQ) that can
be attached on demand to other in-kernel users (rather than only to
the block-layer, as is the case at the moment).

Example:
insmod nvme.ko poll_queues=1 raw_queues=2

nvme0: 24/0/1/2 default/read/poll queues/raw queues

While the driver registers the other queues with the block-layer,
raw-queues are instead reserved for exclusive attachment by other
in-kernel users. At this point, each raw-queue is interrupt-disabled
(similar to poll_queues). Maybe we need a better name for these
(e.g. app/user queues).
[Refer: patch 2]
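
For illustration, the parameter could be declared along these lines in
drivers/nvme/host/pci.c (a minimal sketch modeled on the existing
poll_queues parameter; the actual patch may use module_param_cb and
different permissions):

    static unsigned int raw_queues;
    module_param(raw_queues, uint, 0444);
    MODULE_PARM_DESC(raw_queues,
            "Number of interrupt-disabled queues reserved for direct attachment by in-kernel users");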

2. register/unregister queue interface
(a) one for an io_uring application to ask for a device-queue and
register it with the ring. [Refer: patch 4]
(b) another at nvme, so that other in-kernel users (io_uring for now)
can ask for a raw-queue. [Refer: patch 3, 5, 6]

The latter returns a qid, which io_uring stores internally (not exposed
to user-space) in the ring ctx. At most one queue per ring is enabled.
The ring has no other special property beyond storing a qid that it can
use exclusively, so the application can very well use the ring for
things other than nvme io.
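
A rough user-space sketch of the registration step, assuming it is
exposed through io_uring_register(2); the opcode name and argument
layout below are illustrative, not the actual ABI from the series:

    #include <fcntl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* IORING_REGISTER_QUEUE is a made-up opcode name for illustration */
    static int register_raw_queue(int ring_fd, const char *chardev)
    {
            int ngfd = open(chardev, O_RDWR);   /* e.g. /dev/ng0n1 */

            if (ngfd < 0)
                    return -1;
            /* kernel resolves the fd to nvme, grabs a raw-queue, and
             * stores the qid in the ring ctx (invisible to user-space) */
            return syscall(__NR_io_uring_register, ring_fd,
                           IORING_REGISTER_QUEUE, &ngfd, 1);
    }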

3. user-interface to send commands down this way
(a) uring-cmd is extended with a new flag "IORING_URING_CMD_DIRECT"
that the application passes in the SQE. That is all.
(b) the flag goes down to the provider of ->uring_cmd, which may choose
to do things differently based on it (or ignore it).
[Refer: patch 7]
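
With liburing, the submission side could then look roughly like this
(the flag is from this series; the rest follows the existing
IORING_OP_URING_CMD nvme passthrough conventions):

    /* ring created with IORING_SETUP_SQE128 so the nvme command fits
     * in the big SQE, as with regular uring passthrough */
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    struct nvme_uring_cmd *cmd = (struct nvme_uring_cmd *)sqe->cmd;

    sqe->opcode = IORING_OP_URING_CMD;
    sqe->fd = ngfd;                                 /* /dev/ng0n1 */
    sqe->cmd_op = NVME_URING_CMD_IO;
    sqe->uring_cmd_flags = IORING_URING_CMD_DIRECT; /* new in this series */
    /* fill cmd->opcode, cmd->nsid, cmd->addr, cmd->data_len etc. as
     * for regular passthrough */
    io_uring_submit(&ring);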

4. nvme uring-cmd understands the above flag. It submits the command
into the known pre-registered queue and completes it (polled
completion) from there. The transformation from "struct io_uring_cmd"
to nvme command is done directly, without building intermediate
constructs.
[Refer: patch 8, 10, 12]
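
Conceptually, the nvme ->uring_cmd handler then splits like this
(helper names on the direct path are illustrative placeholders;
patches 8, 10 and 12 carry the real code):

    if (ioucmd->flags & IORING_URING_CMD_DIRECT) {
            struct nvme_command c;

            /* build the NVMe SQE straight from ioucmd->cmd; no
             * request/bio is allocated on this path */
            nvme_uring_cmd_to_nvme_cmd(ioucmd, &c);  /* hypothetical */
            return nvme_rawq_submit(ns, qid, &c);    /* hypothetical */
    }
    /* otherwise fall through to the usual blk-mq passthrough path */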

Testing and Performance
=======================
fio and t/io_uring are modified to exercise this path:
- fio: new "registerqueues" option
- t/io_uring: new "k" option
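
For example, a fio invocation down this path might look like the
following (option syntax here is a guess; see the fio branch below for
the exact form):

    fio --name=rawq --ioengine=io_uring_cmd --cmd_type=nvme \
        --filename=/dev/ng0n1 --rw=randread --bs=512 --iodepth=64 \
        --hipri --registerqueues=1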

Good part:
2.96M -> 5.02M IOPS

nvme io (without this):
# t/io_uring -b512 -d64 -c2 -s2 -p1 -F1 -B1 -O0 -n1 -u1 -r4 -k0 /dev/ng0n1
submitter=0, tid=2922, file=/dev/ng0n1, node=-1
polled=1, fixedbufs=1/0, register_files=1, buffered=1, register_queues=0 QD=64
Engine=io_uring, sq_ring=64, cq_ring=64
IOPS=2.89M, BW=1412MiB/s, IOS/call=2/1
IOPS=2.92M, BW=1426MiB/s, IOS/call=2/2
IOPS=2.96M, BW=1444MiB/s, IOS/call=2/1
Exiting on timeout
Maximum IOPS=2.96M

nvme io (with this):
# t/io_uring -b512 -d64 -c2 -s2 -p1 -F1 -B1 -O0 -n1 -u1 -r4 -k1 /dev/ng0n1
submitter=0, tid=2927, file=/dev/ng0n1, node=-1
polled=1, fixedbufs=1/0, register_files=1, buffered=1, register_queues=1 QD=64
Engine=io_uring, sq_ring=64, cq_ring=64
IOPS=4.99M, BW=2.43GiB/s, IOS/call=2/1
IOPS=5.02M, BW=2.45GiB/s, IOS/call=2/1
IOPS=5.02M, BW=2.45GiB/s, IOS/call=2/1
Exiting on timeout
Maximum IOPS=5.02M

Not so good part:
While a single IO is fast this way, we do not have batching abilities
for the multi-io scenario. Plugging, submission batching and completion
batching are all tied to block-layer constructs. Things should look
better if we could do something about that; in particular, something is
off with the completion-batching.

With -s32 and -c32, the numbers decline:

# t/io_uring -b512 -d64 -c32 -s32 -p1 -F1 -B1 -O0 -n1 -u1 -r4 -k1 /dev/ng0n1
submitter=0, tid=3674, file=/dev/ng0n1, node=-1
polled=1, fixedbufs=1/0, register_files=1, buffered=1, register_queues=1 QD=64
Engine=io_uring, sq_ring=64, cq_ring=64
IOPS=3.70M, BW=1806MiB/s, IOS/call=32/31
IOPS=3.71M, BW=1812MiB/s, IOS/call=32/31
IOPS=3.71M, BW=1812MiB/s, IOS/call=32/32
Exiting on timeout
Maximum IOPS=3.71M

And perf gets restored if we go back to -c2:

# t/io_uring -b512 -d64 -c2 -s32 -p1 -F1 -B1 -O0 -n1 -u1 -r4 -k1 /dev/ng0n1
submitter=0, tid=3677, file=/dev/ng0n1, node=-1
polled=1, fixedbufs=1/0, register_files=1, buffered=1, register_queues=1 QD=64
Engine=io_uring, sq_ring=64, cq_ring=64
IOPS=4.99M, BW=2.44GiB/s, IOS/call=5/5
IOPS=5.02M, BW=2.45GiB/s, IOS/call=5/5
IOPS=5.02M, BW=2.45GiB/s, IOS/call=5/5
Exiting on timeout
Maximum IOPS=5.02M

Source
======
Kernel: https://github.com/OpenMPDK/linux/tree/feat/directq-v1
fio: https://github.com/OpenMPDK/fio/commits/feat/rawq-v2

Please take a look.

[1]
https://lore.kernel.org/io-uring/24179a47-ab37-fa32-d177-1086668fbd3d@linux.alibaba.com/

Anuj Gupta (5):
  fs, block: interface to register/unregister the raw-queue
  io_uring, fs: plumb support to register/unregister raw-queue
  nvme: wire-up register/unregister queue f_op callback
  block: add mq_ops to submit and complete commands from raw-queue
  pci: modify nvme_setup_prp_simple parameters

Kanchan Joshi (7):
  nvme: refactor nvme_alloc_io_tag_set
  pci: enable "raw_queues = N" module parameter
  pci: implement register/unregister functionality
  io_uring: support for using registered queue in uring-cmd
  nvme: carve out a helper to prepare nvme_command from ioucmd->cmd
  nvme: submission/completion of uring_cmd to/from the registered queue
  pci: implement submission/completion for rawq commands

 drivers/nvme/host/core.c       |  31 ++-
 drivers/nvme/host/fc.c         |   3 +-
 drivers/nvme/host/ioctl.c      | 234 +++++++++++++++----
 drivers/nvme/host/multipath.c  |   2 +
 drivers/nvme/host/nvme.h       |  19 +-
 drivers/nvme/host/pci.c        | 409 +++++++++++++++++++++++++++++++--
 drivers/nvme/host/rdma.c       |   2 +-
 drivers/nvme/host/tcp.c        |   3 +-
 drivers/nvme/target/loop.c     |   3 +-
 fs/file.c                      |  14 ++
 include/linux/blk-mq.h         |   5 +
 include/linux/fs.h             |   4 +
 include/linux/io_uring.h       |   6 +
 include/linux/io_uring_types.h |   3 +
 include/uapi/linux/io_uring.h  |   6 +
 io_uring/io_uring.c            |  60 +++++
 io_uring/uring_cmd.c           |  14 +-
 17 files changed, 739 insertions(+), 79 deletions(-)

-- 
2.25.1

