[PATCH v2 0/3] nvmet-rdma: SRQ per completion vector
From: Max Gurtovoy @ 2017-11-16 17:21 UTC (permalink / raw)


Since there is an active discussion regarding the CQ pool architecture, I decided to push
this feature separately (maybe it can go in before the CQ pool).

This is a new feature for the NVMEoF RDMA target, intended to save resources (by sharing
them) and to exploit the locality of completions in order to get the best performance with
Shared Receive Queues (SRQs). We create an SRQ per completion vector (and not per device)
using a new API (an SRQ pool, added in this patchset as well) and associate each created
QP/CQ with an appropriate SRQ. This also reduces the lock contention on the single
per-device SRQ (today's solution).
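
To make the idea concrete, here is a rough sketch of an SRQ pool built on top of the
existing ib_create_srq()/ib_destroy_srq() verbs. The struct and function names below are
illustrative assumptions only, not necessarily what the patches introduce:

/*
 * Illustrative sketch only: pre-create one SRQ per completion vector on
 * a PD, so each QP/CQ pair created on vector i can be associated with
 * the i-th SRQ instead of a single per-device SRQ.
 */
#include <linux/slab.h>
#include <linux/list.h>
#include <linux/err.h>
#include <rdma/ib_verbs.h>

struct srq_pool_entry {
	struct ib_srq		*srq;
	struct list_head	entry;
};

static void srq_pool_destroy(struct list_head *pool)
{
	struct srq_pool_entry *e, *tmp;

	list_for_each_entry_safe(e, tmp, pool, entry) {
		list_del(&e->entry);
		ib_destroy_srq(e->srq);
		kfree(e);
	}
}

/* Create one SRQ per completion vector of the device behind @pd. */
static int srq_pool_init(struct ib_pd *pd, struct ib_srq_init_attr *attr,
			 struct list_head *pool)
{
	struct srq_pool_entry *e;
	int i, ret;

	for (i = 0; i < pd->device->num_comp_vectors; i++) {
		e = kzalloc(sizeof(*e), GFP_KERNEL);
		if (!e) {
			ret = -ENOMEM;
			goto out_err;
		}
		e->srq = ib_create_srq(pd, attr);
		if (IS_ERR(e->srq)) {
			ret = PTR_ERR(e->srq);
			kfree(e);
			goto out_err;
		}
		list_add_tail(&e->entry, pool);
	}
	return 0;

out_err:
	srq_pool_destroy(pool);
	return ret;
}

A QP created for a queue on completion vector i would then set ib_qp_init_attr.srq to the
i-th SRQ from the pool, so all receives for that queue complete on the CQ bound to the
same vector.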

My testing environment included 4 initiators (CX5, CX5, CX4, CX3) connected through a
switch to the NVMEoF target (CX5) over 2 ports. Each initiator connected to a unique
subsystem (4 subsystems in total, 1 ns per subsystem, each backed by a different null_blk
device). I used the RoCE link layer.

Configuration:
 - irqbalance stopped on each server
 - set_irq_affinity.sh run on each interface
 - 2 initiators ran traffic through port 1
 - 2 initiators ran traffic through port 2
 - register_always=N set on the initiators
 - fio with 12 jobs, iodepth 128

Memory consumption calculation for recv buffers (target):
 - Multiple SRQ: SRQ_size * comp_num * ib_devs_num * inline_buffer_size
 - Single SRQ: SRQ_size * 1 * ib_devs_num * inline_buffer_size
 - MQ: RQ_size * CPU_num * ctrl_num * inline_buffer_size

Cases (the arithmetic is spelled out in the short sketch after this list):
 1. Multiple SRQ with 1024 entries:
    - Mem = 1024 * 24 * 2 * 4k = 192MiB (constant; does not depend on the number of initiators)
 2. Multiple SRQ with 256 entries:
    - Mem = 256 * 24 * 2 * 4k = 48MiB (constant; does not depend on the number of initiators)
 3. MQ:
    - Mem = 256 * 24 * 8 * 4k = 192MiB (memory grows with every newly created controller)
 4. Single SRQ (current SRQ implementation):
    - Mem = 4096 * 1 * 2 * 4k = 32MiB (constant; does not depend on the number of initiators)
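
Just to spell out the arithmetic, here is a throwaway user-space snippet that plugs this
setup's numbers (24 completion vectors/CPUs, 2 IB devices, 4 KiB inline buffers, 8
controllers in the MQ case) into the formulas above:

#include <stdio.h>

int main(void)
{
	const long inline_buf = 4 * 1024;	/* 4 KiB inline data buffer */
	const long comp_num = 24, ib_devs = 2, ctrls = 8, cpus = 24;

	printf("1. multiple SRQ, 1024 entries: %ld MiB\n",
	       1024 * comp_num * ib_devs * inline_buf >> 20);
	printf("2. multiple SRQ,  256 entries: %ld MiB\n",
	       256 * comp_num * ib_devs * inline_buf >> 20);
	printf("3. MQ, 256-entry RQs:          %ld MiB\n",
	       256 * cpus * ctrls * inline_buf >> 20);
	printf("4. single SRQ, 4096 entries:   %ld MiB\n",
	       4096 * 1 * ib_devs * inline_buf >> 20);
	return 0;
}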

Results (columns correspond to cases 1-4 above; target CPU utilization in parentheses):

BS     1.read (target CPU)    2.read (target CPU)    3.read (target CPU)    4.read (target CPU)
---    -------------------    -------------------    -------------------    -------------------
1k     5.88M (80%)            5.45M (72%)            6.77M (91%)            2.2M  (72%)
2k     3.56M (65%)            3.45M (59%)            3.72M (64%)            2.12M (59%)
4k     1.8M  (33%)            1.87M (32%)            1.88M (32%)            1.59M (34%)

BS     1.write (target CPU)   2.write (target CPU)   3.write (target CPU)   4.write (target CPU)
---    --------------------   --------------------   --------------------   --------------------
1k     5.42M (63%)            5.14M (55%)            7.75M (82%)            2.14M (74%)
2k     4.15M (56%)            4.14M (51%)            4.16M (52%)            2.08M (73%)
4k     2.17M (28%)            2.17M (27%)            2.16M (28%)            1.62M (24%)


We can see the performance improvement between Case 2 and Case 4 (the same order of resources).
Comparing Case 2 and Case 3, we see the benefit in resource consumption (memory and CPU)
with only a small performance loss.
There is still an open question regarding the 1k performance difference between Case 1 and
Case 3, but I guess we can investigate and improve it incrementally.

Thanks to Idan Burstein and Oren Duer for suggesting this nice feature.

Changes from V1:
 - Added an SRQ pool per protection domain to IB/core
 - Addressed a few comments from Christoph and Sagi

Max Gurtovoy (3):
  IB/core: add a simple SRQ pool per PD
  nvmet-rdma: use srq pointer in rdma_cmd
  nvmet-rdma: use SRQ per completion vector

 drivers/infiniband/core/Makefile   |   2 +-
 drivers/infiniband/core/srq_pool.c | 106 +++++++++++++++++++++
 drivers/infiniband/core/verbs.c    |   4 +
 drivers/nvme/target/rdma.c         | 190 +++++++++++++++++++++++++++----------
 include/rdma/ib_verbs.h            |   5 +
 include/rdma/srq_pool.h            |  46 +++++++++
 6 files changed, 301 insertions(+), 52 deletions(-)
 create mode 100644 drivers/infiniband/core/srq_pool.c
 create mode 100644 include/rdma/srq_pool.h

-- 
1.8.3.1

