From: Bart Van Assche <Bart.VanAssche@sandisk.com>
To: "jthumshirn@suse.de" <jthumshirn@suse.de>,
	"jinpu.wang@profitbricks.com" <jinpu.wang@profitbricks.com>
Cc: "linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
	"mail@fholler.de" <mail@fholler.de>,
	"yun.wang@profitbricks.com" <yun.wang@profitbricks.com>,
	"hch@lst.de" <hch@lst.de>, "axboe@kernel.dk" <axboe@kernel.dk>,
	"Milind.dumbare@gmail.com" <Milind.dumbare@gmail.com>,
	"dledford@redhat.com" <dledford@redhat.com>
Subject: Re: [RFC PATCH 00/28] INFINIBAND NETWORK BLOCK DEVICE (IBNBD)
Date: Fri, 24 Mar 2017 13:31:27 +0000	[thread overview]
Message-ID: <1490362271.2516.4.camel@sandisk.com> (raw)
In-Reply-To: <CAMGffE=CitFGj11NhFKPL2MNiOVVyb-ggRe-MhewcobGY0-u5A@mail.gmail.com>

On Fri, 2017-03-24 at 13:46 +0100, Jinpu Wang wrote:
> Our IBNBD project was started 3 years ago based on our need for Cloud
> Computing, NVMeOF is a bit younger.
> - IBNBD is one of our components, part of our software defined storage solution.
> - As I listed in features, IBNBD has its own features
> 
> We're planning to look more into NVMeOF, but it's not a replacement for IBNBD.

Hello Jack, Danil and Roman,

Thanks for having taken the time to open source this work and to travel to
Boston to present this work at the Vault conference. However, my
understanding of IBNBD is that this driver has several shortcomings neither
NVMeOF nor iSER nor SRP have:
* Doesn't scale in terms of number of CPUs submitting I/O. The graphs shown
  during the Vault talk clearly illustrate this. This is probably the result
  of sharing a data structure across all client CPUs, maybe the bitmap that
  tracks which parts of the target buffer space are in use.
* Supports IB but none of the other RDMA transports (RoCE / iWARP).

We also need performance numbers that compare IBNBD against SRP and/or
NVMeOF with memory registration disabled to see whether and how much faster
IBNBD is compared to these two protocols.

The fact that IBNBD only needs two messages per I/O is an advantage it has
today over SRP but not over NVMeOF nor over iSER. The upstream initiator
drivers for the latter two protocols already support inline data.

Another question I have is whether integration with multipathd is supported?
If multipathd tries to run scsi_id against an IBNBD client device, that will
fail.

Thanks,

Bart.


Thread overview: 87+ messages
2017-03-24 10:45 [RFC PATCH 00/28] INFINIBAND NETWORK BLOCK DEVICE (IBNBD) Jack Wang
2017-03-24 10:45 ` [PATCH 01/28] ibtrs: add header shared between ibtrs_client and ibtrs_server Jack Wang
2017-03-24 12:35   ` Johannes Thumshirn
2017-03-24 12:54     ` Jinpu Wang
2017-03-24 14:31       ` Johannes Thumshirn
2017-03-24 14:35         ` Jinpu Wang
2017-03-24 10:45 ` [PATCH 02/28] ibtrs: add header for log MICROs " Jack Wang
2017-03-24 10:45 ` [PATCH 03/28] ibtrs_lib: add common functions shared by client and server Jack Wang
2017-03-24 10:45 ` [PATCH 04/28] ibtrs_clt: add header file for exported interface Jack Wang
2017-03-24 10:45 ` [PATCH 05/28] ibtrs_clt: main functionality of ibtrs_client Jack Wang
2017-03-24 10:45 ` [PATCH 06/28] ibtrs_clt: add header file shared only in ibtrs_client Jack Wang
2017-03-24 10:45 ` [PATCH 07/28] ibtrs_clt: add files for sysfs interface Jack Wang
2017-03-24 10:45 ` [PATCH 08/28] ibtrs_clt: add Makefile and Kconfig Jack Wang
2017-03-25  5:51   ` kbuild test robot
2017-03-25  6:55   ` kbuild test robot
2017-03-24 10:45 ` [PATCH 09/28] ibtrs_srv: add header file for exported interface Jack Wang
2017-03-24 10:45 ` [PATCH 10/28] ibtrs_srv: add main functionality for ibtrs_server Jack Wang
2017-03-24 10:45 ` [PATCH 11/28] ibtrs_srv: add header shared in ibtrs_server Jack Wang
2017-03-24 10:45 ` [PATCH 12/28] ibtrs_srv: add sysfs interface Jack Wang
2017-03-24 10:45 ` [PATCH 13/28] ibtrs_srv: add Makefile and Kconfig Jack Wang
2017-03-25  7:55   ` kbuild test robot
2017-03-25 10:54   ` kbuild test robot
2017-03-24 10:45 ` [PATCH 14/28] ibnbd: add headers shared by ibnbd_client and ibnbd_server Jack Wang
2017-03-24 10:45 ` [PATCH 15/28] ibnbd: add shared library functions Jack Wang
2017-03-24 10:45 ` [PATCH 16/28] ibnbd_clt: add main functionality of ibnbd_client Jack Wang
2017-03-24 10:45 ` [PATCH 17/28] ibnbd_clt: add header shared in ibnbd_client Jack Wang
2017-03-24 10:45 ` [PATCH 18/28] ibnbd_clt: add sysfs interface Jack Wang
2017-03-24 10:45 ` [PATCH 19/28] ibnbd_clt: add log helpers Jack Wang
2017-03-24 10:45 ` [PATCH 20/28] ibnbd_clt: add Makefile and Kconfig Jack Wang
2017-03-25  8:38   ` kbuild test robot
2017-03-25 11:17   ` kbuild test robot
2017-03-24 10:45 ` [PATCH 21/28] ibnbd_srv: add header shared in ibnbd_server Jack Wang
2017-03-24 10:45 ` [PATCH 22/28] ibnbd_srv: add main functionality Jack Wang
2017-03-24 10:45 ` [PATCH 23/28] ibnbd_srv: add abstraction for submit IO to file or block device Jack Wang
2017-03-24 10:45 ` [PATCH 24/28] ibnbd_srv: add log helpers Jack Wang
2017-03-24 10:45 ` [PATCH 25/28] ibnbd_srv: add sysfs interface Jack Wang
2017-03-24 10:45 ` [PATCH 26/28] ibnbd_srv: add Makefile and Kconfig Jack Wang
2017-03-25  9:27   ` kbuild test robot
2017-03-24 10:45 ` [PATCH 27/28] ibnbd: add doc for how to use ibnbd and sysfs interface Jack Wang
2017-03-25  7:44   ` kbuild test robot
2017-03-24 10:45 ` [PATCH 28/28] MAINTRAINERS: Add maintainer for IBNBD/IBTRS Jack Wang
2017-03-24 12:15 ` [RFC PATCH 00/28] INFINIBAND NETWORK BLOCK DEVICE (IBNBD) Johannes Thumshirn
2017-03-24 12:46   ` Jinpu Wang
2017-03-24 12:48     ` Johannes Thumshirn
2017-03-24 13:31     ` Bart Van Assche [this message]
2017-03-24 14:24       ` Jinpu Wang
2017-03-24 14:20 ` Steve Wise
2017-03-24 14:37   ` Jinpu Wang
2017-03-27  2:20 ` Sagi Grimberg
2017-03-27 10:21   ` Jinpu Wang
2017-03-28 14:17     ` Roman Penyaev
