From: Sagi Grimberg <sagi@lightbits.io>
To: Christoph Hellwig <hch@lst.de>,
	"Nicholas A. Bellinger" <nab@linux-iscsi.org>
Cc: axboe@kernel.dk, linux-block@vger.kernel.org,
	linux-scsi <linux-scsi@vger.kernel.org>,
	linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
	keith.busch@intel.com,
	target-devel <target-devel@vger.kernel.org>
Subject: Re: NVMe over Fabrics target implementation
Date: Wed, 8 Jun 2016 16:12:27 +0300	[thread overview]
Message-ID: <575819BB.7010209@lightbits.io> (raw)
In-Reply-To: <20160608121932.GA31316@lst.de>


>> *) Extensible to multiple types of backend drivers.
>>
>> nvme-target needs a way to absorb new backend drivers that
>> does not affect the existing configfs group layout or attributes.
>>
>> Looking at the nvmet/configfs layout as-is, there are no multiple
>> backend types defined, nor a way to control backend feature bits
>> exposed to nvme namespaces at runtime.

Hey Nic,

As for the different types of backends, I still don't see a big
justification for adding the LIO backends: pscsi (it doesn't make sense
for NVMe), ramdisk (we already have brd), or file (losetup covers that).

What kind of feature bits would you want to expose at runtime?
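Since the thread argues over whether this configfs layout can absorb new
backends, here is a rough sketch of driving it by hand. This assumes the
paths in the posted patches match what later shipped in nvmet (nvmetcli
wraps the same steps), and it requires the nvmet core plus a transport
module such as nvme-loop to be loaded; the NQN and device are placeholders.

```shell
cd /sys/kernel/config/nvmet

# One subsystem with one namespace backed by an existing block device.
# Any block device works here, which is why brd and losetup already
# cover the ramdisk and file cases without dedicated backend drivers.
mkdir subsystems/testnqn
mkdir subsystems/testnqn/namespaces/1
echo /dev/nvme0n1 > subsystems/testnqn/namespaces/1/device_path
echo 1 > subsystems/testnqn/namespaces/1/enable
echo 1 > subsystems/testnqn/attr_allow_any_host

# Export the subsystem on a port; the symlink is what ties
# $SUBSYSTEM_NQN to $PORT_DRIVER.
mkdir ports/1
echo loop > ports/1/addr_trtype
ln -s /sys/kernel/config/nvmet/subsystems/testnqn \
      ports/1/subsystems/testnqn
```

Note there is no per-backend directory anywhere in this tree: the
namespace only names a block device, which is the design point under
debate.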

> And that's very much intentional.  We have a very well working block
> layer which we're going to use, no need to reinvent it.  The block
> layer supports NVMe pass through just fine in case we'll need it,
> as I spent the last year preparing it for that.
>
>> Why does it ever make sense for $SUBSYSTEM_NQN_0 with $PORT_DRIVER_FOO
>> to block operation of $SUBSYSTEM_NQN_1 with $PORT_DRIVER_BAR..?
>
> Because it keeps the code simple.  If you had actually participated
> on our development list you might have seen that until not too long
> ago we had very fine-grained locks here.  In the end Armen convinced
> me that it's easier to maintain if we don't bother with fine grained
> locking outside the fast path, especially as it significantly simplifies
> the discovery implementation.  If it ever turns out to be an
> issue we can change it easily as the implementation is well encapsulated.

We did change that, and Nic is raising a valid point about holding a
global mutex across all the ports. If the requirement is that nvme
subsystem and port configuration happens fast enough and scales to the
numbers Nic is referring to, we'll need to change that back.

Having said that, I'm not sure this is a real hard requirement for RDMA
and FC in the mid-term. From what I've seen, the workloads Nic is
referring to are more typical of iscsi/tcp, where connections are
cheaper and you need more of them to saturate a high-speed interconnect,
so we'll probably see this once we have nvme over tcp working.


Thread overview: 42+ messages
2016-06-06 21:22 NVMe over Fabrics target implementation Christoph Hellwig
2016-06-06 21:22 ` [PATCH 1/3] block: Export blk_poll Christoph Hellwig
2016-06-07  6:49   ` Nicholas A. Bellinger
2016-06-06 21:22 ` [PATCH 2/3] nvmet: add a generic NVMe target Christoph Hellwig
2016-06-06 21:22 ` [PATCH 3/3] nvme-loop: add a NVMe loopback host driver Christoph Hellwig
2016-06-06 22:00   ` kbuild test robot
2016-06-07  6:23 ` NVMe over Fabrics target implementation Nicholas A. Bellinger
2016-06-07 10:55   ` Christoph Hellwig
2016-06-08  5:21     ` Nicholas A. Bellinger
2016-06-08 12:19       ` Christoph Hellwig
2016-06-08 13:12         ` Sagi Grimberg [this message]
2016-06-08 13:46           ` Christoph Hellwig
2016-06-09  4:36           ` Nicholas A. Bellinger
2016-06-09 13:46             ` Christoph Hellwig
2016-06-09  3:32         ` Nicholas A. Bellinger
2016-06-07 21:02   ` Andy Grover
2016-06-07 21:10     ` Ming Lin
2016-06-07 17:01 ` Bart Van Assche
2016-06-07 17:31   ` Christoph Hellwig
2016-06-07 18:11     ` Bart Van Assche
