From: "Matias Bjørling" <mb@lightnvm.io>
To: "Javier González" <javier@javigon.com>,
	"Matias Bjorling" <Matias.Bjorling@wdc.com>
Cc: Damien Le Moal <Damien.LeMoal@wdc.com>,
	Jens Axboe <axboe@kernel.dk>,
	Niklas Cassel <Niklas.Cassel@wdc.com>,
	Ajay Joshi <Ajay.Joshi@wdc.com>, Sagi Grimberg <sagi@grimberg.me>,
	Keith Busch <Keith.Busch@wdc.com>,
	Dmitry Fomichev <Dmitry.Fomichev@wdc.com>,
	Aravind Ramesh <Aravind.Ramesh@wdc.com>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	Hans Holmberg <Hans.Holmberg@wdc.com>,
	Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH 5/5] nvme: support for zoned namespaces
Date: Tue, 16 Jun 2020 18:25:11 +0200	[thread overview]
Message-ID: <1bda189d-bf3e-657d-0689-d36ba4933ff2@lightnvm.io> (raw)
In-Reply-To: <20200616162125.4zz3mjw2p37wfq5t@mpHalley.localdomain>

On 16/06/2020 18.21, Javier González wrote:
> On 16.06.2020 16:07, Matias Bjorling wrote:
>>
>>
>>> -----Original Message-----
>>> From: Javier González <javier@javigon.com>
>>> Sent: Tuesday, 16 June 2020 18.03
>>> To: Matias Bjørling <mb@lightnvm.io>
>>> Cc: Damien Le Moal <Damien.LeMoal@wdc.com>; Jens Axboe
>>> <axboe@kernel.dk>; Niklas Cassel <Niklas.Cassel@wdc.com>; Ajay Joshi
>>> <Ajay.Joshi@wdc.com>; Sagi Grimberg <sagi@grimberg.me>; Keith Busch
>>> <Keith.Busch@wdc.com>; Dmitry Fomichev <Dmitry.Fomichev@wdc.com>;
>>> Aravind Ramesh <Aravind.Ramesh@wdc.com>;
>>> linux-nvme@lists.infradead.org; linux-block@vger.kernel.org;
>>> Hans Holmberg <Hans.Holmberg@wdc.com>; Christoph Hellwig <hch@lst.de>;
>>> Matias Bjorling <Matias.Bjorling@wdc.com>
>>> Subject: Re: [PATCH 5/5] nvme: support for zoned namespaces
>>>
>>> On 16.06.2020 17:20, Matias Bjørling wrote:
>>> >On 16/06/2020 17.02, Javier González wrote:
>>> >>On 16.06.2020 14:42, Damien Le Moal wrote:
>>> >>>On 2020/06/16 23:16, Javier González wrote:
>>> >>>>On 16.06.2020 12:35, Damien Le Moal wrote:
>>> >>>>>On 2020/06/16 21:24, Javier González wrote:
>>> >>>>>>On 16.06.2020 14:06, Matias Bjørling wrote:
>>> >>>>>>>On 16/06/2020 14.00, Javier González wrote:
>>> >>>>>>>>On 16.06.2020 13:18, Matias Bjørling wrote:
>>> >>>>>>>>>On 16/06/2020 12.41, Javier González wrote:
>>> >>>>>>>>>>On 16.06.2020 08:34, Keith Busch wrote:
>>> >>>>>>>>>>>Add support for the NVM Express Zoned Namespaces (ZNS)
>>> >>>>>>>>>>>Command Set defined in NVM Express TP4053. Zoned namespaces
>>> >>>>>>>>>>>are discovered based on the Command Set Identifier reported
>>> >>>>>>>>>>>in the namespace's Namespace Identification Descriptor list.
>>> >>>>>>>>>>>A successfully discovered zoned namespace is registered with
>>> >>>>>>>>>>>the block layer as a host-managed zoned block device with
>>> >>>>>>>>>>>Zone Append command support. A namespace that does not
>>> >>>>>>>>>>>support Zone Append is not supported by the driver.
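
(For readers unfamiliar with the TP4053 identifiers, a minimal sketch of
the discovery decision described above: check the Command Set Identifier
(CSI) reported for the namespace and reject zoned namespaces without Zone
Append. The CSI values follow the spec; the function and its parameters
are illustrative, not the literal patch.)

  #include <linux/errno.h>
  #include <linux/types.h>

  /* CSI values per the NVMe spec: 0h = NVM, 2h = Zoned Namespace. */
  enum { NVME_CSI_NVM = 0x0, NVME_CSI_ZNS = 0x2 };

  /* Illustrative only: decide whether a namespace can be initialized. */
  static int ns_check_csi(unsigned int csi, bool zone_append_ok)
  {
          switch (csi) {
          case NVME_CSI_NVM:
                  return 0;          /* regular namespace */
          case NVME_CSI_ZNS:
                  if (!zone_append_ok)
                          return -EINVAL;  /* rejected by this patchset */
                  return 0;          /* host-managed zoned block device */
          default:
                  return -ENODEV;    /* unknown command set */
          }
  }
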
>>> >>>>>>>>>>
>>> >>>>>>>>>>Why are we enforcing the append command? Append is optional
>>> >>>>>>>>>>in the current ZNS specification, so we should not make it
>>> >>>>>>>>>>mandatory in the implementation. See specifics below.
>>> >>>>>>>>
>>> >>>>>>>>>
>>> >>>>>>>>>There is already general support in the kernel for the zone
>>> >>>>>>>>>append command. Feel free to submit patches to emulate the
>>> >>>>>>>>>support. It is outside the scope of this patchset.
>>> >>>>>>>>>
>>> >>>>>>>>
>>> >>>>>>>>It is fine that the kernel supports append, but the ZNS
>>> >>>>>>>>specification does not impose an implementation for append, so
>>> >>>>>>>>the driver should not do so either.
>>> >>>>>>>>
>>> >>>>>>>>ZNS SSDs that choose to leave append as a non-implemented
>>> >>>>>>>>optional command should not have to rely on emulated SW
>>> >>>>>>>>support, especially when traditional writes work perfectly
>>> >>>>>>>>well for a large part of current ZNS use cases.
>>> >>>>>>>>
>>> >>>>>>>>Please remove this artificial constraint.
>>> >>>>>>>
>>> >>>>>>>The Zone Append command is mandatory for zoned block devices.
>>> >>>>>>>Please see https://lwn.net/Articles/818709/ for the background.
>>> >>>>>>
>>> >>>>>>I do not see anywhere in the block layer that append is mandatory
>>> >>>>>>for zoned devices. Append is emulated on ZBC, but beyond that
>>> >>>>>>there are no mandatory bits. Please explain.
>>> >>>>>
>>> >>>>>This is to allow a single write IO path for all types of zoned
>>> >>>>>block devices for the higher layers, e.g. file systems. The
>>> >>>>>ongoing rework of btrfs zone support, for instance, now relies
>>> >>>>>100% on zone append being supported. That significantly
>>> >>>>>simplifies the file system support and, more importantly,
>>> >>>>>removes the need for locking around block allocation and BIO
>>> >>>>>issuing, preserving a fully asynchronous write path that can
>>> >>>>>include workqueues for efficient CPU usage of things like
>>> >>>>>encryption and compression. Without zone append, file systems
>>> >>>>>would either (1) have to reject drives that do not support zone
>>> >>>>>append, or (2) implement two different write IO paths (slower
>>> >>>>>regular writes and zone append). Neither option is ideal, to say
>>> >>>>>the least.
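
(To make the "single write IO path" concrete: with REQ_OP_ZONE_APPEND the
submitter targets the start of a zone rather than a write pointer, and on
completion the block layer reports the sector actually written back in
bio->bi_iter.bi_sector, so several appends to one zone can be in flight
at once. A hedged sketch against the bio API of this era -- the helper
itself is hypothetical and error handling is trimmed:)

  #include <linux/bio.h>
  #include <linux/blkdev.h>

  /* Hypothetical helper; not from the patchset. */
  static int issue_zone_append(struct block_device *bdev,
                               sector_t zone_start, struct page *page,
                               unsigned int len, bio_end_io_t *done)
  {
          struct bio *bio = bio_alloc(GFP_NOFS, 1);

          if (!bio)
                  return -ENOMEM;
          bio_set_dev(bio, bdev);
          bio->bi_iter.bi_sector = zone_start; /* zone, not write pointer */
          bio->bi_opf = REQ_OP_ZONE_APPEND;
          bio_add_page(bio, page, len, 0);
          /* On completion, ->bi_end_io can read bio->bi_iter.bi_sector
           * for the sector the device actually wrote to. */
          bio->bi_end_io = done;
          submit_bio(bio);
          return 0;
  }
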
>>> >>>>>
>>> >>>>>So the approach is: mandate zone append support for ZNS devices.
>>> >>>>>To allow other ZNS drives, an emulation similar to SCSI can be
>>> >>>>>implemented, ideally with that emulation working for both types
>>> >>>>>of drives if possible.
>>> >>>>
>>> >>>>Enforcing QD=1 becomes a problem on devices with large zones. On
>>> >>>>a ZNS device with smaller zones this should not be a problem.
>>> >>>
>>> >>>Let's be precise: this is not running the drive at QD=1, it is "at
>>> >>>most one write *request* per zone". If the FS is simultaneously
>>> >>>using multiple block groups mapped to different zones, you will get
>>> >>>a total write QD > 1, and as many reads as you want.
>>> >>>
>>> >>>>Would you agree that it is possible to have a write path that
>>> >>>>relies on QD=1, where the FS / application has the responsibility
>>> >>>>for enforcing this? Down the road this QD can be increased if the
>>> >>>>device is able to buffer the writes.
>>> >>>
>>> >>>Doing QD=1 per zone for writes at the FS layer, that is, at the
>>> >>>BIO layer, does not work. This is because BIOs can be as large as
>>> >>>the FS wants them to be. Such a large BIO will be split into
>>> >>>multiple requests in the block layer, resulting in more than one
>>> >>>write per zone. That is why the zone write locking sits at the
>>> >>>scheduler level, between BIO split and request dispatch. It
>>> >>>prevents the request fragments of a large BIO from being reordered
>>> >>>and failing. That is mandatory, as the block layer itself can
>>> >>>occasionally reorder requests, and lower levels such as AHCI
>>> >>>hardware are notoriously good at reversing sequential requests.
>>> >>>For NVMe with multi-queue, the IO issuing process getting
>>> >>>rescheduled on a different CPU can result in sequential IOs
>>> >>>landing in different queues, with the likely result of
>>> >>>out-of-order execution. All these cases are avoided with zone
>>> >>>write locking and at most one write request dispatched per zone,
>>> >>>as recommended by the ZNS specification (the ZBC and ZAC standards
>>> >>>for SMR HDDs are silent on this).
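
(A condensed sketch of that scheduler-level zone write locking, loosely
following the mq-deadline dispatch path. The blk_req_* helpers are the
existing ones from include/linux/blkdev.h of this era; the surrounding
function is simplified for illustration. The matching
blk_req_zone_write_unlock() runs from the scheduler's completion hook,
which is what bounds each zone to one in-flight write request:)

  #include <linux/blkdev.h>

  /* Simplified dispatch: reads always qualify; a write qualifies only
   * if its target zone is not already write-locked. */
  static struct request *pick_dispatchable(struct list_head *fifo)
  {
          struct request *rq;

          list_for_each_entry(rq, fifo, queuelist) {
                  if (!blk_req_can_dispatch_to_zone(rq))
                          continue;  /* zone busy with an earlier write */
                  /* No-op for reads; for writes, locks the zone so no
                   * second write to it dispatches until completion. */
                  blk_req_zone_write_lock(rq);
                  return rq;
          }
          return NULL;  /* all pending writes target locked zones */
  }
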
>>> >>>
>>> >>
>>> >>I understand. I agree that the current FSs supporting ZNS follow
>>> >>this approach, and it makes sense to have a common interface that
>>> >>simplifies the FS implementation. See the comment below on the part
>>> >>where I believe we see things differently.
>>> >>
>>> >>
>>> >>>>I would be OK with some FS implementations relying on append and
>>> >>>>imposing the constraint that append has to be supported (and it
>>> >>>>would be our job to change that), but I would like to avoid the
>>> >>>>driver refusing to initialize the device because current FS
>>> >>>>implementations have implemented this logic.
>>> >>>
>>> >>>What is the difference between the driver rejecting drives and the
>>> >>>FS rejecting the same drives? That has the same end result to me:
>>> >>>an entire class of devices cannot be used as desired by the user.
>>> >>>Implementing zone append emulation avoids the rejection entirely
>>> >>>while still allowing the FS to have a single write IO path, thus
>>> >>>simplifying the code.
>>> >>
>>> >>The difference is that users submitting I/O to a raw ZNS device
>>> >>through the kernel would still be able to use these devices. The
>>> >>result would be that the ZNS SSD is recognized and initialized, but
>>> >>the FS format fails.
>>> >>
>>> >>>
>>> >>>>We can agree that a number of initial customers will use these
>>> >>>>devices raw, through the in-kernel I/O path, but without an FS on
>>> >>>>top.
>>> >>>>
>>> >>>>Thoughts?
>>> >>>>
>>> >>>>>and note that
>>> >>>>>this emulation would require the drive to be operated with
>>> >>>>>mq-deadline to enable zone write locking for preserving write
>>> >>>>>command order. While on an HDD the performance penalty is
>>> >>>>>minimal, it will likely be significant on an SSD.
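
(For reference, that zone write locking is only active when mq-deadline
is the elevator, which is normally selected through the queue's sysfs
scheduler attribute. A small hedged illustration -- the device name is a
placeholder:)

  #include <stdio.h>

  int main(void)
  {
          /* Usual sysfs location; substitute the actual device name. */
          FILE *f = fopen("/sys/block/nvme0n1/queue/scheduler", "w");

          if (!f)
                  return 1;
          fputs("mq-deadline", f);  /* enables zone write locking */
          return fclose(f) ? 1 : 0;
  }
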
>>> >>>>
>>> >>>>Exactly my concern. I do not want ZNS SSDs to be impacted by this
>>> >>>>type of design decision at the driver level.
>>> >>>
>>> >>>But your proposed FS-level approach would end up doing the exact
>>> >>>same thing, with the same limitation and so the same potential
>>> >>>performance impact. The block layer generic approach has the
>>> >>>advantage that we do not bother the higher levels with the
>>> >>>implementation of in-order request dispatch guarantees. File
>>> >>>systems are complex enough. The less complexity is required for
>>> >>>zone support, the better.
>>> >>
>>> >>This depends very much on how the FS / application is managing
>>> >>striping. At the moment our main use case is enabling user-space
>>> >>applications submitting I/Os to raw ZNS devices through the kernel.
>>> >>
>>> >>Can we enable this use case to start with?
>>> >
>>> >Anyone is free to load modules into the kernel. Those modules may
>>> >not have the appropriate checks, or may rely on the zone append
>>> >functionality. Having per-use-case limits is a no-go and at best a
>>> >game of whack-a-mole.
>>>
>>> Let's focus on mainline support. We are leaving append disabled,
>>> based on customer requests for some ZNS products, and would like
>>> these devices to be supported. This is not at all a corner use case
>>> but a very general one.
>>>
>>> >
>>> >You already agreed to create a set of patches to add the appropriate
>>> >support for emulating zone append. As these would fix your specific
>>> >issue, please go ahead and submit those.
>>>
>>> I agreed to solve the use case that some of our customers are
>>> enabling, and that is what I am doing.
>>>
>>> Again, to start with I would like to have a path where ZNS namespaces
>>> are identified independently of append support. Then specific users
>>> can require append if they so choose. We will of course take care of
>>> sending patches for this.
>>
>> As was previously said, there are users in the kernel that depend on
>> zone append. As a result, not having it is not an option. Please go
>> ahead and send the patches and you'll have the behavior you are
>> seeking.
>>
>
> I never put in doubt that we are the ones implementing support for
> this, but since you keep asking, I want to make it clear that not
> using the append command is a very common use case for ZNS adopters.
>
I am not asking. I am confirming that this is orthogonal to _this_
specific patchset. The discussion can continue in the patches that you
plan to send.




