* [PATCH 0/2] NVMe 1.4 Identify Namespace Support
@ 2019-06-06 21:28 Bart Van Assche
  2019-06-06 21:28 ` [PATCH 1/2] nvme: Introduce NVMe 1.4 Identify Namespace fields in struct nvme_id_ns Bart Van Assche
  ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: Bart Van Assche @ 2019-06-06 21:28 UTC (permalink / raw)

Hi Keith,

These two patches are what I came up with after having read the Identify
Namespace sections in the final draft of version 1.4 of the NVMe spec.
These patches compile correctly but have not been tested in any other way.
Please consider these patches for kernel version 5.3.

Thanks,

Bart.

Bart Van Assche (2):
  nvme: Introduce NVMe 1.4 Identify Namespace fields in struct nvme_id_ns
  nvme: Set physical block size and optimal I/O size according to NVMe 1.4

 drivers/nvme/host/core.c | 14 ++++++++++++--
 include/linux/nvme.h     | 12 +++++++++---
 2 files changed, 21 insertions(+), 5 deletions(-)

-- 
2.22.0.rc3

^ permalink raw reply	[flat|nested] 15+ messages in thread
* [PATCH 1/2] nvme: Introduce NVMe 1.4 Identify Namespace fields in struct nvme_id_ns
  2019-06-06 21:28 [PATCH 0/2] NVMe 1.4 Identify Namespace Support Bart Van Assche
@ 2019-06-06 21:28 ` Bart Van Assche
  2019-06-06 21:40   ` Chaitanya Kulkarni
  2019-06-07 15:18   ` Martin K. Petersen
  2019-06-06 21:28 ` [PATCH 2/2] nvme: Set physical block size and optimal I/O size according to NVMe 1.4 Bart Van Assche
  2019-06-07 13:56 ` [PATCH 0/2] NVMe 1.4 Identify Namespace Support Keith Busch
  2 siblings, 2 replies; 15+ messages in thread
From: Bart Van Assche @ 2019-06-06 21:28 UTC (permalink / raw)

Several new fields have been introduced in version 1.4 of the NVMe spec
at offsets that were defined as reserved in version 1.3d of the NVMe
spec. Update the definition of the nvme_id_ns data structure such that
it is in sync with version 1.4 of the NVMe spec.

Cc: Christoph Hellwig <hch at lst.de>
Cc: Sagi Grimberg <sagi at grimberg.me>
Cc: Hannes Reinecke <hare at suse.com>
Signed-off-by: Bart Van Assche <bvanassche at acm.org>
---
 include/linux/nvme.h | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index 8028adacaff3..2b5072ec4511 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -315,7 +315,7 @@ struct nvme_id_ns {
 	__u8			nmic;
 	__u8			rescap;
 	__u8			fpi;
-	__u8			rsvd33;
+	__u8			dlfeat;
 	__le16			nawun;
 	__le16			nawupf;
 	__le16			nacwu;
@@ -324,11 +324,17 @@ struct nvme_id_ns {
 	__le16			nabspf;
 	__le16			noiob;
 	__u8			nvmcap[16];
-	__u8			rsvd64[28];
+	__le16			npwg;
+	__le16			npwa;
+	__le16			npdg;
+	__le16			npda;
+	__le16			nows;
+	__u8			rsvd74[18];
 	__le32			anagrpid;
 	__u8			rsvd96[3];
 	__u8			nsattr;
-	__u8			rsvd100[4];
+	__le16			nvmsetid;
+	__le16			endgid;
 	__u8			nguid[16];
 	__u8			eui64[8];
 	struct nvme_lbaf	lbaf[16];
-- 
2.22.0.rc3

^ permalink raw reply related	[flat|nested] 15+ messages in thread
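The new fields in the hunk above sit at fixed byte offsets defined by the NVMe 1.4 Identify Namespace figure (NPWG at byte 64, NOWS at byte 72, NVMSETID at byte 100, and so on). A small standalone sketch can cross-check that the sequence of field widths lands each field at the expected offset. The struct below is an abbreviated, packed stand-in using plain fixed-width types, not the kernel's definition; the elided regions are collapsed into pad arrays:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Abbreviated, packed stand-in for the byte layout of the Identify
 * Namespace data structure through byte 119, per the NVMe 1.4 spec.
 * Plain uintN_t types replace the kernel's __u8/__le16/__le32; the
 * ranges not touched by the patch are collapsed into pad arrays. */
struct id_ns_14_sketch {
	uint8_t  pad0[33];    /* bytes 0-32: nsze .. fpi (elided) */
	uint8_t  dlfeat;      /* byte 33, reserved in NVMe 1.3d */
	uint8_t  pad1[14];    /* bytes 34-47: nawun .. noiob (elided) */
	uint8_t  nvmcap[16];  /* bytes 48-63 */
	uint16_t npwg;        /* bytes 64-65 */
	uint16_t npwa;        /* bytes 66-67 */
	uint16_t npdg;        /* bytes 68-69 */
	uint16_t npda;        /* bytes 70-71 */
	uint16_t nows;        /* bytes 72-73 */
	uint8_t  rsvd74[18];  /* bytes 74-91 */
	uint32_t anagrpid;    /* bytes 92-95 */
	uint8_t  rsvd96[3];   /* bytes 96-98 */
	uint8_t  nsattr;      /* byte 99 */
	uint16_t nvmsetid;    /* bytes 100-101, reserved in 1.3d */
	uint16_t endgid;      /* bytes 102-103, reserved in 1.3d */
	uint8_t  nguid[16];   /* bytes 104-119 */
} __attribute__((packed));
```

The packed attribute is what makes the offsetof arithmetic exact here; the real kernel struct relies on its field types being naturally aligned at these offsets instead.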
* [PATCH 1/2] nvme: Introduce NVMe 1.4 Identify Namespace fields in struct nvme_id_ns
  2019-06-06 21:28 ` [PATCH 1/2] nvme: Introduce NVMe 1.4 Identify Namespace fields in struct nvme_id_ns Bart Van Assche
@ 2019-06-06 21:40   ` Chaitanya Kulkarni
  2019-06-07 15:18   ` Martin K. Petersen
  1 sibling, 0 replies; 15+ messages in thread
From: Chaitanya Kulkarni @ 2019-06-06 21:40 UTC (permalink / raw)

Looks good.

Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni at wdc.com>

On 06/06/2019 02:29 PM, Bart Van Assche wrote:
> Several new fields have been introduced in version 1.4 of the NVMe spec
> at offsets that were defined as reserved in version 1.3d of the NVMe
> spec. Update the definition of the nvme_id_ns data structure such that
> it is in sync with version 1.4 of the NVMe spec.
>
> Cc: Christoph Hellwig <hch at lst.de>
> Cc: Sagi Grimberg <sagi at grimberg.me>
> Cc: Hannes Reinecke <hare at suse.com>
> Signed-off-by: Bart Van Assche <bvanassche at acm.org>
> ---
>  include/linux/nvme.h | 12 +++++++++---
>  1 file changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/nvme.h b/include/linux/nvme.h
> index 8028adacaff3..2b5072ec4511 100644
> --- a/include/linux/nvme.h
> +++ b/include/linux/nvme.h
> @@ -315,7 +315,7 @@ struct nvme_id_ns {
>  	__u8			nmic;
>  	__u8			rescap;
>  	__u8			fpi;
> -	__u8			rsvd33;
> +	__u8			dlfeat;
>  	__le16			nawun;
>  	__le16			nawupf;
>  	__le16			nacwu;
> @@ -324,11 +324,17 @@ struct nvme_id_ns {
>  	__le16			nabspf;
>  	__le16			noiob;
>  	__u8			nvmcap[16];
> -	__u8			rsvd64[28];
> +	__le16			npwg;
> +	__le16			npwa;
> +	__le16			npdg;
> +	__le16			npda;
> +	__le16			nows;
> +	__u8			rsvd74[18];
>  	__le32			anagrpid;
>  	__u8			rsvd96[3];
>  	__u8			nsattr;
> -	__u8			rsvd100[4];
> +	__le16			nvmsetid;
> +	__le16			endgid;
>  	__u8			nguid[16];
>  	__u8			eui64[8];
>  	struct nvme_lbaf	lbaf[16];

^ permalink raw reply	[flat|nested] 15+ messages in thread
* [PATCH 1/2] nvme: Introduce NVMe 1.4 Identify Namespace fields in struct nvme_id_ns
  2019-06-06 21:28 ` [PATCH 1/2] nvme: Introduce NVMe 1.4 Identify Namespace fields in struct nvme_id_ns Bart Van Assche
  2019-06-06 21:40   ` Chaitanya Kulkarni
@ 2019-06-07 15:18   ` Martin K. Petersen
  1 sibling, 0 replies; 15+ messages in thread
From: Martin K. Petersen @ 2019-06-07 15:18 UTC (permalink / raw)

Bart,

> Several new fields have been introduced in version 1.4 of the NVMe spec
> at offsets that were defined as reserved in version 1.3d of the NVMe
> spec. Update the definition of the nvme_id_ns data structure such that
> it is in sync with version 1.4 of the NVMe spec.

Looks good.

Reviewed-by: Martin K. Petersen <martin.petersen at oracle.com>

-- 
Martin K. Petersen	Oracle Linux Engineering

^ permalink raw reply	[flat|nested] 15+ messages in thread
* [PATCH 2/2] nvme: Set physical block size and optimal I/O size according to NVMe 1.4
  2019-06-06 21:28 [PATCH 0/2] NVMe 1.4 Identify Namespace Support Bart Van Assche
  2019-06-06 21:28 ` [PATCH 1/2] nvme: Introduce NVMe 1.4 Identify Namespace fields in struct nvme_id_ns Bart Van Assche
@ 2019-06-06 21:28 ` Bart Van Assche
  2019-06-06 21:42   ` Chaitanya Kulkarni
  ` (2 more replies)
  2019-06-07 13:56 ` [PATCH 0/2] NVMe 1.4 Identify Namespace Support Keith Busch
  2 siblings, 3 replies; 15+ messages in thread
From: Bart Van Assche @ 2019-06-06 21:28 UTC (permalink / raw)

From the NVMe 1.4 spec:

NSFEAT bit 4 if set to 1: indicates that the fields NPWG, NPWA, NPDG, NPDA,
and NOWS are defined for this namespace and should be used by the host for
I/O optimization;
[ ... ]
Namespace Preferred Write Granularity (NPWG): This field indicates the
smallest recommended write granularity in logical blocks for this namespace.
This is a 0's based value. The size indicated should be less than or equal
to Maximum Data Transfer Size (MDTS) that is specified in units of minimum
memory page size. The value of this field may change if the namespace is
reformatted. The size should be a multiple of Namespace Preferred Write
Alignment (NPWA). Refer to section 8.25 for how this field is utilized to
improve performance and endurance.
[ ... ]
Each Write, Write Uncorrectable, or Write Zeroes commands should address a
multiple of Namespace Preferred Write Granularity (NPWG) (refer to Figure
245) and Stream Write Size (SWS) (refer to Figure 515) logical blocks (as
expressed in the NLB field), and the SLBA field of the command should be
aligned to Namespace Preferred Write Alignment (NPWA) (refer to Figure 245)
for best performance.
Cc: Christoph Hellwig <hch at lst.de>
Cc: Sagi Grimberg <sagi at grimberg.me>
Cc: Hannes Reinecke <hare at suse.com>
Signed-off-by: Bart Van Assche <bvanassche at acm.org>
---
 drivers/nvme/host/core.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 1b7c2afd84cb..c67f2fc8c5c0 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1608,6 +1608,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
 {
 	sector_t capacity = le64_to_cpu(id->nsze) << (ns->lba_shift - 9);
 	unsigned short bs = 1 << ns->lba_shift;
+	uint32_t phys_bs, io_opt;
 
 	if (ns->lba_shift > PAGE_SHIFT) {
 		/* unsupported block size, set capacity to 0 later */
@@ -1616,9 +1617,18 @@ static void nvme_update_disk_info(struct gendisk *disk,
 	blk_mq_freeze_queue(disk->queue);
 	blk_integrity_unregister(disk);
 
+	phys_bs = bs;
+	io_opt = bs;
+	if (id->nsfeat & (1 << 4)) {
+		/* NPWG = Namespace Preferred Write Granularity */
+		phys_bs *= 1 + le16_to_cpu(id->npwg);
+		/* NOWS = Namespace Optimal Write Size */
+		io_opt *= 1 + le16_to_cpu(id->nows);
+	}
 	blk_queue_logical_block_size(disk->queue, bs);
-	blk_queue_physical_block_size(disk->queue, bs);
-	blk_queue_io_min(disk->queue, bs);
+	blk_queue_physical_block_size(disk->queue, phys_bs);
+	blk_queue_io_min(disk->queue, phys_bs);
+	blk_queue_io_opt(disk->queue, io_opt);
 
 	if (ns->ms && !ns->ext &&
 	    (ns->ctrl->ops->flags & NVME_F_METADATA_SUPPORTED))
-- 
2.22.0.rc3

^ permalink raw reply related	[flat|nested] 15+ messages in thread
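As the spec excerpt in the commit message notes, NPWG and NOWS are 0's based counts of logical blocks: a raw value of 7 means 8 blocks, which is why the patch multiplies by (1 + value). A minimal sketch of that scaling arithmetic, with hypothetical identify values rather than data from any real controller:

```c
#include <assert.h>
#include <stdint.h>

/* NPWG and NOWS are 0's based logical-block counts, so the byte value
 * is lba_bytes * (1 + raw). This mirrors the arithmetic in the patch;
 * the kernel applies le16_to_cpu() first, omitted here. */
static inline uint32_t nvme_scale_0s_based(uint32_t lba_bytes, uint16_t raw)
{
	return lba_bytes * (1u + raw);
}
/* With 512-byte LBAs: NPWG raw 7 -> phys_bs = 512 * 8 = 4096 bytes,
 * NOWS raw 15 -> io_opt = 512 * 16 = 8192 bytes. */
```

Both results are then handed to the block layer as the physical block size and optimal I/O size respectively, so a 0's based value of 0 degenerates to the logical block size, matching the pre-patch behavior.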
* [PATCH 2/2] nvme: Set physical block size and optimal I/O size according to NVMe 1.4
  2019-06-06 21:28 ` [PATCH 2/2] nvme: Set physical block size and optimal I/O size according to NVMe 1.4 Bart Van Assche
@ 2019-06-06 21:42   ` Chaitanya Kulkarni
  2019-06-06 21:48     ` Bart Van Assche
  2019-06-07 15:19   ` Martin K. Petersen
  2019-06-07 16:42   ` Christoph Hellwig
  2 siblings, 1 reply; 15+ messages in thread
From: Chaitanya Kulkarni @ 2019-06-06 21:42 UTC (permalink / raw)

Looks good. One quick question, though: do you actually have a controller
on which you have tested this feature?

On 06/06/2019 02:29 PM, Bart Van Assche wrote:
> From the NVMe 1.4 spec:
>
> NSFEAT bit 4 if set to 1: indicates that the fields NPWG, NPWA, NPDG, NPDA,
> and NOWS are defined for this namespace and should be used by the host for
> I/O optimization;
> [ ... ]
> Namespace Preferred Write Granularity (NPWG): This field indicates the
> smallest recommended write granularity in logical blocks for this namespace.
> This is a 0's based value. The size indicated should be less than or equal
> to Maximum Data Transfer Size (MDTS) that is specified in units of minimum
> memory page size. The value of this field may change if the namespace is
> reformatted. The size should be a multiple of Namespace Preferred Write
> Alignment (NPWA). Refer to section 8.25 for how this field is utilized to
> improve performance and endurance.
> [ ... ]
> Each Write, Write Uncorrectable, or Write Zeroes commands should address a
> multiple of Namespace Preferred Write Granularity (NPWG) (refer to Figure
> 245) and Stream Write Size (SWS) (refer to Figure 515) logical blocks (as
> expressed in the NLB field), and the SLBA field of the command should be
> aligned to Namespace Preferred Write Alignment (NPWA) (refer to Figure 245)
> for best performance.
>
> Cc: Christoph Hellwig <hch at lst.de>
> Cc: Sagi Grimberg <sagi at grimberg.me>
> Cc: Hannes Reinecke <hare at suse.com>
> Signed-off-by: Bart Van Assche <bvanassche at acm.org>
> ---
>  drivers/nvme/host/core.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 1b7c2afd84cb..c67f2fc8c5c0 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -1608,6 +1608,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
>  {
>  	sector_t capacity = le64_to_cpu(id->nsze) << (ns->lba_shift - 9);
>  	unsigned short bs = 1 << ns->lba_shift;
> +	uint32_t phys_bs, io_opt;
>
>  	if (ns->lba_shift > PAGE_SHIFT) {
>  		/* unsupported block size, set capacity to 0 later */
> @@ -1616,9 +1617,18 @@ static void nvme_update_disk_info(struct gendisk *disk,
>  	blk_mq_freeze_queue(disk->queue);
>  	blk_integrity_unregister(disk);
>
> +	phys_bs = bs;
> +	io_opt = bs;
> +	if (id->nsfeat & (1 << 4)) {
> +		/* NPWG = Namespace Preferred Write Granularity */
> +		phys_bs *= 1 + le16_to_cpu(id->npwg);
> +		/* NOWS = Namespace Optimal Write Size */
> +		io_opt *= 1 + le16_to_cpu(id->nows);
> +	}
>  	blk_queue_logical_block_size(disk->queue, bs);
> -	blk_queue_physical_block_size(disk->queue, bs);
> -	blk_queue_io_min(disk->queue, bs);
> +	blk_queue_physical_block_size(disk->queue, phys_bs);
> +	blk_queue_io_min(disk->queue, phys_bs);
> +	blk_queue_io_opt(disk->queue, io_opt);
>
>  	if (ns->ms && !ns->ext &&
>  	    (ns->ctrl->ops->flags & NVME_F_METADATA_SUPPORTED))

^ permalink raw reply	[flat|nested] 15+ messages in thread
* [PATCH 2/2] nvme: Set physical block size and optimal I/O size according to NVMe 1.4
  2019-06-06 21:42   ` Chaitanya Kulkarni
@ 2019-06-06 21:48     ` Bart Van Assche
  2019-06-06 21:58       ` Chaitanya Kulkarni
  2019-06-07 16:42       ` Christoph Hellwig
  0 siblings, 2 replies; 15+ messages in thread
From: Bart Van Assche @ 2019-06-06 21:48 UTC (permalink / raw)

On 6/6/19 2:42 PM, Chaitanya Kulkarni wrote:
> Looks good. One quick question though do you actually have a controller
> on which you have tested this feature ?

Hi Chaitanya,

From the cover letter of this patch series: "These patches
compile correctly but have not been tested in any other way."

Bart.

^ permalink raw reply	[flat|nested] 15+ messages in thread
* [PATCH 2/2] nvme: Set physical block size and optimal I/O size according to NVMe 1.4
  2019-06-06 21:48     ` Bart Van Assche
@ 2019-06-06 21:58       ` Chaitanya Kulkarni
  0 siblings, 0 replies; 15+ messages in thread
From: Chaitanya Kulkarni @ 2019-06-06 21:58 UTC (permalink / raw)

On 06/06/2019 02:48 PM, Bart Van Assche wrote:
> On 6/6/19 2:42 PM, Chaitanya Kulkarni wrote:
>> Looks good. One quick question though do you actually have a controller
>> on which you have tested this feature ?
>
> Hi Chaitanya,
>
> From the cover letter of this patch series: "These patches
> compile correctly but have not been tested in any other way."
>
> Bart.

Thanks.

Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni at wdc.com>

^ permalink raw reply	[flat|nested] 15+ messages in thread
* [PATCH 2/2] nvme: Set physical block size and optimal I/O size according to NVMe 1.4
  2019-06-06 21:48     ` Bart Van Assche
  2019-06-06 21:58       ` Chaitanya Kulkarni
@ 2019-06-07 16:42       ` Christoph Hellwig
  1 sibling, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2019-06-07 16:42 UTC (permalink / raw)

On Thu, Jun 06, 2019@02:48:24PM -0700, Bart Van Assche wrote:
> On 6/6/19 2:42 PM, Chaitanya Kulkarni wrote:
>> Looks good. One quick question though do you actually have a controller
>> on which you have tested this feature ?
>
> Hi Chaitanya,
>
> From the cover letter of this patch series: "These patches
> compile correctly but have not been tested in any other way."

You could create something based on our Linux I/O limits for the Linux
nvme target to expose those and test the code.

^ permalink raw reply	[flat|nested] 15+ messages in thread
* [PATCH 2/2] nvme: Set physical block size and optimal I/O size according to NVMe 1.4
  2019-06-06 21:28 ` [PATCH 2/2] nvme: Set physical block size and optimal I/O size according to NVMe 1.4 Bart Van Assche
  2019-06-06 21:42   ` Chaitanya Kulkarni
@ 2019-06-07 15:19   ` Martin K. Petersen
  2019-06-07 16:42   ` Christoph Hellwig
  2 siblings, 0 replies; 15+ messages in thread
From: Martin K. Petersen @ 2019-06-07 15:19 UTC (permalink / raw)

Bart,

> +	phys_bs = bs;
> +	io_opt = bs;
> +	if (id->nsfeat & (1 << 4)) {
> +		/* NPWG = Namespace Preferred Write Granularity */
> +		phys_bs *= 1 + le16_to_cpu(id->npwg);
> +		/* NOWS = Namespace Optimal Write Size */
> +		io_opt *= 1 + le16_to_cpu(id->nows);
> +	}
>  	blk_queue_logical_block_size(disk->queue, bs);
> -	blk_queue_physical_block_size(disk->queue, bs);
> -	blk_queue_io_min(disk->queue, bs);
> +	blk_queue_physical_block_size(disk->queue, phys_bs);
> +	blk_queue_io_min(disk->queue, phys_bs);
> +	blk_queue_io_opt(disk->queue, io_opt);
>
>  	if (ns->ms && !ns->ext &&
>  	    (ns->ctrl->ops->flags & NVME_F_METADATA_SUPPORTED))

Also fine. Nice to get these wired up!

Reviewed-by: Martin K. Petersen <martin.petersen at oracle.com>

-- 
Martin K. Petersen	Oracle Linux Engineering

^ permalink raw reply	[flat|nested] 15+ messages in thread
* [PATCH 2/2] nvme: Set physical block size and optimal I/O size according to NVMe 1.4
  2019-06-06 21:28 ` [PATCH 2/2] nvme: Set physical block size and optimal I/O size according to NVMe 1.4 Bart Van Assche
  2019-06-06 21:42   ` Chaitanya Kulkarni
  2019-06-07 15:19   ` Martin K. Petersen
@ 2019-06-07 16:42   ` Christoph Hellwig
  2 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2019-06-07 16:42 UTC (permalink / raw)

On Thu, Jun 06, 2019@02:28:54PM -0700, Bart Van Assche wrote:
> +	phys_bs = bs;
> +	io_opt = bs;
> +	if (id->nsfeat & (1 << 4)) {
> +		/* NPWG = Namespace Preferred Write Granularity */
> +		phys_bs *= 1 + le16_to_cpu(id->npwg);
> +		/* NOWS = Namespace Optimal Write Size */
> +		io_opt *= 1 + le16_to_cpu(id->nows);
> +	}
>  	blk_queue_logical_block_size(disk->queue, bs);
> -	blk_queue_physical_block_size(disk->queue, bs);
> -	blk_queue_io_min(disk->queue, bs);
> +	blk_queue_physical_block_size(disk->queue, phys_bs);

Unfortunately it is not that simple. Filesystems expect the physical
block size to be an atomic writable unit, so this value will have to be
limited by the AWUPF/NAWUPF values.

^ permalink raw reply	[flat|nested] 15+ messages in thread
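The limit Christoph describes can be sketched in a few lines: AWUPF (controller-wide) and NAWUPF (per-namespace) are also 0's based logical-block counts, and the physical block size reported to the block layer should not exceed the resulting atomic write unit. The helper names and sample values below are illustrative assumptions, not the eventual kernel fix:

```c
#include <assert.h>
#include <stdint.h>

/* AWUPF/NAWUPF are 0's based logical-block counts, like NPWG, so the
 * atomic write unit in bytes follows the same (1 + raw) scaling. */
static inline uint32_t nvme_atomic_bs(uint32_t lba_bytes, uint16_t awupf_raw)
{
	return lba_bytes * (1u + awupf_raw);
}

/* Cap the NPWG-derived physical block size at the atomic write unit,
 * since filesystems treat the physical block size as atomically
 * writable. */
static inline uint32_t nvme_capped_phys_bs(uint32_t phys_bs, uint32_t atomic)
{
	return phys_bs < atomic ? phys_bs : atomic;
}
```

For example, with 512-byte LBAs, an NPWG-derived physical block size of 4096 bytes, and a NAWUPF raw value of 3 (an atomic unit of 4 blocks, 2048 bytes), the reported physical block size would be capped at 2048 bytes.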
* [PATCH 0/2] NVMe 1.4 Identify Namespace Support
  2019-06-06 21:28 [PATCH 0/2] NVMe 1.4 Identify Namespace Support Bart Van Assche
  2019-06-06 21:28 ` [PATCH 1/2] nvme: Introduce NVMe 1.4 Identify Namespace fields in struct nvme_id_ns Bart Van Assche
  2019-06-06 21:28 ` [PATCH 2/2] nvme: Set physical block size and optimal I/O size according to NVMe 1.4 Bart Van Assche
@ 2019-06-07 13:56 ` Keith Busch
  2019-06-07 15:21   ` Martin K. Petersen
  2 siblings, 1 reply; 15+ messages in thread
From: Keith Busch @ 2019-06-07 13:56 UTC (permalink / raw)

On Thu, Jun 6, 2019@3:29 PM Bart Van Assche <bvanassche@acm.org> wrote:
> Hi Keith,
>
> These two patches are what I came up with after having read the Identify
> Namespace sections in final draft of version 1.4 of the NVMe. These patches
> compile correctly but have not been tested in any other way. Please consider
> these patches for kernel version 5.3.
>
> Thanks,
>
> Bart.
>
> Bart Van Assche (2):
>   nvme: Introduce NVMe 1.4 Identify Namespace fields in struct nvme_id_ns
>   nvme: Set physical block size and optimal I/O size according to NVMe 1.4
>
>  drivers/nvme/host/core.c | 14 ++++++++++++--
>  include/linux/nvme.h     | 12 +++++++++---
>  2 files changed, 21 insertions(+), 5 deletions(-)

Series looks good. There doesn't seem to be much in-kernel use for these
preferred access attributes unfortunately, but this is a good start.

Reviewed-by: Keith Busch <kbusch at kernel.org>

^ permalink raw reply	[flat|nested] 15+ messages in thread
* [PATCH 0/2] NVMe 1.4 Identify Namespace Support
  2019-06-07 13:56 ` [PATCH 0/2] NVMe 1.4 Identify Namespace Support Keith Busch
@ 2019-06-07 15:21   ` Martin K. Petersen
  2019-06-07 16:26     ` Keith Busch
  0 siblings, 1 reply; 15+ messages in thread
From: Martin K. Petersen @ 2019-06-07 15:21 UTC (permalink / raw)

Keith,

> Series looks good. There doesn't seem to be much in-kernel use for
> these preferred access attributes unfortunately, but this is a good
> start.

Userland makes use of them to ensure partition/MD/DM alignment, pick
sane values for filesystem layout, etc.

-- 
Martin K. Petersen	Oracle Linux Engineering

^ permalink raw reply	[flat|nested] 15+ messages in thread
* [PATCH 0/2] NVMe 1.4 Identify Namespace Support
  2019-06-07 15:21   ` Martin K. Petersen
@ 2019-06-07 16:26     ` Keith Busch
  2019-06-07 16:44       ` Christoph Hellwig
  0 siblings, 1 reply; 15+ messages in thread
From: Keith Busch @ 2019-06-07 16:26 UTC (permalink / raw)

On Fri, Jun 7, 2019 at 9:21 AM Martin K. Petersen
<martin.petersen@oracle.com> wrote:
>> Series looks good. There doesn't seem to be much in-kernel use for
>> these preferred access attributes unfortunately, but this is a good
>> start.
>
> Userland makes use of them to ensure partition/MD/DM alignment, pick
> sane values for filesystem layout, etc.

Okay, that makes sense. The intention for this feature is to communicate
potentially larger physical blocks than page sizes, so I was just hoping
for enforcing that access from filesystems. Something like this from xfs

  https://lwn.net/Articles/770975/

would be a nice feature.

^ permalink raw reply	[flat|nested] 15+ messages in thread
* [PATCH 0/2] NVMe 1.4 Identify Namespace Support
  2019-06-07 16:26     ` Keith Busch
@ 2019-06-07 16:44       ` Christoph Hellwig
  0 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2019-06-07 16:44 UTC (permalink / raw)

On Fri, Jun 07, 2019@10:26:47AM -0600, Keith Busch wrote:
> The intention for this feature is to communicate potentially larger
> physical blocks than page sizes, so I was just hoping for enforcing
> that access from filesystems. Something like this from xfs
>
>   https://lwn.net/Articles/770975/
>
> would be a nice feature.

No need for new NVMe features for that, you just need a controller that
exposes an LBA format with a data size larger than 4k, which has been
perfectly doable since NVMe 1.0.

^ permalink raw reply	[flat|nested] 15+ messages in thread
end of thread, other threads:[~2019-06-07 16:44 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-06-06 21:28 [PATCH 0/2] NVMe 1.4 Identify Namespace Support Bart Van Assche
2019-06-06 21:28 ` [PATCH 1/2] nvme: Introduce NVMe 1.4 Identify Namespace fields in struct nvme_id_ns Bart Van Assche
2019-06-06 21:40   ` Chaitanya Kulkarni
2019-06-07 15:18   ` Martin K. Petersen
2019-06-06 21:28 ` [PATCH 2/2] nvme: Set physical block size and optimal I/O size according to NVMe 1.4 Bart Van Assche
2019-06-06 21:42   ` Chaitanya Kulkarni
2019-06-06 21:48     ` Bart Van Assche
2019-06-06 21:58       ` Chaitanya Kulkarni
2019-06-07 16:42         ` Christoph Hellwig
2019-06-07 15:19   ` Martin K. Petersen
2019-06-07 16:42   ` Christoph Hellwig
2019-06-07 13:56 ` [PATCH 0/2] NVMe 1.4 Identify Namespace Support Keith Busch
2019-06-07 15:21   ` Martin K. Petersen
2019-06-07 16:26     ` Keith Busch
2019-06-07 16:44       ` Christoph Hellwig