qemu-devel.nongnu.org archive mirror
* [PATCH] hw/block/nvme: slba equal to nsze is out of bounds if nlb is 1-based
       [not found] <CGME20210409074451epcas5p391e5b072e6245b8fe691d67bb42fb234@epcas5p3.samsung.com>
@ 2021-04-09  7:44 ` Gollu Appalanaidu
  2021-04-09 11:05   ` Minwoo Im
  0 siblings, 1 reply; 8+ messages in thread
From: Gollu Appalanaidu @ 2021-04-09  7:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: fam, kwolf, qemu-block, Gollu Appalanaidu, mreitz, its, stefanha, kbusch

NSZE is the total size of the namespace in logical blocks, so the maximum
addressable logical block is NSZE minus 1. A starting logical block (SLBA)
equal to NSZE is therefore out of range.

Signed-off-by: Gollu Appalanaidu <anaidu.gollu@samsung.com>
---
 hw/block/nvme.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 953ec64729..be9edb1158 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -2527,7 +2527,7 @@ static uint16_t nvme_dsm(NvmeCtrl *n, NvmeRequest *req)
             uint64_t slba = le64_to_cpu(range[i].slba);
             uint32_t nlb = le32_to_cpu(range[i].nlb);
 
-            if (nvme_check_bounds(ns, slba, nlb)) {
+            if (nvme_check_bounds(ns, slba, nlb) || slba == ns->id_ns.nsze) {
                 trace_pci_nvme_err_invalid_lba_range(slba, nlb,
                                                      ns->id_ns.nsze);
                 continue;
-- 
2.17.1
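
A minimal, self-contained sketch of the arithmetic behind this change
(the values below are hypothetical; only the comparisons mirror the check
in nvme_dsm() above):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t nsze = 1024;  /* namespace size: valid LBAs are 0..1023 */
    uint64_t slba = 1024;  /* equal to NSZE, one past the last valid LBA */
    uint32_t nlb  = 0;     /* DSM range lengths are not 0's-based, so 0 is possible */

    /* The generic bounds check passes: slba + 0 > nsze is false. */
    bool generic_oob = (UINT64_MAX - slba < nlb) || (slba + nlb > nsze);

    /* The additional condition added by this patch catches it. */
    bool patched_oob = generic_oob || (slba == nsze);

    printf("generic: %d, patched: %d\n", generic_oob, patched_oob); /* 0, 1 */
    return 0;
}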




* Re: [PATCH] hw/block/nvme: slba equal to nsze is out of bounds if nlb is 1-based
  2021-04-09  7:44 ` [PATCH] hw/block/nvme: slba equal to nsze is out of bounds if nlb is 1-based Gollu Appalanaidu
@ 2021-04-09 11:05   ` Minwoo Im
  2021-04-09 11:55     ` Klaus Jensen
  0 siblings, 1 reply; 8+ messages in thread
From: Minwoo Im @ 2021-04-09 11:05 UTC (permalink / raw)
  To: Gollu Appalanaidu
  Cc: fam, kwolf, qemu-block, qemu-devel, mreitz, kbusch, stefanha, its

On 21-04-09 13:14:02, Gollu Appalanaidu wrote:
> NSZE is the total size of the namespace in logical blocks, so the maximum
> addressable logical block is NSZE minus 1. A starting logical block (SLBA)
> equal to NSZE is therefore out of range.
> 
> Signed-off-by: Gollu Appalanaidu <anaidu.gollu@samsung.com>
> ---
>  hw/block/nvme.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> index 953ec64729..be9edb1158 100644
> --- a/hw/block/nvme.c
> +++ b/hw/block/nvme.c
> @@ -2527,7 +2527,7 @@ static uint16_t nvme_dsm(NvmeCtrl *n, NvmeRequest *req)
>              uint64_t slba = le64_to_cpu(range[i].slba);
>              uint32_t nlb = le32_to_cpu(range[i].nlb);
>  
> -            if (nvme_check_bounds(ns, slba, nlb)) {
> +            if (nvme_check_bounds(ns, slba, nlb) || slba == ns->id_ns.nsze) {

This patch also looks like it checks a boundary on slba.  Should that check
also live inside nvme_check_bounds()?



* Re: [PATCH] hw/block/nvme: slba equal to nsze is out of bounds if nlb is 1-based
  2021-04-09 11:05   ` Minwoo Im
@ 2021-04-09 11:55     ` Klaus Jensen
  2021-04-09 12:31       ` Minwoo Im
  2021-04-09 15:30       ` Keith Busch
  0 siblings, 2 replies; 8+ messages in thread
From: Klaus Jensen @ 2021-04-09 11:55 UTC (permalink / raw)
  To: Minwoo Im
  Cc: fam, kwolf, qemu-block, Gollu Appalanaidu, qemu-devel, mreitz,
	stefanha, kbusch


On Apr  9 20:05, Minwoo Im wrote:
>On 21-04-09 13:14:02, Gollu Appalanaidu wrote:
>> NSZE is the total size of the namespace in logical blocks, so the maximum
>> addressable logical block is NSZE minus 1. A starting logical block (SLBA)
>> equal to NSZE is therefore out of range.
>>
>> Signed-off-by: Gollu Appalanaidu <anaidu.gollu@samsung.com>
>> ---
>>  hw/block/nvme.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/hw/block/nvme.c b/hw/block/nvme.c
>> index 953ec64729..be9edb1158 100644
>> --- a/hw/block/nvme.c
>> +++ b/hw/block/nvme.c
>> @@ -2527,7 +2527,7 @@ static uint16_t nvme_dsm(NvmeCtrl *n, NvmeRequest *req)
>>              uint64_t slba = le64_to_cpu(range[i].slba);
>>              uint32_t nlb = le32_to_cpu(range[i].nlb);
>>
>> -            if (nvme_check_bounds(ns, slba, nlb)) {
>> +            if (nvme_check_bounds(ns, slba, nlb) || slba == ns->id_ns.nsze) {
>
>This patch also looks like it checks a boundary on slba.  Should that check
>also live inside nvme_check_bounds()?

The catch here is that DSM is like the only command where the number of 
logical blocks is a 1s-based value. Otherwise we always have nlb > 0, 
which means that nvme_check_bounds() will always "do the right thing".

My main gripe here is that (in my mind), by definition, a "zero length 
range" does not reference any LBAs at all. So how can it result in LBA 
Out of Range?



* Re: [PATCH] hw/block/nvme: slba equal to nsze is out of bounds if nlb is 1-based
  2021-04-09 11:55     ` Klaus Jensen
@ 2021-04-09 12:31       ` Minwoo Im
  2021-04-09 12:36         ` Klaus Jensen
  2021-04-09 15:30       ` Keith Busch
  1 sibling, 1 reply; 8+ messages in thread
From: Minwoo Im @ 2021-04-09 12:31 UTC (permalink / raw)
  To: Klaus Jensen
  Cc: fam, kwolf, qemu-block, Gollu Appalanaidu, qemu-devel, mreitz,
	stefanha, kbusch

On 21-04-09 13:55:01, Klaus Jensen wrote:
> On Apr  9 20:05, Minwoo Im wrote:
> > On 21-04-09 13:14:02, Gollu Appalanaidu wrote:
> > > NSZE is the total size of the namespace in logical blocks, so the maximum
> > > addressable logical block is NSZE minus 1. A starting logical block (SLBA)
> > > equal to NSZE is therefore out of range.
> > > 
> > > Signed-off-by: Gollu Appalanaidu <anaidu.gollu@samsung.com>
> > > ---
> > >  hw/block/nvme.c | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > 
> > > diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> > > index 953ec64729..be9edb1158 100644
> > > --- a/hw/block/nvme.c
> > > +++ b/hw/block/nvme.c
> > > @@ -2527,7 +2527,7 @@ static uint16_t nvme_dsm(NvmeCtrl *n, NvmeRequest *req)
> > >              uint64_t slba = le64_to_cpu(range[i].slba);
> > >              uint32_t nlb = le32_to_cpu(range[i].nlb);
> > > 
> > > -            if (nvme_check_bounds(ns, slba, nlb)) {
> > > +            if (nvme_check_bounds(ns, slba, nlb) || slba == ns->id_ns.nsze) {
> > 
> > This patch also looks like it checks a boundary on slba.  Should that check
> > also live inside nvme_check_bounds()?
> 
> The catch here is that DSM is like the only command where the number of
> logical blocks is a 1s-based value. Otherwise we always have nlb > 0, which
> means that nvme_check_bounds() will always "do the right thing".
> 
> My main gripe here is that (in my mind), by definition, a "zero length
> range" does not reference any LBAs at all. So how can it result in LBA Out
> of Range?

Even if this is not the LBA-out-of-range case that nvme_check_bounds()
currently covers, I thought the function checks the bounds, so we could add
one more check inside that function, like this:
(Whether SLBA is 0-based or not, slba should not be nsze, right?)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 7244534a89e9..25a7db5ecbd8 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -1415,6 +1415,10 @@ static inline uint16_t nvme_check_bounds(NvmeNamespace *ns, uint64_t slba,
 {
     uint64_t nsze = le64_to_cpu(ns->id_ns.nsze);
 
+    if (slba == nsze) {
+        return NVME_INVALID_FIELD | NVME_DNR;
+    }
+
     if (unlikely(UINT64_MAX - slba < nlb || slba + nlb > nsze)) {
         return NVME_LBA_RANGE | NVME_DNR;
     }

Or am I missing something here ;) ?



* Re: [PATCH] hw/block/nvme: slba equal to nsze is out of bounds if nlb is 1-based
  2021-04-09 12:31       ` Minwoo Im
@ 2021-04-09 12:36         ` Klaus Jensen
  2021-04-09 12:48           ` Minwoo Im
  0 siblings, 1 reply; 8+ messages in thread
From: Klaus Jensen @ 2021-04-09 12:36 UTC (permalink / raw)
  To: Minwoo Im
  Cc: fam, kwolf, qemu-block, Gollu Appalanaidu, qemu-devel, mreitz,
	stefanha, kbusch


On Apr  9 21:31, Minwoo Im wrote:
>On 21-04-09 13:55:01, Klaus Jensen wrote:
>> On Apr  9 20:05, Minwoo Im wrote:
>> > On 21-04-09 13:14:02, Gollu Appalanaidu wrote:
>> > > NSZE is the total size of the namespace in logical blocks, so the maximum
>> > > addressable logical block is NSZE minus 1. A starting logical block (SLBA)
>> > > equal to NSZE is therefore out of range.
>> > >
>> > > Signed-off-by: Gollu Appalanaidu <anaidu.gollu@samsung.com>
>> > > ---
>> > >  hw/block/nvme.c | 2 +-
>> > >  1 file changed, 1 insertion(+), 1 deletion(-)
>> > >
>> > > diff --git a/hw/block/nvme.c b/hw/block/nvme.c
>> > > index 953ec64729..be9edb1158 100644
>> > > --- a/hw/block/nvme.c
>> > > +++ b/hw/block/nvme.c
>> > > @@ -2527,7 +2527,7 @@ static uint16_t nvme_dsm(NvmeCtrl *n, NvmeRequest *req)
>> > >              uint64_t slba = le64_to_cpu(range[i].slba);
>> > >              uint32_t nlb = le32_to_cpu(range[i].nlb);
>> > >
>> > > -            if (nvme_check_bounds(ns, slba, nlb)) {
>> > > +            if (nvme_check_bounds(ns, slba, nlb) || slba == ns->id_ns.nsze) {
>> >
>> > This patch also looks like it checks a boundary on slba.  Should that check
>> > also live inside nvme_check_bounds()?
>>
>> The catch here is that DSM is like the only command where the number of
>> logical blocks is a 1s-based value. Otherwise we always have nlb > 0, which
>> means that nvme_check_bounds() will always "do the right thing".
>>
>> My main gripe here is that (in my mind), by definition, a "zero length
>> range" does not reference any LBAs at all. So how can it result in LBA Out
>> of Range?
>
>Even if this is not the LBA-out-of-range case that nvme_check_bounds()
>currently covers, I thought the function checks the bounds, so we could add
>one more check inside that function, like this:
>(Whether SLBA is 0-based or not, slba should not be nsze, right?)
>
>diff --git a/hw/block/nvme.c b/hw/block/nvme.c
>index 7244534a89e9..25a7db5ecbd8 100644
>--- a/hw/block/nvme.c
>+++ b/hw/block/nvme.c
>@@ -1415,6 +1415,10 @@ static inline uint16_t nvme_check_bounds(NvmeNamespace *ns, uint64_t slba,
> {
>     uint64_t nsze = le64_to_cpu(ns->id_ns.nsze);
>
>+    if (slba == nsze) {
>+        return NVME_INVALID_FIELD | NVME_DNR;
>+    }
>+
>     if (unlikely(UINT64_MAX - slba < nlb || slba + nlb > nsze)) {
>         return NVME_LBA_RANGE | NVME_DNR;
>     }
>
>Or am I missing something here ;) ?

No, not at all, it's just that this additional check is never needed for 
any other command than DSM since, as far as I remember, DSM is the only 
command with the 1s-based NLB value fuckup.

This means that nlb will always be at least 1, so slba + nlb > nsze will 
already be true (and the bounds check will catch it) if slba == nsze.

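
To make that concrete, a small sketch (hypothetical values; the "+ 1"
reflects that the Read/Write NLB field is 0's-based, so the effective count
is always at least 1, whereas a DSM range length is used as-is):

#include <assert.h>
#include <stdint.h>

int main(void)
{
    uint64_t nsze = 1024;   /* last valid LBA is 1023 */
    uint64_t slba = nsze;   /* start one block past the end */

    /* Read/Write: the NLB field is 0's-based, so the effective count is field + 1. */
    uint32_t rw_nlb = 0 + 1;            /* always >= 1 */
    assert(slba + rw_nlb > nsze);       /* 1025 > 1024: generic check rejects it */

    /* DSM: the length is taken as-is, so a zero-length range is representable. */
    uint32_t dsm_nlb = 0;
    assert(!(slba + dsm_nlb > nsze));   /* 1024 > 1024 is false: generic check passes */

    return 0;
}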


* Re: [PATCH] hw/block/nvme: slba equal to nsze is out of bounds if nlb is 1-based
  2021-04-09 12:36         ` Klaus Jensen
@ 2021-04-09 12:48           ` Minwoo Im
  0 siblings, 0 replies; 8+ messages in thread
From: Minwoo Im @ 2021-04-09 12:48 UTC (permalink / raw)
  To: Klaus Jensen
  Cc: fam, kwolf, qemu-block, Gollu Appalanaidu, qemu-devel, mreitz,
	stefanha, kbusch

On 21-04-09 14:36:19, Klaus Jensen wrote:
> On Apr  9 21:31, Minwoo Im wrote:
> > On 21-04-09 13:55:01, Klaus Jensen wrote:
> > > On Apr  9 20:05, Minwoo Im wrote:
> > > > On 21-04-09 13:14:02, Gollu Appalanaidu wrote:
> > > > > NSZE is the total size of the namespace in logical blocks, so the maximum
> > > > > addressable logical block is NSZE minus 1. A starting logical block (SLBA)
> > > > > equal to NSZE is therefore out of range.
> > > > >
> > > > > Signed-off-by: Gollu Appalanaidu <anaidu.gollu@samsung.com>
> > > > > ---
> > > > >  hw/block/nvme.c | 2 +-
> > > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > > >
> > > > > diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> > > > > index 953ec64729..be9edb1158 100644
> > > > > --- a/hw/block/nvme.c
> > > > > +++ b/hw/block/nvme.c
> > > > > @@ -2527,7 +2527,7 @@ static uint16_t nvme_dsm(NvmeCtrl *n, NvmeRequest *req)
> > > > >              uint64_t slba = le64_to_cpu(range[i].slba);
> > > > >              uint32_t nlb = le32_to_cpu(range[i].nlb);
> > > > >
> > > > > -            if (nvme_check_bounds(ns, slba, nlb)) {
> > > > > +            if (nvme_check_bounds(ns, slba, nlb) || slba == ns->id_ns.nsze) {
> > > >
> > > > This patch also looks like it checks a boundary on slba.  Should that check
> > > > also live inside nvme_check_bounds()?
> > > 
> > > The catch here is that DSM is like the only command where the number of
> > > logical blocks is a 1s-based value. Otherwise we always have nlb > 0, which
> > > means that nvme_check_bounds() will always "do the right thing".
> > > 
> > > My main gripe here is that (in my mind), by definition, a "zero length
> > > range" does not reference any LBAs at all. So how can it result in LBA Out
> > > of Range?
> > 
> > Even if this is not the LBA-out-of-range case that nvme_check_bounds()
> > currently covers, I thought the function checks the bounds, so we could add
> > one more check inside that function, like this:
> > (Whether SLBA is 0-based or not, slba should not be nsze, right?)
> > 
> > diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> > index 7244534a89e9..25a7db5ecbd8 100644
> > --- a/hw/block/nvme.c
> > +++ b/hw/block/nvme.c
> > @@ -1415,6 +1415,10 @@ static inline uint16_t nvme_check_bounds(NvmeNamespace *ns, uint64_t slba,
> > {
> >     uint64_t nsze = le64_to_cpu(ns->id_ns.nsze);
> > 
> > +    if (slba == nsze) {
> > +        return NVME_INVALID_FIELD | NVME_DNR;
> > +    }
> > +
> >     if (unlikely(UINT64_MAX - slba < nlb || slba + nlb > nsze)) {
> >         return NVME_LBA_RANGE | NVME_DNR;
> >     }
> > 
> > Or am I missing something here ;) ?
> 
> No, not at all, it's just that this additional check is never needed for any
> other command than DSM since, as far as I remember, DSM is the only command
> with the 1s-based NLB value fuckup.
> 
> This means that nlb will always be at least 1, so slba + nlb > nsze will
> already be true (and the bounds check will catch it) if slba == nsze.

Understood :)

Please have:

Reviewed-by: Minwoo Im <minwoo.im.dev@gmail.com>



* Re: [PATCH] hw/block/nvme: slba equal to nsze is out of bounds if nlb is 1-based
  2021-04-09 11:55     ` Klaus Jensen
  2021-04-09 12:31       ` Minwoo Im
@ 2021-04-09 15:30       ` Keith Busch
  2021-04-09 16:57         ` Klaus Jensen
  1 sibling, 1 reply; 8+ messages in thread
From: Keith Busch @ 2021-04-09 15:30 UTC (permalink / raw)
  To: Klaus Jensen
  Cc: fam, kwolf, qemu-block, Gollu Appalanaidu, qemu-devel, mreitz,
	Minwoo Im, stefanha

On Fri, Apr 09, 2021 at 01:55:01PM +0200, Klaus Jensen wrote:
> On Apr  9 20:05, Minwoo Im wrote:
> > On 21-04-09 13:14:02, Gollu Appalanaidu wrote:
> > > NSZE is the total size of the namespace in logical blocks, so the maximum
> > > addressable logical block is NSZE minus 1. A starting logical block (SLBA)
> > > equal to NSZE is therefore out of range.
> > > 
> > > Signed-off-by: Gollu Appalanaidu <anaidu.gollu@samsung.com>
> > > ---
> > >  hw/block/nvme.c | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > 
> > > diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> > > index 953ec64729..be9edb1158 100644
> > > --- a/hw/block/nvme.c
> > > +++ b/hw/block/nvme.c
> > > @@ -2527,7 +2527,7 @@ static uint16_t nvme_dsm(NvmeCtrl *n, NvmeRequest *req)
> > >              uint64_t slba = le64_to_cpu(range[i].slba);
> > >              uint32_t nlb = le32_to_cpu(range[i].nlb);
> > > 
> > > -            if (nvme_check_bounds(ns, slba, nlb)) {
> > > +            if (nvme_check_bounds(ns, slba, nlb) || slba == ns->id_ns.nsze) {
> > 
> > This patch also looks like it checks a boundary on slba.  Should that check
> > also live inside nvme_check_bounds()?
> 
> The catch here is that DSM is like the only command where the number of
> logical blocks is a 1s-based value. Otherwise we always have nlb > 0, which
> means that nvme_check_bounds() will always "do the right thing".
> 
> My main gripe here is that (in my mind), by definition, a "zero length
> range" does not reference any LBAs at all. So how can it result in LBA Out
> of Range?

So what's the problem? If the request is to discard 0 blocks starting
from the last block, then that's valid. Is this patch actually fixing
anything?



* Re: [PATCH] hw/block/nvme: slba equal to nsze is out of bounds if nlb is 1-based
  2021-04-09 15:30       ` Keith Busch
@ 2021-04-09 16:57         ` Klaus Jensen
  0 siblings, 0 replies; 8+ messages in thread
From: Klaus Jensen @ 2021-04-09 16:57 UTC (permalink / raw)
  To: Keith Busch
  Cc: fam, kwolf, qemu-block, Gollu Appalanaidu, qemu-devel, mreitz, stefanha


On Apr 10 00:30, Keith Busch wrote:
>On Fri, Apr 09, 2021 at 01:55:01PM +0200, Klaus Jensen wrote:
>> On Apr  9 20:05, Minwoo Im wrote:
>> > On 21-04-09 13:14:02, Gollu Appalanaidu wrote:
>> > > NSZE is the total size of the namespace in logical blocks, so the maximum
>> > > addressable logical block is NSZE minus 1. A starting logical block (SLBA)
>> > > equal to NSZE is therefore out of range.
>> > >
>> > > Signed-off-by: Gollu Appalanaidu <anaidu.gollu@samsung.com>
>> > > ---
>> > >  hw/block/nvme.c | 2 +-
>> > >  1 file changed, 1 insertion(+), 1 deletion(-)
>> > >
>> > > diff --git a/hw/block/nvme.c b/hw/block/nvme.c
>> > > index 953ec64729..be9edb1158 100644
>> > > --- a/hw/block/nvme.c
>> > > +++ b/hw/block/nvme.c
>> > > @@ -2527,7 +2527,7 @@ static uint16_t nvme_dsm(NvmeCtrl *n, NvmeRequest *req)
>> > >              uint64_t slba = le64_to_cpu(range[i].slba);
>> > >              uint32_t nlb = le32_to_cpu(range[i].nlb);
>> > >
>> > > -            if (nvme_check_bounds(ns, slba, nlb)) {
>> > > +            if (nvme_check_bounds(ns, slba, nlb) || slba == ns->id_ns.nsze) {
>> >
>> > This patch also looks like it checks a boundary on slba.  Should that check
>> > also live inside nvme_check_bounds()?
>>
>> The catch here is that DSM is like the only command where the number of
>> logical blocks is a 1s-based value. Otherwise we always have nlb > 0, which
>> means that nvme_check_bounds() will always "do the right thing".
>>
>> My main gripe here is that (in my mind), by definition, a "zero length
>> range" does not reference any LBAs at all. So how can it result in LBA Out
>> of Range?
>
>So what's the problem? If the request is to discard 0 blocks starting
>from the last block, then that's valid. Is this patch actually fixing
>anything?
>

If SLBA == NSZE we are out of bounds, since the last addressable block is 
NSZE-1. But I don't consider the current behavior buggy or wrong; the 
device correctly handles the zero-length range by simply not discarding 
anything anywhere.

The spec is pretty unclear on how invalid ranges in DSM are handled. My 
interpretation is that the advisory nature of DSM allows it to do best 
effort, but as Gollu is suggesting here, a device could just as well 
decide to validate the ranges and return an appropriate status code if 
it wanted to.

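
A toy, self-contained illustration of those two possible behaviors, skipping
invalid ranges versus failing the command, with made-up types and values
(this is a sketch, not the QEMU implementation):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct dsm_range {
    uint64_t slba;
    uint32_t nlb;
};

/* Returns true if all ranges were accepted.  With strict == false, invalid
 * ranges are silently skipped (best effort); with strict == true, the first
 * invalid range fails the whole command. */
static bool process_dsm(const struct dsm_range *range, int nr,
                        uint64_t nsze, bool strict)
{
    for (int i = 0; i < nr; i++) {
        uint64_t slba = range[i].slba;
        uint32_t nlb = range[i].nlb;
        bool invalid = slba + nlb > nsze || slba == nsze;

        if (invalid) {
            if (strict) {
                return false;   /* e.g. report LBA Range | DNR to the host */
            }
            continue;           /* best effort: ignore this range */
        }
        printf("discard %u block(s) at LBA %llu\n",
               nlb, (unsigned long long)slba);
    }
    return true;
}

int main(void)
{
    struct dsm_range ranges[] = { { 0, 8 }, { 1024, 0 } };  /* nsze = 1024 */
    process_dsm(ranges, 2, 1024, false);  /* prints once, skips second range */
    process_dsm(ranges, 2, 1024, true);   /* prints once, then fails */
    return 0;
}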

