* Re: [PATCH] nvme: fix bit buckets
       [not found] <20220422163721.3392373-1-kbusch@kernel.org>
@ 2022-04-25  9:59 ` Klaus Jensen
  2022-04-25 14:33   ` Keith Busch
From: Klaus Jensen @ 2022-04-25  9:59 UTC
  To: Keith Busch; +Cc: kbusch, qemu-block, qemu-devel

+qemu-devel

On Apr 22 09:37, Keith Busch wrote:
> We can't just ignore the bit buckets since the data offsets read from
> disk need to be accounted for. We could either split into multiple reads
> for the actual user data requested and skip the buckets, or we could
> have a placeholder for bucket data. Splitting adds too much overhead, so
> just allocate a memory region to dump unwanted data into.
> 
> Signed-off-by: Keith Busch <kbusch@kernel.org>
> ---
> This came out easier than I thought, so we can ignore my previous
> feature removal patch:
>   https://lists.nongnu.org/archive/html/qemu-block/2022-04/msg00398.html
> 
>  hw/nvme/ctrl.c | 9 +++++----
>  hw/nvme/nvme.h | 1 +
>  2 files changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
> index 03760ddeae..711c6fac29 100644
> --- a/hw/nvme/ctrl.c
> +++ b/hw/nvme/ctrl.c
> @@ -845,11 +845,11 @@ static uint16_t nvme_map_sgl_data(NvmeCtrl *n, NvmeSg *sg,
>          trans_len = MIN(*len, dlen);
>  
>          if (type == NVME_SGL_DESCR_TYPE_BIT_BUCKET) {
> -            goto next;
> +            addr = n->bitBucket.addr;
> +        } else {
> +            addr = le64_to_cpu(segment[i].addr);
>          }
>  
> -        addr = le64_to_cpu(segment[i].addr);
> -
>          if (UINT64_MAX - addr < dlen) {
>              return NVME_DATA_SGL_LEN_INVALID | NVME_DNR;
>          }
> @@ -859,7 +859,6 @@ static uint16_t nvme_map_sgl_data(NvmeCtrl *n, NvmeSg *sg,
>              return status;
>          }
>  
> -next:
>          *len -= trans_len;
>      }
>  
> @@ -6686,6 +6685,8 @@ static int nvme_init_pci(NvmeCtrl *n, PCIDevice *pci_dev, Error **errp)
>          nvme_init_pmr(n, pci_dev);
>      }
>  
> +    memory_region_init(&n->bitBucket, OBJECT(n), NULL, 0x100000);
> +
>      return 0;
>  }
>  
> diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h
> index 739c8b8f79..d59eadc69d 100644
> --- a/hw/nvme/nvme.h
> +++ b/hw/nvme/nvme.h
> @@ -411,6 +411,7 @@ typedef struct NvmeCtrl {
>      PCIDevice    parent_obj;
>      MemoryRegion bar0;
>      MemoryRegion iomem;
> +    MemoryRegion bitBucket;
>      NvmeBar      bar;
>      NvmeParams   params;
>      NvmeBus      bus;
> -- 
> 2.30.2
> 

The approach is neat and simple, but I don't think it has the intended
effect. The memory region addr is just left at 0x0, so we end up mapping
that directly into the qsg, and in my test setup this basically does DMA
to the admin completion queue, which happens to be at 0x0.
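
A sketch of the sequence as I read it (illustrative, not verbatim from
the tree):

    /* memory_region_init() only initializes the object; nothing ever
     * maps the region with memory_region_add_subregion(), so its addr
     * field stays at the zeroed default. */
    memory_region_init(&n->bitBucket, OBJECT(n), NULL, 0x100000);
    /* n->bitBucket.addr == 0x0 here */

    /* nvme_map_sgl_data() then picks that up as if it were a guest
     * physical address, and nvme_map_addr() records it in the qsg: */
    addr = n->bitBucket.addr;                    /* 0x0 */
    qemu_sglist_add(&sg->qsg, addr, trans_len);  /* DMA to guest PA 0x0 */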

I would have liked to handle it like we do for CMB addresses and reserve
some address known to the device (i.e. remapping to a locally allocated
buffer), but we can't easily do that because of the iov/qsg duality: a
mapping is either a qsg of guest addresses or an iov of host pointers,
and the dma helpers won't work with the latter.
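
For comparison, the CMB path does roughly the following (helper names
as in hw/nvme/ctrl.c; sketch only):

    if (nvme_addr_is_cmb(n, addr)) {
        /* device-local backing: translate to a host pointer and fill
         * the iov */
        qemu_iovec_add(&sg->iov, nvme_addr_to_cmb(n, addr), len);
    } else {
        /* guest memory: record the physical address in the qsg */
        qemu_sglist_add(&sg->qsg, addr, len);
    }

A request ends up either all-iov or all-qsg, and the dma_blk_* helpers
only consume a qsg, so a locally backed bucket buffer can't be spliced
into a guest-memory transfer.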

For now, I think we need to just rip out the bit bucket support.

* Re: [PATCH] nvme: fix bit buckets
  2022-04-25  9:59 ` [PATCH] nvme: fix bit buckets Klaus Jensen
@ 2022-04-25 14:33   ` Keith Busch
From: Keith Busch @ 2022-04-25 14:33 UTC
  To: Klaus Jensen; +Cc: kbusch, qemu-block, qemu-devel

On Mon, Apr 25, 2022 at 11:59:30AM +0200, Klaus Jensen wrote:
> The approach is neat and simple, but I don't think it has the intended
> effect. The memory region addr is just left at 0x0, so we end up mapping
> that directly into the qsg, and in my test setup this basically does DMA
> to the admin completion queue, which happens to be at 0x0.
> 
> I would have liked to handle it like we do for CMB addresses and reserve
> some address known to the device (i.e. remapping to a locally allocated
> buffer), but we can't easily do that because of the iov/qsg duality: a
> mapping is either a qsg of guest addresses or an iov of host pointers,
> and the dma helpers won't work with the latter.
> 
> For now, I think we need to just rip out the bit bucket support.

Ah crap, I think you're right. Not as simple as I thought. The idea was to
have a dummy DMA-able region. We can have one controller DMA to another
controller's CMB without any special handling, so that's kind of what I was
going for, except this region doesn't need to be tied to any particular
device.

And now that you've got me thinking about it, there needs to be special bit
bucket handling for the local CMB as well.
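
A hypothetical sketch of what that could look like, riffing on
nvme_map_sgl_data(); NVME_SG_DMA is the existing flag distinguishing
qsg from iov mappings, but the scratch buffer is made up:

    if (type == NVME_SGL_DESCR_TYPE_BIT_BUCKET) {
        if (sg->flags & NVME_SG_DMA) {
            /* qsg mapping to guest memory: no host-local buffer can
             * be referenced here, which is the unsolved case from
             * your mail */
            return NVME_DATA_SGL_LEN_INVALID | NVME_DNR;
        }
        /* iov mapping (local CMB) carries host pointers, so a bucket
         * could point at a per-controller scratch buffer
         * (n->scratch is hypothetical) */
        qemu_iovec_add(&sg->iov, n->scratch, trans_len);
    }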

