From: Kirti Wankhede <kwankhede@nvidia.com>
To: Alex Williamson <alex.williamson@redhat.com>
Cc: <cjia@nvidia.com>, <kevin.tian@intel.com>, <ziye.yang@intel.com>,
<changpeng.liu@intel.com>, <yi.l.liu@intel.com>,
<mlevitsk@redhat.com>, <eskultet@redhat.com>, <cohuck@redhat.com>,
<dgilbert@redhat.com>, <jonathan.davies@nutanix.com>,
<eauger@redhat.com>, <aik@ozlabs.ru>, <pasic@linux.ibm.com>,
<felipe@nutanix.com>, <Zhengxiao.zx@Alibaba-inc.com>,
<shuangtai.tst@alibaba-inc.com>, <Ken.Xue@amd.com>,
<zhi.a.wang@intel.com>, <yan.y.zhao@intel.com>,
<qemu-devel@nongnu.org>, <kvm@vger.kernel.org>
Subject: Re: [PATCH Kernel v22 5/8] vfio iommu: Implementation of ioctl for dirty pages tracking
Date: Tue, 19 May 2020 12:41:32 +0530
Message-ID: <17511c50-dc59-d9e9-10b6-54b16dec01c4@nvidia.com>
In-Reply-To: <20200518155342.4dd7df99@x1.home>
On 5/19/2020 3:23 AM, Alex Williamson wrote:
> On Mon, 18 May 2020 11:26:34 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
>
>> VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
>> - Start dirty pages tracking while migration is active
>> - Stop dirty pages tracking.
>> - Get dirty pages bitmap. It is the user space application's
>> responsibility to copy the content of dirty pages from source to
>> destination during migration.
>>
>> To prevent a DoS attack, memory for the bitmap is allocated per vfio_dma
>> structure. The bitmap size is calculated considering the smallest
>> supported page size. Bitmaps are allocated for all vfio_dmas when dirty
>> logging is enabled.
>>
>> The bitmap is populated for already pinned pages when it is allocated for
>> a vfio_dma, using the smallest supported page size. The bitmap is updated
>> from the pinning functions when tracking is enabled. When the user
>> application queries the bitmap, check whether the requested page size is
>> the same as the page size used to populate the bitmap. If it is equal,
>> copy the bitmap; if not, return an error.
>>
>> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
>> Reviewed-by: Neo Jia <cjia@nvidia.com>
>>
>> Fixed an error reported by the kbuild test robot by changing the pgsize
>> type from uint64_t to size_t.
>> Reported-by: kbuild test robot <lkp@intel.com>
>> ---
>> drivers/vfio/vfio_iommu_type1.c | 313 +++++++++++++++++++++++++++++++++++++++-
>> 1 file changed, 307 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>> index de17787ffece..bf740fef196f 100644
>> --- a/drivers/vfio/vfio_iommu_type1.c
>> +++ b/drivers/vfio/vfio_iommu_type1.c
>> @@ -72,6 +72,7 @@ struct vfio_iommu {
>> uint64_t pgsize_bitmap;
>> bool v2;
>> bool nesting;
>> + bool dirty_page_tracking;
>> };
>>
>> struct vfio_domain {
>> @@ -92,6 +93,7 @@ struct vfio_dma {
>> bool lock_cap; /* capable(CAP_IPC_LOCK) */
>> struct task_struct *task;
>> struct rb_root pfn_list; /* Ex-user pinned pfn list */
>> + unsigned long *bitmap;
>> };
>>
>> struct vfio_group {
>> @@ -126,6 +128,19 @@ struct vfio_regions {
>> #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu) \
>> (!list_empty(&iommu->domain_list))
>>
>> +#define DIRTY_BITMAP_BYTES(n) (ALIGN(n, BITS_PER_TYPE(u64)) / BITS_PER_BYTE)
>> +
>> +/*
>> + * The number-of-bits argument to bitmap_set() is an unsigned integer,
>> + * which is cast to a signed integer by the unaligned multi-bit helper,
>> + * __bitmap_set().
>> + * The maximum supported bitmap size is therefore 2^31 bits, i.e.
>> + * 2^31 / 2^3 = 2^28 bytes (256 MB), which tracks 2^31 * 2^12 = 2^43
>> + * bytes (8 TB) on a 4K page system.
>> + */
>> +#define DIRTY_BITMAP_PAGES_MAX ((u64)INT_MAX)
>> +#define DIRTY_BITMAP_SIZE_MAX DIRTY_BITMAP_BYTES(DIRTY_BITMAP_PAGES_MAX)
>> +
>> static int put_pfn(unsigned long pfn, int prot);
>>
>> /*
>> @@ -176,6 +191,74 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
>> rb_erase(&old->node, &iommu->dma_list);
>> }
>>
>> +
>> +static int vfio_dma_bitmap_alloc(struct vfio_dma *dma, size_t pgsize)
>> +{
>> + uint64_t npages = dma->size / pgsize;
>> +
>> + if (npages > DIRTY_BITMAP_PAGES_MAX)
>> + return -EINVAL;
>> +
>> + dma->bitmap = kvzalloc(DIRTY_BITMAP_BYTES(npages), GFP_KERNEL);
>
> Curious that the extra 8-bytes are added in the next patch, but they're
> just as necessary here.
>
Yes, moving it into this patch.
While resolving this, I had to update patches 6/8 and 8/8 as well, so
I'm updating 3 patches.
> We also have the explanation above about why we have the signed int
> size limitation, but we sort of ignore that when adding the bytes here.
> That limitation is derived from __bitmap_set(), whereas we only need
> these extra bits for bitmap_shift_left(), where I can't spot a signed
> int limitation. Do you come to the same conclusion?
That's right.
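
For reference, the relevant signatures as I read them (paraphrased from
include/linux/bitmap.h and lib/bitmap.c, so worth double-checking
against the tree):

	/* len is a signed int, hence the INT_MAX cap on the bitmap size */
	void __bitmap_set(unsigned long *map, unsigned int start, int len);

	/* nbits is unsigned here, so no signed-int limitation */
	void bitmap_shift_left(unsigned long *dst, const unsigned long *src,
			       unsigned int shift, unsigned int nbits);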
> Maybe worth a
> comment why we think we can exceed DIRTY_BITMAP_PAGES_MAX for that
> extra padding.
>
Ok, adding a comment.
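Something along these lines, as a sketch only (this folds in the extra
u64 that patch 6/8 adds, per the discussion above; the exact comment
wording is open to suggestions):

	static int vfio_dma_bitmap_alloc(struct vfio_dma *dma, size_t pgsize)
	{
		uint64_t npages = dma->size / pgsize;

		if (npages > DIRTY_BITMAP_PAGES_MAX)
			return -EINVAL;

		/*
		 * Allocate one extra u64 as scratch space for
		 * bitmap_shift_left() when copying out an unaligned
		 * range. This may exceed DIRTY_BITMAP_SIZE_MAX, which
		 * is safe: that limit comes from the signed int nbits
		 * of __bitmap_set(), while bitmap_shift_left() takes
		 * an unsigned nbits.
		 */
		dma->bitmap = kvzalloc(DIRTY_BITMAP_BYTES(npages) + sizeof(u64),
				       GFP_KERNEL);
		if (!dma->bitmap)
			return -ENOMEM;

		return 0;
	}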
>> + if (!dma->bitmap)
>> + return -ENOMEM;
>> +
>> + return 0;
>> +}
>> +
>> +static void vfio_dma_bitmap_free(struct vfio_dma *dma)
>> +{
>> +	kvfree(dma->bitmap);	/* allocated with kvzalloc() */
>> + dma->bitmap = NULL;
>> +}
>> +
>> +static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
>> +{
>> + struct rb_node *p;
>> +
>> + for (p = rb_first(&dma->pfn_list); p; p = rb_next(p)) {
>> + struct vfio_pfn *vpfn = rb_entry(p, struct vfio_pfn, node);
>> +
>> + bitmap_set(dma->bitmap, (vpfn->iova - dma->iova) / pgsize, 1);
>> + }
>> +}
>> +
>> +static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu, size_t pgsize)
>> +{
>> + struct rb_node *n = rb_first(&iommu->dma_list);
>> +
>> + for (; n; n = rb_next(n)) {
>
> Nit, the previous function above sets the initial value in the for()
> statement, it looks like it would fit in 80 columns here too. We have
> examples either way in the code, so not a must fix.
>
>> + struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
>> + int ret;
>> +
>> + ret = vfio_dma_bitmap_alloc(dma, pgsize);
>> + if (ret) {
>> + struct rb_node *p = rb_prev(n);
>> +
>> + for (; p; p = rb_prev(p)) {
>
> Same.
>
>> +			struct vfio_dma *dma = rb_entry(p,
>> +					struct vfio_dma, node);
>> +
>> + vfio_dma_bitmap_free(dma);
>> + }
>> + return ret;
>> + }
>> + vfio_dma_populate_bitmap(dma, pgsize);
>> + }
>> + return 0;
>> +}
>> +
>> +static void vfio_dma_bitmap_free_all(struct vfio_iommu *iommu)
>> +{
>> + struct rb_node *n = rb_first(&iommu->dma_list);
>> +
>> + for (; n; n = rb_next(n)) {
>
> And another.
>
>> + struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
>> +
>> + vfio_dma_bitmap_free(dma);
>> + }
>> +}
>> +
>> /*
>> * Helper Functions for host iova-pfn list
>> */
>> @@ -568,6 +651,17 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
>> vfio_unpin_page_external(dma, iova, do_accounting);
>> goto pin_unwind;
>> }
>> +
>> + if (iommu->dirty_page_tracking) {
>> + unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
>> +
>> +			/*
>> +			 * The bitmap is populated at the smallest
>> +			 * supported page size granularity.
>> +			 */
>> + bitmap_set(dma->bitmap,
>> + (iova - dma->iova) >> pgshift, 1);
>> + }
>> }
>>
>> ret = i;
>> @@ -802,6 +896,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
>> vfio_unmap_unpin(iommu, dma, true);
>> vfio_unlink_dma(iommu, dma);
>> put_task_struct(dma->task);
>> + vfio_dma_bitmap_free(dma);
>> kfree(dma);
>> iommu->dma_avail++;
>> }
>> @@ -829,6 +924,99 @@ static void vfio_pgsize_bitmap(struct vfio_iommu *iommu)
>> }
>> }
>>
>> +static int update_user_bitmap(u64 __user *bitmap, struct vfio_dma *dma,
>> + dma_addr_t base_iova, size_t pgsize)
>> +{
>> + unsigned long pgshift = __ffs(pgsize);
>> + unsigned long nbits = dma->size >> pgshift;
>> + unsigned long bit_offset = (dma->iova - base_iova) >> pgshift;
>> + unsigned long copy_offset = bit_offset / BITS_PER_LONG;
>> + unsigned long shift = bit_offset % BITS_PER_LONG;
>> + unsigned long leftover;
>> +
>> + /* mark all pages dirty if all pages are pinned and mapped. */
>> + if (dma->iommu_mapped)
>> + bitmap_set(dma->bitmap, 0, dma->size >> pgshift);
>
> We already calculated 'dma->size >> pgshift' as nbits above, we should
> use nbits here. I imagine the compiler will optimize this, so take it
> as a nit.
>
>> +
>> + if (shift) {
>> + bitmap_shift_left(dma->bitmap, dma->bitmap, shift,
>> + nbits + shift);
>> +
>> + if (copy_from_user(&leftover, (u64 *)bitmap + copy_offset,
>> + sizeof(leftover)))
>> + return -EFAULT;
>> +
>> + bitmap_or(dma->bitmap, dma->bitmap, &leftover, shift);
>> + }
>> +
>> + if (copy_to_user((u64 *)bitmap + copy_offset, dma->bitmap,
>> + DIRTY_BITMAP_BYTES(nbits + shift)))
>> + return -EFAULT;
>> +
>> + return 0;
>> +}
>> +
>> +static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
>> + dma_addr_t iova, size_t size, size_t pgsize)
>> +{
>> + struct vfio_dma *dma;
>> + struct rb_node *n;
>> + unsigned long pgshift = __ffs(pgsize);
>> + int ret;
>> +
>> + /*
>> + * GET_BITMAP request must fully cover vfio_dma mappings. Multiple
>> + * vfio_dma mappings may be combined by specifying large ranges, but
>> + * there must not be any previous mappings bisected by the range.
>> + * An error will be returned if these conditions are not met.
>> + */
>> + dma = vfio_find_dma(iommu, iova, 1);
>> + if (dma && dma->iova != iova)
>> + return -EINVAL;
>> +
>> + dma = vfio_find_dma(iommu, iova + size - 1, 0);
>> + if (dma && dma->iova + dma->size != iova + size)
>> + return -EINVAL;
>> +
>> + for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
>> + struct vfio_dma *ldma = rb_entry(n, struct vfio_dma, node);
>> +
>> + if (ldma->iova >= iova)
>> + break;
>> + }
>> +
>> + dma = n ? rb_entry(n, struct vfio_dma, node) : NULL;
>> +
>> + while (dma && (dma->iova >= iova) &&
>
> 'dma->iova >= iova' is necessarily true per the above loop, right?
> We'd have NULL if we never reach an iova within range.
>
>> + (dma->iova + dma->size <= iova + size)) {
>
> I think 'dma->iova < iova + size' is sufficient here, we've already
> tested that there are no dmas overlapping the ends, they're all either
> fully contained or fully outside.
>
>> +
>
> The double loop here is a little unnecessary, we could combine them
> into:
>
> for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
> struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
>
> if (dma->iova < iova)
> continue;
>
> if (dma->iova > iova + size)
> break;
>
> ret = update_user_bitmap(bitmap, dma, iova, pgsize);
> if (ret)
> return ret;
>
> /*
> * Re-populate bitmap to include all pinned pages which are
> * considered as dirty but exclude pages which are unpinned and
> * pages which are marked dirty by vfio_dma_rw()
> */
> bitmap_clear(dma->bitmap, 0, dma->size >> pgshift);
> vfio_dma_populate_bitmap(dma, pgsize);
> }
>
> I think what you have works, but it's a little more complicated than it
> needs to be. Thanks,
>
Ok. Changing it.
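
For completeness, the expected userspace flow, as a rough sketch
(struct and flag names recalled from the UAPI in patch 4/8, so please
check against that patch; container_fd, iova, size and bitmap_buf are
placeholders; untested, error handling omitted):

	/* needs <linux/vfio.h> and <sys/ioctl.h> */

	/* start dirty page tracking when migration starts */
	struct vfio_iommu_type1_dirty_bitmap dirty = {
		.argsz = sizeof(dirty),
		.flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_START,
	};
	ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, &dirty);

	/* in each iteration, read the dirty bitmap for [iova, iova + size) */
	struct {
		struct vfio_iommu_type1_dirty_bitmap header;
		struct vfio_iommu_type1_dirty_bitmap_get get;
	} range = {
		.header.argsz = sizeof(range),
		.header.flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP,
		.get.iova = iova,
		.get.size = size,
		/* must be the smallest supported page size, per this patch */
		.get.bitmap.pgsize = 4096,
		/* one bit per page, rounded up to a multiple of u64 */
		.get.bitmap.size = ((size / 4096 + 63) / 64) * 8,
		.get.bitmap.data = bitmap_buf,
	};
	ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, &range);

	/* stop tracking once the final device state is saved */
	dirty.flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP;
	ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, &dirty);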
Thanks,
Kirti