From: weiqi <weiqi4@huawei.com>
To: <alexander.h.duyck@linux.intel.com>, <alex.williamson@redhat.com>
Cc: <kvm@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
<pbonzini@redhat.com>, <x86@kernel.org>,
wei qi <weiqi4@huawei.com>
Subject: [PATCH 0/2] page hinting add passthrough support
Date: Tue, 7 Jan 2020 22:46:37 +0800
Message-ID: <1578408399-20092-1-git-send-email-weiqi4@huawei.com>
From: wei qi <weiqi4@huawei.com>
I just implemented dynamically updating the IOMMU tables to support pass-through,
and it seems to work fine.
Test:
Start a 4G VM backed by 2M hugetlb pages, with an ixgbevf VF passed through.
GuestOS: linux-5.2.6 + (mm / virtio: Provide support for free page reporting)
HostOS: 5.5-rc4
Host: Intel(R) Xeon(R) Gold 6161 CPU @ 2.20GHz
With page hinting enabled, pages freed in the guest are also freed on the host.
Before starting the VM:
# cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages
5620
5620
After starting the VM:
# numastat -c qemu
Per-node process memory usage (in MBs)
PID               Node 0 Node 1 Total
---------------   ------ ------ -----
24463 (qemu_hotr       6      6    12
24479 (qemu_tls_       0      8     8
70718 (qemu-syst      58    539   597
---------------   ------ ------ -----
Total                 64    553   616
# cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages
5595
5366
The modification in QEMU:
+int kvm_discard_range(struct kvm_discard_msg discard_msg)
+{
+    return kvm_vm_ioctl(kvm_state, KVM_DISCARD_RANGE, &discard_msg);
+}

 static void virtio_balloon_handle_report(VirtIODevice *vdev, VirtQueue *vq)
 {
 ..................
+        discard_msg.in_addr = elem->in_addr[i];
+        discard_msg.iov_len = elem->in_sg[i].iov_len;
         ram_block_discard_range(rb, ram_offset, size);
+        kvm_discard_range(discard_msg);
I then tested network bandwidth further; performance seems OK.
Is there any hidden problem in this implementation?
And is there a plan to support pass-through?
wei qi (2):
vfio: add mmap/munmap API for page hinting
KVM: add support for page hinting
arch/x86/kvm/mmu/mmu.c | 79 ++++++++++++++++++++
arch/x86/kvm/x86.c | 96 ++++++++++++++++++++++++
drivers/vfio/vfio.c | 109 ++++++++++++++++++++++++++++
drivers/vfio/vfio_iommu_type1.c | 157 +++++++++++++++++++++++++++++++++++++++-
include/linux/kvm_host.h | 41 +++++++++++
include/linux/vfio.h | 17 ++++-
include/uapi/linux/kvm.h | 7 ++
virt/kvm/vfio.c | 11 ---
8 files changed, 503 insertions(+), 14 deletions(-)
--
1.8.3.1
Thread overview: 8+ messages
2020-01-07 14:46 weiqi [this message]
2020-01-07 14:46 ` [PATCH 1/2] vfio: add mmap/munmap API for page hinting weiqi
2020-01-07 15:22 ` Alex Williamson
2020-01-10 18:10 ` kbuild test robot
2020-01-10 18:10 ` [RFC PATCH] vfio: vfio_iommu_iova_to_phys() can be static kbuild test robot
2020-01-07 14:46 ` [PATCH 2/2] KVM: add support for page hinting weiqi
2020-02-18 11:45 ` kbuild test robot
2020-01-07 16:37 ` [PATCH 0/2] page hinting add passthrough support Alexander Duyck