* [PATCH 1/2] KVM: Implement kvm_copy_guest() to perform copy of guest memory in place
@ 2019-03-06 6:00 Suraj Jitindar Singh
From: Suraj Jitindar Singh @ 2019-03-06 6:00 UTC (permalink / raw)
To: kvm-ppc
Implement the function kvm_copy_guest() to be used to perform a memory
copy from one guest physical address to another, of variable length.
This provides similar functionality to the kvm_read_guest() and
kvm_write_guest() functions, except that both addresses point to guest
memory. The copy is performed in place using raw_copy_in_user() to
avoid having to buffer the data.
The guest memory can reside in different memslots and the copy length
can span memslots.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
---
I suspect additional checking may be required around the raw_copy_in_user()
call.
---
include/linux/kvm_host.h | 1 +
virt/kvm/kvm_main.c | 69 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 70 insertions(+)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c38cc5eb7e73..53b266c04041 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -697,6 +697,7 @@ int kvm_write_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
int kvm_write_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
void *data, unsigned int offset,
unsigned long len);
+int kvm_copy_guest(struct kvm *kvm, gpa_t to, gpa_t from, unsigned long len);
int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
gpa_t gpa, unsigned long len);
int kvm_clear_guest_page(struct kvm *kvm, gfn_t gfn, int offset, int len);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 076bc38963bf..0922d45baf72 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1999,6 +1999,75 @@ int kvm_vcpu_write_guest(struct kvm_vcpu *vcpu, gpa_t gpa, const void *data,
}
EXPORT_SYMBOL_GPL(kvm_vcpu_write_guest);
+static int __kvm_copy_guest_page(struct kvm_memory_slot *to_memslot,
+ gfn_t to_gfn, int to_offset,
+ struct kvm_memory_slot *from_memslot,
+ gfn_t from_gfn, int from_offset, int len)
+{
+ int r;
+ unsigned long to_addr, from_addr;
+
+ to_addr = gfn_to_hva_memslot(to_memslot, to_gfn);
+ if (kvm_is_error_hva(to_addr))
+ return -EFAULT;
+ from_addr = gfn_to_hva_memslot(from_memslot, from_gfn);
+ if (kvm_is_error_hva(from_addr))
+ return -EFAULT;
+ r = raw_copy_in_user((void __user *)to_addr + to_offset,
+ (void __user *)from_addr + from_offset, len);
+ if (r)
+ return -EFAULT;
+ mark_page_dirty_in_slot(to_memslot, to_gfn);
+ return 0;
+}
+
+static int next_segment_many(unsigned long len, int offset1, int offset2)
+{
+ int size = min(PAGE_SIZE - offset1, PAGE_SIZE - offset2);
+
+ if (len > size)
+ return size;
+ else
+ return len;
+}
+
+int kvm_copy_guest(struct kvm *kvm, gpa_t to, gpa_t from, unsigned long len)
+{
+ struct kvm_memory_slot *to_memslot = NULL;
+ struct kvm_memory_slot *from_memslot = NULL;
+ gfn_t to_gfn = to >> PAGE_SHIFT;
+ gfn_t from_gfn = from >> PAGE_SHIFT;
+ int seg;
+ int to_offset = offset_in_page(to);
+ int from_offset = offset_in_page(from);
+ int ret;
+
+ while ((seg = next_segment_many(len, to_offset, from_offset)) != 0) {
+ if (!to_memslot || (to_gfn >= (to_memslot->base_gfn +
+ to_memslot->npages)))
+ to_memslot = gfn_to_memslot(kvm, to_gfn);
+ if (!from_memslot || (from_gfn >= (from_memslot->base_gfn +
+ from_memslot->npages)))
+ from_memslot = gfn_to_memslot(kvm, from_gfn);
+
+ ret = __kvm_copy_guest_page(to_memslot, to_gfn, to_offset,
+ from_memslot, from_gfn, from_offset,
+ seg);
+ if (ret < 0)
+ return ret;
+
+ to_offset = (to_offset + seg) & (PAGE_SIZE - 1);
+ from_offset = (from_offset + seg) & (PAGE_SIZE - 1);
+ len -= seg;
+ if (!to_offset)
+ to_gfn += 1;
+ if (!from_offset)
+ from_gfn += 1;
+ }
+ return 0;
+}
+EXPORT_SYMBOL_GPL(kvm_copy_guest);
+
static int __kvm_gfn_to_hva_cache_init(struct kvm_memslots *slots,
struct gfn_to_hva_cache *ghc,
gpa_t gpa, unsigned long len)
--
2.13.6
* Re: [PATCH 1/2] KVM: Implement kvm_copy_guest() to perform copy of guest memory in place
2019-03-06 6:00 [PATCH 1/2] KVM: Implement kvm_copy_guest() to perform copy of guest memory in place Suraj Jitindar Singh
From: kbuild test robot @ 2019-03-06 14:12 UTC (permalink / raw)
To: kvm-ppc
Hi Suraj,
I love your patch! Yet something to improve:
[auto build test ERROR on kvm/linux-next]
[also build test ERROR on v5.0 next-20190305]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
url: https://github.com/0day-ci/linux/commits/Suraj-Jitindar-Singh/KVM-Implement-kvm_copy_guest-to-perform-copy-of-guest-memory-in-place/20190306-170132
base: https://git.kernel.org/pub/scm/virt/kvm/kvm.git linux-next
config: i386-randconfig-l2-03061123 (attached as .config)
compiler: gcc-5 (Debian 5.5.0-3) 5.4.1 20171010
reproduce:
# save the attached .config to linux build tree
make ARCH=i386
All errors (new ones prefixed by >>):
arch/x86/kvm/../../../virt/kvm/kvm_main.c: In function '__kvm_copy_guest_page':
>> arch/x86/kvm/../../../virt/kvm/kvm_main.c:2018:6: error: implicit declaration of function 'raw_copy_in_user' [-Werror=implicit-function-declaration]
r = raw_copy_in_user((void __user *)to_addr + to_offset,
^
cc1: some warnings being treated as errors
vim +/raw_copy_in_user +2018 arch/x86/kvm/../../../virt/kvm/kvm_main.c
2003
2004 static int __kvm_copy_guest_page(struct kvm_memory_slot *to_memslot,
2005 gfn_t to_gfn, int to_offset,
2006 struct kvm_memory_slot *from_memslot,
2007 gfn_t from_gfn, int from_offset, int len)
2008 {
2009 int r;
2010 unsigned long to_addr, from_addr;
2011
2012 to_addr = gfn_to_hva_memslot(to_memslot, to_gfn);
2013 if (kvm_is_error_hva(to_addr))
2014 return -EFAULT;
2015 from_addr = gfn_to_hva_memslot(from_memslot, from_gfn);
2016 if (kvm_is_error_hva(from_addr))
2017 return -EFAULT;
> 2018 r = raw_copy_in_user((void __user *)to_addr + to_offset,
2019 (void __user *)from_addr + from_offset, len);
2020 if (r)
2021 return -EFAULT;
2022 mark_page_dirty_in_slot(to_memslot, to_gfn);
2023 return 0;
2024 }
2025
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation
* Re: [PATCH 1/2] KVM: Implement kvm_copy_guest() to perform copy of guest memory in place
2019-03-06 6:00 [PATCH 1/2] KVM: Implement kvm_copy_guest() to perform copy of guest memory in place Suraj Jitindar Singh
From: Sean Christopherson @ 2019-03-06 17:21 UTC (permalink / raw)
To: kvm-ppc
On Wed, Mar 06, 2019 at 05:00:15PM +1100, Suraj Jitindar Singh wrote:
> Implement the function kvm_copy_guest() to be used to perform a memory
> copy from one guest physical address to another of a variable length.
>
> This performs similar functionality as the kvm_read_guest() and
> kvm_write_guest() functions, except both addresses point to guest memory.
> This performs a copy in place using raw_copy_in_user() to avoid having to
> buffer the data.
>
> The guest memory can reside in different memslots and the copy length
> can span memslots.
>
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
>
> ---
>
> I suspect additional checking may be required around the raw_copy_in_user()
> call.
>
> ---
> include/linux/kvm_host.h | 1 +
> virt/kvm/kvm_main.c | 69 ++++++++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 70 insertions(+)
>
...
> +static int next_segment_many(unsigned long len, int offset1, int offset2)
> +{
> + int size = min(PAGE_SIZE - offset1, PAGE_SIZE - offset2);
> +
> + if (len > size)
> + return size;
> + else
> + return len;
> +}
> +
> +int kvm_copy_guest(struct kvm *kvm, gpa_t to, gpa_t from, unsigned long len)
> +{
> + struct kvm_memory_slot *to_memslot = NULL;
> + struct kvm_memory_slot *from_memslot = NULL;
> + gfn_t to_gfn = to >> PAGE_SHIFT;
> + gfn_t from_gfn = from >> PAGE_SHIFT;
> + int seg;
> + int to_offset = offset_in_page(to);
> + int from_offset = offset_in_page(from);
> + int ret;
> +
> + while ((seg = next_segment_many(len, to_offset, from_offset)) != 0) {
> + if (!to_memslot || (to_gfn >= (to_memslot->base_gfn +
> + to_memslot->npages)))
> + to_memslot = gfn_to_memslot(kvm, to_gfn);
> + if (!from_memslot || (from_gfn >= (from_memslot->base_gfn +
> + from_memslot->npages)))
> + from_memslot = gfn_to_memslot(kvm, from_gfn);
> +
> + ret = __kvm_copy_guest_page(to_memslot, to_gfn, to_offset,
> + from_memslot, from_gfn, from_offset,
> + seg);
> + if (ret < 0)
> + return ret;
> +
> + to_offset = (to_offset + seg) & (PAGE_SIZE - 1);
> + from_offset = (from_offset + seg) & (PAGE_SIZE - 1);
> + len -= seg;
> + if (!to_offset)
> + to_gfn += 1;
> + if (!from_offset)
> + from_gfn += 1;
> + }
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(kvm_copy_guest);
Is there a need to support spanning multiple pages at this time? Your use
case always accesses exactly a page and requires both dst and src to be
page aligned. I.e. provide just kvm_copy_guest_page() for simplicity.
> +
> static int __kvm_gfn_to_hva_cache_init(struct kvm_memslots *slots,
> struct gfn_to_hva_cache *ghc,
> gpa_t gpa, unsigned long len)
> --
> 2.13.6
>
* Re: [PATCH 1/2] KVM: Implement kvm_copy_guest() to perform copy of guest memory in place
2019-03-06 6:00 [PATCH 1/2] KVM: Implement kvm_copy_guest() to perform copy of guest memory in place Suraj Jitindar Singh
From: kbuild test robot @ 2019-03-06 18:28 UTC (permalink / raw)
To: kvm-ppc
Hi Suraj,
I love your patch! Yet something to improve:
[auto build test ERROR on kvm/linux-next]
[also build test ERROR on v5.0 next-20190306]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
url: https://github.com/0day-ci/linux/commits/Suraj-Jitindar-Singh/KVM-Implement-kvm_copy_guest-to-perform-copy-of-guest-memory-in-place/20190306-170132
base: https://git.kernel.org/pub/scm/virt/kvm/kvm.git linux-next
config: i386-allmodconfig (attached as .config)
compiler: gcc-8 (Debian 8.2.0-21) 8.2.0
reproduce:
# save the attached .config to linux build tree
make ARCH=i386
All errors (new ones prefixed by >>):
arch/x86/kvm/../../../virt/kvm/kvm_main.c: In function '__kvm_copy_guest_page':
>> arch/x86/kvm/../../../virt/kvm/kvm_main.c:2018:6: error: implicit declaration of function 'raw_copy_in_user'; did you mean 'raw_copy_to_user'? [-Werror=implicit-function-declaration]
r = raw_copy_in_user((void __user *)to_addr + to_offset,
^~~~~~~~~~~~~~~~
raw_copy_to_user
cc1: some warnings being treated as errors
vim +2018 arch/x86/kvm/../../../virt/kvm/kvm_main.c
2003
2004 static int __kvm_copy_guest_page(struct kvm_memory_slot *to_memslot,
2005 gfn_t to_gfn, int to_offset,
2006 struct kvm_memory_slot *from_memslot,
2007 gfn_t from_gfn, int from_offset, int len)
2008 {
2009 int r;
2010 unsigned long to_addr, from_addr;
2011
2012 to_addr = gfn_to_hva_memslot(to_memslot, to_gfn);
2013 if (kvm_is_error_hva(to_addr))
2014 return -EFAULT;
2015 from_addr = gfn_to_hva_memslot(from_memslot, from_gfn);
2016 if (kvm_is_error_hva(from_addr))
2017 return -EFAULT;
> 2018 r = raw_copy_in_user((void __user *)to_addr + to_offset,
2019 (void __user *)from_addr + from_offset, len);
2020 if (r)
2021 return -EFAULT;
2022 mark_page_dirty_in_slot(to_memslot, to_gfn);
2023 return 0;
2024 }
2025
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation
* Re: [PATCH 1/2] KVM: Implement kvm_copy_guest() to perform copy of guest memory in place
2019-03-06 6:00 [PATCH 1/2] KVM: Implement kvm_copy_guest() to perform copy of guest memory in place Suraj Jitindar Singh
From: Suraj Jitindar Singh @ 2019-03-06 23:18 UTC (permalink / raw)
To: kvm-ppc
On Wed, 2019-03-06 at 09:21 -0800, Sean Christopherson wrote:
> On Wed, Mar 06, 2019 at 05:00:15PM +1100, Suraj Jitindar Singh wrote:
> > Implement the function kvm_copy_guest() to be used to perform a
> > memory
> > copy from one guest physical address to another of a variable
> > length.
> >
> > This performs similar functionality as the kvm_read_guest() and
> > kvm_write_guest() functions, except both addresses point to guest
> > memory.
> > This performs a copy in place using raw_copy_in_user() to avoid
> > having to
> > buffer the data.
> >
> > The guest memory can reside in different memslots and the copy
> > length
> > can span memslots.
> >
> > Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> >
> > ---
> >
> > I suspect additional checking may be required around the
> > raw_copy_in_user()
> > call.
> >
> > ---
> > include/linux/kvm_host.h | 1 +
> > virt/kvm/kvm_main.c | 69
> > ++++++++++++++++++++++++++++++++++++++++++++++++
> > 2 files changed, 70 insertions(+)
> >
>
> ...
>
> > +static int next_segment_many(unsigned long len, int offset1, int
> > offset2)
> > +{
> > + int size = min(PAGE_SIZE - offset1, PAGE_SIZE - offset2);
> > +
> > + if (len > size)
> > + return size;
> > + else
> > + return len;
> > +}
> > +
> > +int kvm_copy_guest(struct kvm *kvm, gpa_t to, gpa_t from, unsigned
> > long len)
> > +{
> > + struct kvm_memory_slot *to_memslot = NULL;
> > + struct kvm_memory_slot *from_memslot = NULL;
> > + gfn_t to_gfn = to >> PAGE_SHIFT;
> > + gfn_t from_gfn = from >> PAGE_SHIFT;
> > + int seg;
> > + int to_offset = offset_in_page(to);
> > + int from_offset = offset_in_page(from);
> > + int ret;
> > +
> > + while ((seg = next_segment_many(len, to_offset,
> > from_offset)) != 0) {
> > + if (!to_memslot || (to_gfn >= (to_memslot-
> > >base_gfn +
> > + to_memslot-
> > >npages)))
> > + to_memslot = gfn_to_memslot(kvm, to_gfn);
> > + if (!from_memslot || (from_gfn >= (from_memslot-
> > >base_gfn +
> > + from_memslot-
> > >npages)))
> > + from_memslot = gfn_to_memslot(kvm,
> > from_gfn);
> > +
> > + ret = __kvm_copy_guest_page(to_memslot, to_gfn,
> > to_offset,
> > + from_memslot,
> > from_gfn, from_offset,
> > + seg);
> > + if (ret < 0)
> > + return ret;
> > +
> > + to_offset = (to_offset + seg) & (PAGE_SIZE - 1);
> > + from_offset = (from_offset + seg) & (PAGE_SIZE -
> > 1);
> > + len -= seg;
> > + if (!to_offset)
> > + to_gfn += 1;
> > + if (!from_offset)
> > + from_gfn += 1;
> > + }
> > + return 0;
> > +}
> > +EXPORT_SYMBOL_GPL(kvm_copy_guest);
>
> Is there a need to support spanning multiple pages at this
> time? Your use
> case always accesses exactly a page and requires both dst and src to
> be
> page aligned. I.e. provide just kvm_copy_guest_page() for
> simplicity.
There is no need for multiple pages at this stage. However, I wanted to
match kvm_[read/write]_guest(), which allow multiple pages, and there
didn't seem to be any reason not to implement it this way from the
start.
>
> > +
> > static int __kvm_gfn_to_hva_cache_init(struct kvm_memslots *slots,
> > struct gfn_to_hva_cache
> > *ghc,
> > gpa_t gpa, unsigned long
> > len)
> > --
> > 2.13.6
> >
* Re: [PATCH 1/2] KVM: Implement kvm_copy_guest() to perform copy of guest memory in place
2019-03-06 6:00 [PATCH 1/2] KVM: Implement kvm_copy_guest() to perform copy of guest memory in place Suraj Jitindar Singh
From: Sean Christopherson @ 2019-03-21 14:34 UTC (permalink / raw)
To: kvm-ppc
On Thu, Mar 07, 2019 at 10:18:05AM +1100, Suraj Jitindar Singh wrote:
> On Wed, 2019-03-06 at 09:21 -0800, Sean Christopherson wrote:
> > On Wed, Mar 06, 2019 at 05:00:15PM +1100, Suraj Jitindar Singh wrote:
> > > Implement the function kvm_copy_guest() to be used to perform a
> > > memory
> > > copy from one guest physical address to another of a variable
> > > length.
> > >
> > > This performs similar functionality as the kvm_read_guest() and
> > > kvm_write_guest() functions, except both addresses point to guest
> > > memory.
> > > This performs a copy in place using raw_copy_in_user() to avoid
> > > having to
> > > buffer the data.
> > >
> > > The guest memory can reside in different memslots and the copy
> > > length
> > > can span memslots.
> > >
> > > Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> > >
> > > ---
> > >
> > > I suspect additional checking may be required around the
> > > raw_copy_in_user()
> > > call.
> > >
> > > ---
> > > include/linux/kvm_host.h | 1 +
> > > virt/kvm/kvm_main.c | 69
> > > ++++++++++++++++++++++++++++++++++++++++++++++++
> > > 2 files changed, 70 insertions(+)
> > >
> >
> > ...
> >
> > > +static int next_segment_many(unsigned long len, int offset1, int
> > > offset2)
> > > +{
> > > + int size = min(PAGE_SIZE - offset1, PAGE_SIZE - offset2);
> > > +
> > > + if (len > size)
> > > + return size;
> > > + else
> > > + return len;
> > > +}
> > > +
> > > +int kvm_copy_guest(struct kvm *kvm, gpa_t to, gpa_t from, unsigned
> > > long len)
> > > +{
> > > + struct kvm_memory_slot *to_memslot = NULL;
> > > + struct kvm_memory_slot *from_memslot = NULL;
> > > + gfn_t to_gfn = to >> PAGE_SHIFT;
> > > + gfn_t from_gfn = from >> PAGE_SHIFT;
> > > + int seg;
> > > + int to_offset = offset_in_page(to);
> > > + int from_offset = offset_in_page(from);
> > > + int ret;
> > > +
> > > + while ((seg = next_segment_many(len, to_offset,
> > > from_offset)) != 0) {
> > > + if (!to_memslot || (to_gfn >= (to_memslot-
> > > >base_gfn +
> > > + to_memslot-
> > > >npages)))
> > > + to_memslot = gfn_to_memslot(kvm, to_gfn);
> > > + if (!from_memslot || (from_gfn >= (from_memslot-
> > > >base_gfn +
> > > + from_memslot-
> > > >npages)))
> > > + from_memslot = gfn_to_memslot(kvm,
> > > from_gfn);
> > > +
> > > + ret = __kvm_copy_guest_page(to_memslot, to_gfn,
> > > to_offset,
> > > + from_memslot,
> > > from_gfn, from_offset,
> > > + seg);
> > > + if (ret < 0)
> > > + return ret;
> > > +
> > > + to_offset = (to_offset + seg) & (PAGE_SIZE - 1);
> > > + from_offset = (from_offset + seg) & (PAGE_SIZE -
> > > 1);
> > > + len -= seg;
> > > + if (!to_offset)
> > > + to_gfn += 1;
> > > + if (!from_offset)
> > > + from_gfn += 1;
> > > + }
> > > + return 0;
> > > +}
> > > +EXPORT_SYMBOL_GPL(kvm_copy_guest);
> >
> > Is there a need to support spanning multiple pages at this
> > time? Your use
> > case always accesses exactly a page and requires both dst and src to
> > be
> > page aligned. I.e. provide just kvm_copy_guest_page() for
> > simplicity.
>
> There is no need for multiple pages at this stage. However I wanted to
> match kvm_[read/write]_guest which allow multiple pages and there
> didn't seem any reason to not just implement it this way from the
> start.
From a correctness standpoint, the multi-page version is quite difficult
to comprehend and review, e.g. no matter how long I stare at this code
I doubt I'll ever be 100% certain that it correctly handles every corner
case (not saying it doesn't :-) ).
* Re: [PATCH 1/2] KVM: Implement kvm_copy_guest() to perform copy of guest memory in place
2019-03-06 6:00 [PATCH 1/2] KVM: Implement kvm_copy_guest() to perform copy of guest memory in place Suraj Jitindar Singh
From: Suraj Jitindar Singh @ 2019-03-22 2:33 UTC (permalink / raw)
To: kvm-ppc
On Thu, 2019-03-21 at 07:34 -0700, Sean Christopherson wrote:
> On Thu, Mar 07, 2019 at 10:18:05AM +1100, Suraj Jitindar Singh wrote:
> > On Wed, 2019-03-06 at 09:21 -0800, Sean Christopherson wrote:
> > > On Wed, Mar 06, 2019 at 05:00:15PM +1100, Suraj Jitindar Singh
> > > wrote:
> > > > Implement the function kvm_copy_guest() to be used to perform a
> > > > memory
> > > > copy from one guest physical address to another of a variable
> > > > length.
> > > >
> > > > This performs similar functionality as the kvm_read_guest() and
> > > > kvm_write_guest() functions, except both addresses point to
> > > > guest
> > > > memory.
> > > > This performs a copy in place using raw_copy_in_user() to avoid
> > > > having to
> > > > buffer the data.
> > > >
> > > > The guest memory can reside in different memslots and the copy
> > > > length
> > > > can span memslots.
> > > >
> > > > Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> > > >
> > > > ---
> > > >
> > > > I suspect additional checking may be required around the
> > > > raw_copy_in_user()
> > > > call.
> > > >
> > > > ---
> > > > include/linux/kvm_host.h | 1 +
> > > > virt/kvm/kvm_main.c | 69
> > > > ++++++++++++++++++++++++++++++++++++++++++++++++
> > > > 2 files changed, 70 insertions(+)
> > > >
> > >
> > > ...
> > >
> > > > +static int next_segment_many(unsigned long len, int offset1,
> > > > int
> > > > offset2)
> > > > +{
> > > > + int size = min(PAGE_SIZE - offset1, PAGE_SIZE -
> > > > offset2);
> > > > +
> > > > + if (len > size)
> > > > + return size;
> > > > + else
> > > > + return len;
> > > > +}
> > > > +
> > > > +int kvm_copy_guest(struct kvm *kvm, gpa_t to, gpa_t from,
> > > > unsigned
> > > > long len)
> > > > +{
> > > > + struct kvm_memory_slot *to_memslot = NULL;
> > > > + struct kvm_memory_slot *from_memslot = NULL;
> > > > + gfn_t to_gfn = to >> PAGE_SHIFT;
> > > > + gfn_t from_gfn = from >> PAGE_SHIFT;
> > > > + int seg;
> > > > + int to_offset = offset_in_page(to);
> > > > + int from_offset = offset_in_page(from);
> > > > + int ret;
> > > > +
> > > > + while ((seg = next_segment_many(len, to_offset,
> > > > from_offset)) != 0) {
> > > > +		if (!to_memslot || (to_gfn >= (to_memslot->base_gfn +
> > > > +					       to_memslot->npages)))
> > > > +			to_memslot = gfn_to_memslot(kvm, to_gfn);
> > > > +		if (!from_memslot || (from_gfn >= (from_memslot->base_gfn +
> > > > +						   from_memslot->npages)))
> > > > +			from_memslot = gfn_to_memslot(kvm, from_gfn);
> > > > +
> > > > + ret = __kvm_copy_guest_page(to_memslot,
> > > > to_gfn,
> > > > to_offset,
> > > > + from_memslot,
> > > > from_gfn, from_offset,
> > > > + seg);
> > > > + if (ret < 0)
> > > > + return ret;
> > > > +
> > > > + to_offset = (to_offset + seg) & (PAGE_SIZE -
> > > > 1);
> > > > + from_offset = (from_offset + seg) & (PAGE_SIZE
> > > > -
> > > > 1);
> > > > + len -= seg;
> > > > + if (!to_offset)
> > > > + to_gfn += 1;
> > > > + if (!from_offset)
> > > > + from_gfn += 1;
> > > > + }
> > > > + return 0;
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(kvm_copy_guest);
> > >
> > > Is there a need to support spanning multiple pages at this
> > > time? Your use
> > > case always accesses exactly a page and requires both dst and src
> > > to
> > > be
> > > page aligned. I.e. provide just kvm_copy_guest_page() for
> > > simplicity.
> >
> > There is no need for multiple pages at this stage. However I wanted
> > to
> > match kvm_[read/write]_guest which allow multiple pages and there
> > didn't seem any reason to not just implement it this way from the
> > start.
>
> From a correctness standpoint, the multi-page version is quite
> difficult
> to comprehend and review, e.g. no matter how long I stare at this
> code
> I doubt I'll ever be 100% certain that it correctly handles every
> corner
> case (not saying it doesn't :-) ).
Sure, I'll drop this patch.
Thread overview: 7+ messages
2019-03-06 6:00 [PATCH 1/2] KVM: Implement kvm_copy_guest() to perform copy of guest memory in place Suraj Jitindar Singh
2019-03-06 14:12 ` kbuild test robot
2019-03-06 17:21 ` Sean Christopherson
2019-03-06 18:28 ` kbuild test robot
2019-03-06 23:18 ` Suraj Jitindar Singh
2019-03-21 14:34 ` Sean Christopherson
2019-03-22 2:33 ` Suraj Jitindar Singh