From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: John Hubbard <jhubbard@nvidia.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>,
Dave Hansen <dave.hansen@linux.intel.com>,
Andy Lutomirski <luto@kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
Paolo Bonzini <pbonzini@redhat.com>,
Sean Christopherson <sean.j.christopherson@intel.com>,
Vitaly Kuznetsov <vkuznets@redhat.com>,
Wanpeng Li <wanpengli@tencent.com>,
Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
David Rientjes <rientjes@google.com>,
Andrea Arcangeli <aarcange@redhat.com>,
Kees Cook <keescook@chromium.org>, Will Drewry <wad@chromium.org>,
"Edgecombe, Rick P" <rick.p.edgecombe@intel.com>,
"Kleen, Andi" <andi.kleen@intel.com>,
Liran Alon <liran.alon@oracle.com>,
Mike Rapoport <rppt@kernel.org>,
x86@kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [RFCv2 08/16] KVM: Use GUP instead of copy_from/to_user() to access guest memory
Date: Tue, 20 Oct 2020 15:51:55 +0300 [thread overview]
Message-ID: <20201020125155.7hubssbqhbm2dypj@black.fi.intel.com> (raw)
In-Reply-To: <c8b0405f-14ed-a1bb-3a91-586a30bdf39b@nvidia.com>
On Tue, Oct 20, 2020 at 01:25:59AM -0700, John Hubbard wrote:
> On 10/19/20 11:18 PM, Kirill A. Shutemov wrote:
> > New helpers copy_from_guest()/copy_to_guest() to be used if KVM memory
> > protection feature is enabled.
> >
> > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> > ---
> >  include/linux/kvm_host.h |  4 ++
> >  virt/kvm/kvm_main.c      | 90 +++++++++++++++++++++++++++++++---------
> >  2 files changed, 75 insertions(+), 19 deletions(-)
> >
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index 05e3c2fb3ef7..380a64613880 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -504,6 +504,7 @@ struct kvm {
> >  	struct srcu_struct irq_srcu;
> >  	pid_t userspace_pid;
> >  	unsigned int max_halt_poll_ns;
> > +	bool mem_protected;
> >  };
> >
> >  #define kvm_err(fmt, ...) \
> > @@ -728,6 +729,9 @@ void kvm_set_pfn_dirty(kvm_pfn_t pfn);
> >  void kvm_set_pfn_accessed(kvm_pfn_t pfn);
> >  void kvm_get_pfn(kvm_pfn_t pfn);
> >
> > +int copy_from_guest(void *data, unsigned long hva, int len, bool protected);
> > +int copy_to_guest(unsigned long hva, const void *data, int len, bool protected);
> > +
> >  void kvm_release_pfn(kvm_pfn_t pfn, bool dirty, struct gfn_to_pfn_cache *cache);
> >  int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset,
> >  			int len);
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index cf88233b819a..a9884cb8c867 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -2313,19 +2313,70 @@ static int next_segment(unsigned long len, int offset)
> >  		return len;
> >  }
> >
> > +int copy_from_guest(void *data, unsigned long hva, int len, bool protected)
> > +{
> > +	int offset = offset_in_page(hva);
> > +	struct page *page;
> > +	int npages, seg;
> > +
> > +	if (!protected)
> > +		return __copy_from_user(data, (void __user *)hva, len);
> > +
> > +	might_fault();
> > +	kasan_check_write(data, len);
> > +	check_object_size(data, len, false);
> > +
> > +	while ((seg = next_segment(len, offset)) != 0) {
> > +		npages = get_user_pages_unlocked(hva, 1, &page, 0);
> > +		if (npages != 1)
> > +			return -EFAULT;
> > +		memcpy(data, page_address(page) + offset, seg);
>
> Hi Kirill!
>
> OK, so the copy_from_guest() is a read-only case for gup, which I think is safe
> from a gup/pup + filesystem point of view, but see below about copy_to_guest()...
>
> > +		put_page(page);
> > +		len -= seg;
> > +		hva += seg;
> > +		offset = 0;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +int copy_to_guest(unsigned long hva, const void *data, int len, bool protected)
> > +{
> > +	int offset = offset_in_page(hva);
> > +	struct page *page;
> > +	int npages, seg;
> > +
> > +	if (!protected)
> > +		return __copy_to_user((void __user *)hva, data, len);
> > +
> > +	might_fault();
> > +	kasan_check_read(data, len);
> > +	check_object_size(data, len, true);
> > +
> > +	while ((seg = next_segment(len, offset)) != 0) {
> > +		npages = get_user_pages_unlocked(hva, 1, &page, FOLL_WRITE);
>
>
> Should copy_to_guest() use pin_user_pages_unlocked() instead of gup_unlocked?
> We wrote a "Case 5" in Documentation/core-api/pin_user_pages.rst, just for this
> situation, I think:
>
>
> CASE 5: Pinning in order to write to the data within the page
> -------------------------------------------------------------
> Even though neither DMA nor Direct IO is involved, just a simple case of "pin,
> write to a page's data, unpin" can cause a problem. Case 5 may be considered a
> superset of Case 1, plus Case 2, plus anything that invokes that pattern. In
> other words, if the code is neither Case 1 nor Case 2, it may still require
> FOLL_PIN, for patterns like this:
>
> Correct (uses FOLL_PIN calls):
>     pin_user_pages()
>     write to the data within the pages
>     unpin_user_pages()
Right. I haven't internalized the changes in the GUP interface yet. Will update.
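
Something along these lines, I guess? An untested sketch of the write side
using the pin/unpin pattern; unpin_user_pages_dirty_lock() for the dirty
handling is my guess at what Case 5 asks for here:

int copy_to_guest(unsigned long hva, const void *data, int len, bool protected)
{
	int offset = offset_in_page(hva);
	struct page *page;
	int npages, seg;

	if (!protected)
		return __copy_to_user((void __user *)hva, data, len);

	might_fault();
	kasan_check_read(data, len);
	check_object_size(data, len, true);

	while ((seg = next_segment(len, offset)) != 0) {
		/* Pin instead of get: we are about to write into the page */
		npages = pin_user_pages_unlocked(hva, 1, &page, FOLL_WRITE);
		if (npages != 1)
			return -EFAULT;
		memcpy(page_address(page) + offset, data, seg);
		/* Mark the page dirty and unpin it in one go */
		unpin_user_pages_dirty_lock(&page, 1, true);
		len -= seg;
		hva += seg;
		offset = 0;
		data += seg;	/* advance the source buffer too */
	}

	return 0;
}

That keeps the !protected fast path as-is and only moves the protected path
over to FOLL_PIN.
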
--
Kirill A. Shutemov
Thread overview: 57+ messages
2020-10-20 6:18 [RFCv2 00/16] KVM protected memory extension Kirill A. Shutemov
2020-10-20 6:18 ` [RFCv2 01/16] x86/mm: Move force_dma_unencrypted() to common code Kirill A. Shutemov
2020-10-20 6:18 ` [RFCv2 02/16] x86/kvm: Introduce KVM memory protection feature Kirill A. Shutemov
2020-10-20 6:18 ` [RFCv2 03/16] x86/kvm: Make DMA pages shared Kirill A. Shutemov
2020-10-20 6:18 ` [RFCv2 04/16] x86/kvm: Use bounce buffers for KVM memory protection Kirill A. Shutemov
2020-10-20 6:18 ` [RFCv2 05/16] x86/kvm: Make VirtIO use DMA API in KVM guest Kirill A. Shutemov
2020-10-20 8:06 ` Christoph Hellwig
2020-10-20 12:47 ` Kirill A. Shutemov
2020-10-22 3:31 ` Halil Pasic
2020-10-20 6:18 ` [RFCv2 06/16] x86/kvmclock: Share hvclock memory with the host Kirill A. Shutemov
2020-10-20 6:18 ` [RFCv2 07/16] x86/realmode: Share trampoline area if KVM memory protection enabled Kirill A. Shutemov
2020-10-20 6:18 ` [RFCv2 08/16] KVM: Use GUP instead of copy_from/to_user() to access guest memory Kirill A. Shutemov
2020-10-20 8:25 ` John Hubbard
2020-10-20 12:51 ` Kirill A. Shutemov [this message]
2020-10-22 11:49 ` Matthew Wilcox
2020-10-22 19:58 ` John Hubbard
2020-10-26 4:21 ` Matthew Wilcox
2020-10-26 4:44 ` John Hubbard
2020-10-26 13:28 ` Matthew Wilcox
2020-10-26 14:16 ` Jason Gunthorpe
2020-10-26 20:52 ` John Hubbard
2020-10-20 17:29 ` Ira Weiny
2020-10-22 11:37 ` Kirill A. Shutemov
2020-10-20 6:18 ` [RFCv2 09/16] KVM: mm: Introduce VM_KVM_PROTECTED Kirill A. Shutemov
2020-10-21 18:47 ` Edgecombe, Rick P
2020-10-22 12:01 ` Kirill A. Shutemov
2020-10-20 6:18 ` [RFCv2 10/16] KVM: x86: Use GUP for page walk instead of __get_user() Kirill A. Shutemov
2020-10-20 6:18 ` [RFCv2 11/16] KVM: Protected memory extension Kirill A. Shutemov
2020-10-20 7:17 ` Peter Zijlstra
2020-10-20 12:55 ` Kirill A. Shutemov
2020-10-20 6:18 ` [RFCv2 12/16] KVM: x86: Enabled protected " Kirill A. Shutemov
2020-10-20 6:18 ` [RFCv2 13/16] KVM: Rework copy_to/from_guest() to avoid direct mapping Kirill A. Shutemov
2020-10-20 6:18 ` [RFCv2 14/16] KVM: Handle protected memory in __kvm_map_gfn()/__kvm_unmap_gfn() Kirill A. Shutemov
2020-10-21 18:50 ` Edgecombe, Rick P
2020-10-22 12:06 ` Kirill A. Shutemov
2020-10-22 16:59 ` Edgecombe, Rick P
2020-10-23 10:36 ` Kirill A. Shutemov
2020-10-22 3:26 ` Halil Pasic
2020-10-22 12:07 ` Kirill A. Shutemov
2020-10-20 6:18 ` [RFCv2 15/16] KVM: Unmap protected pages from direct mapping Kirill A. Shutemov
2020-10-20 7:12 ` Peter Zijlstra
2020-10-20 12:18 ` David Hildenbrand
2020-10-20 13:20 ` David Hildenbrand
2020-10-21 1:20 ` Edgecombe, Rick P
2020-10-26 19:55 ` Tom Lendacky
2020-10-21 18:49 ` Edgecombe, Rick P
2020-10-23 12:37 ` Mike Rapoport
2020-10-23 16:32 ` Sean Christopherson
2020-10-20 6:18 ` [RFCv2 16/16] mm: Do not use zero page for VM_KVM_PROTECTED VMAs Kirill A. Shutemov
2020-10-20 7:46 ` [RFCv2 00/16] KVM protected memory extension Vitaly Kuznetsov
2020-10-20 13:49 ` Kirill A. Shutemov
2020-10-21 14:46 ` Vitaly Kuznetsov
2020-10-23 11:35 ` Kirill A. Shutemov
2020-10-23 12:01 ` Vitaly Kuznetsov
2020-10-21 18:20 ` Andy Lutomirski
2020-10-26 15:29 ` Kirill A. Shutemov
2020-10-26 23:58 ` Andy Lutomirski