From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tamas K Lengyel
Subject: Re: [PATCH V5 10/12] xen/vm_event: Relocate memop checks
Date: Sat, 14 Feb 2015 00:20:12 +0100
Message-ID: 
References: <1423845203-18941-1-git-send-email-tamas.lengyel@zentific.com>
 <1423845203-18941-11-git-send-email-tamas.lengyel@zentific.com>
 <54DE6B6C.70401@citrix.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Return-path: 
In-Reply-To: <54DE6B6C.70401@citrix.com>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Andrew Cooper
Cc: "Tian, Kevin" , "wei.liu2@citrix.com" , Ian Campbell ,
 Stefano Stabellini , Tim Deegan , Steven Maresca ,
 "xen-devel@lists.xen.org" , Jan Beulich , "Dong, Eddie" ,
 Andres Lagar-Cavilla , Jun Nakajima , "rshriram@cs.ubc.ca" ,
 Keir Fraser , Daniel De Graaf , "yanghy@cn.fujitsu.com" , Ian Jackson
List-Id: xen-devel@lists.xenproject.org

On Fri, Feb 13, 2015 at 10:23 PM, Andrew Cooper wrote:
> On 13/02/15 16:33, Tamas K Lengyel wrote:
>> The memop handler function for paging/sharing responsible for calling XSM
>> doesn't really have anything to do with vm_event, thus in this patch we
>> relocate it into mem_paging_memop and mem_sharing_memop. This has already
>> been the approach in mem_access_memop, so in this patch we just make it
>> consistent.
>>
>> Signed-off-by: Tamas K Lengyel
>> ---
>>  xen/arch/x86/mm/mem_paging.c      | 36 ++++++++++++++-----
>>  xen/arch/x86/mm/mem_sharing.c     | 76 ++++++++++++++++++++++++++-------------
>>  xen/arch/x86/x86_64/compat/mm.c   | 28 +++------------
>>  xen/arch/x86/x86_64/mm.c          | 26 +++-----------
>>  xen/common/vm_event.c             | 43 ----------------------
>>  xen/include/asm-x86/mem_paging.h  |  7 +++-
>>  xen/include/asm-x86/mem_sharing.h |  4 +--
>>  xen/include/xen/vm_event.h        |  1 -
>>  8 files changed, 97 insertions(+), 124 deletions(-)
>>
>> diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c
>> index e63d8c1..4aee6b7 100644
>> --- a/xen/arch/x86/mm/mem_paging.c
>> +++ b/xen/arch/x86/mm/mem_paging.c
>> @@ -21,28 +21,45 @@
>>   */
>>
>>
>> +#include
>>  #include
>> -#include
>> +#include
>
> Order of includes.
>
>>
>> -
>> -int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpo)
>> +int mem_paging_memop(unsigned long cmd,
>
> I don't believe cmd is a useful parameter to pass.  You know that its
> value is XENMEM_paging_op by virtue of being in this function.
>
>> +                     XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg)
>> {
>> -    int rc = -ENODEV;
>> +    int rc;
>> +    xen_mem_paging_op_t mpo;
>> +    struct domain *d;
>> +
>> +    rc = -EFAULT;
>> +    if ( copy_from_guest(&mpo, arg, 1) )
>> +        return rc;
>> +
>> +    rc = rcu_lock_live_remote_domain_by_id(mpo.domain, &d);
>> +    if ( rc )
>> +        return rc;
>> +
>> +    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_paging_op);
>> +    if ( rc )
>> +        return rc;
>> +
>> +    rc = -ENODEV;
>>      if ( unlikely(!d->vm_event->paging.ring_page) )
>>          return rc;
>>
>> -    switch( mpo->op )
>> +    switch( mpo.op )
>>      {
>>      case XENMEM_paging_op_nominate:
>> -        rc = p2m_mem_paging_nominate(d, mpo->gfn);
>> +        rc = p2m_mem_paging_nominate(d, mpo.gfn);
>>          break;
>>
>>      case XENMEM_paging_op_evict:
>> -        rc = p2m_mem_paging_evict(d, mpo->gfn);
>> +        rc = p2m_mem_paging_evict(d, mpo.gfn);
>>          break;
>>
>>      case XENMEM_paging_op_prep:
>> -        rc = p2m_mem_paging_prep(d, mpo->gfn, mpo->buffer);
>> +        rc = p2m_mem_paging_prep(d, mpo.gfn, mpo.buffer);
>>          break;
>>
>>      default:
>> @@ -50,6 +67,9 @@ int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpo)
>>          break;
>>      }
>>
>> +    if ( !rc && __copy_to_guest(arg, &mpo, 1) )
>> +        rc = -EFAULT;
>
> Do any of the paging ops need to be copied back?  Nothing appears to
> write into mpo in this function.  (The original code looks to be overly
> pessimistic).

I'm not sure if any of these actually copy back - I just tried to keep as
much of the existing flow intact as possible while sanitizing the vm_event
side of things. That is not to say mem_sharing and mem_paging couldn't use
more scrutiny and optimizations, it's just a bit out of the scope of what
this series is about.
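The copy-back question above boils down to: only copy the struct out to the guest when a subop actually wrote into the local copy. A self-contained model of that shape is sketched below; all names here (demo_op_t, do_demo_memop, the DEMO_op_* constants) are hypothetical stand-ins, and plain memcpy() stands in for Xen's copy_from_guest()/__copy_to_guest() helpers:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical stand-in for xen_mem_paging_op_t. */
typedef struct {
    unsigned int op;
    unsigned long gfn;
    unsigned long result;   /* only some subops fill this in */
} demo_op_t;

enum { DEMO_op_nominate, DEMO_op_query };

static int do_demo_memop(demo_op_t *guest_arg)
{
    demo_op_t mpo;
    bool copyback = false;
    int rc = 0;

    /* stands in for copy_from_guest(&mpo, arg, 1) */
    memcpy(&mpo, guest_arg, sizeof(mpo));

    switch ( mpo.op )
    {
    case DEMO_op_nominate:          /* writes nothing into mpo */
        break;
    case DEMO_op_query:
        mpo.result = mpo.gfn + 1;   /* writes into mpo ... */
        copyback = true;            /* ... so request the copy out */
        break;
    default:
        rc = -22;                   /* -EINVAL */
        break;
    }

    /* stands in for __copy_to_guest(arg, &mpo, 1), done only if needed */
    if ( !rc && copyback )
        memcpy(guest_arg, &mpo, sizeof(mpo));

    return rc;
}
```

With this shape, subops like nominate/evict that never touch the struct skip the copy out entirely, instead of paying for it unconditionally.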
>
>> +
>>      return rc;
>> }
>>
>> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
>> index 4959407..612ed89 100644
>> --- a/xen/arch/x86/mm/mem_sharing.c
>> +++ b/xen/arch/x86/mm/mem_sharing.c
>> @@ -28,6 +28,7 @@
>>  #include
>>  #include
>>  #include
>> +#include
>>  #include
>>  #include
>>  #include
>> @@ -1293,30 +1294,52 @@ int relinquish_shared_pages(struct domain *d)
>>      return rc;
>> }
>>
>> -int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
>> +int mem_sharing_memop(unsigned long cmd,
>> +                      XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
>> {
>> -    int rc = 0;
>> +    int rc;
>> +    xen_mem_sharing_op_t mso;
>> +    struct domain *d;
>> +
>> +    rc = -EFAULT;
>> +    if ( copy_from_guest(&mso, arg, 1) )
>> +        return rc;
>> +
>> +    if ( mso.op == XENMEM_sharing_op_audit )
>> +        return mem_sharing_audit();
>> +
>> +    rc = rcu_lock_live_remote_domain_by_id(mso.domain, &d);
>> +    if ( rc )
>> +        return rc;
>>
>>      /* Only HAP is supported */
>>      if ( !hap_enabled(d) || !d->arch.hvm_domain.mem_sharing_enabled )
>>          return -ENODEV;
>>
>> -    switch(mec->op)
>> +    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_sharing_op);
>> +    if ( rc )
>> +        return rc;
>> +
>> +    rc = -ENODEV;
>> +    if ( unlikely(!d->vm_event->share.ring_page) )
>> +        return rc;
>> +
>> +    switch(mso.op)
>
> Style ( spaces )

Ack.

>
>> @@ -1465,6 +1488,9 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
>>          break;
>>      }
>>
>> +    if ( !rc && __copy_to_guest(arg, &mso, 1) )
>> +        return -EFAULT;
>
> Only certain subops need to copy information back.  It is common to have
> a function-level bool_t copyback which relevant subops sets.

If it's not a critical fix I would rather just keep it as it was in this
series. It could be part of another cleanup series for mem_paging and
mem_sharing.

Thanks,
Tamas

>
> ~Andrew
>
>> +
>>      return rc;
>> }
>
>
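For reference, the order of checks that the relocated handlers in this patch perform before dispatching (take the live domain reference, apply the XSM policy check, verify the vm_event ring exists) can be modeled in miniature as below. The demo_domain struct and demo_memop() are hypothetical stubs for illustration, not Xen's real interfaces:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the state the real checks consult. */
typedef struct {
    bool alive;        /* rcu_lock_live_remote_domain_by_id() succeeds */
    bool xsm_allows;   /* xsm_vm_event_op(XSM_DM_PRIV, ...) verdict */
    bool ring_page;    /* d->vm_event->paging.ring_page is mapped */
    int  dispatched;   /* counts subops that reached the switch */
} demo_domain;

static int demo_memop(demo_domain *d)
{
    if ( !d->alive )
        return -3;     /* -ESRCH: domain gone */
    if ( !d->xsm_allows )
        return -1;     /* -EPERM: XSM denies the op */
    if ( !d->ring_page )
        return -19;    /* -ENODEV: no vm_event ring set up */
    d->dispatched++;   /* only now fall through to the subop switch */
    return 0;
}
```

The point of the relocation is that this whole ladder now lives next to the subop switch in mem_paging_memop()/mem_sharing_memop() rather than in common vm_event code.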