From: Konrad Rzeszutek Wilk
Subject: Re: [PATCH 02/18] PVH xen: add XENMEM_add_to_physmap_range
Date: Fri, 7 Jun 2013 08:08:04 -0700 (PDT)
Message-ID: <20130607150804.GA27011@phenom.dumpdata.com>
References: <1369445137-19755-1-git-send-email-mukesh.rathor@oracle.com> <1369445137-19755-3-git-send-email-mukesh.rathor@oracle.com> <51A8897602000078000D9F13@nat28.tlf.novell.com> <20130604173143.7ef56db1@mantra.us.oracle.com> <51AF05C402000078000DB564@nat28.tlf.novell.com> <20130605134146.2bb6b342@mantra.us.oracle.com> <51B04B9D02000078000DBC2A@nat28.tlf.novell.com>
In-Reply-To: <51B04B9D02000078000DBC2A@nat28.tlf.novell.com>
To: Jan Beulich
Cc: xen-devel
List-Id: xen-devel@lists.xenproject.org

On Thu, Jun 06, 2013 at 07:43:09AM +0100, Jan Beulich wrote:
> >>> On 05.06.13 at 22:41, Mukesh Rathor wrote:
> > On Wed, 05 Jun 2013 08:32:52 +0100 "Jan Beulich" wrote:
> >> >>> On 05.06.13 at 02:31, Mukesh Rathor wrote:
> >> >> I also vaguely recall having pointed out in a much earlier review
> >> >> that this functionality is lacking a counterpart in
> >> >> compat_arch_memory_op().
> >> >
> >> > Hmm.. confused how/why a 64bit PVH go thru compat_arch_memory_op()?
> >> > Can you pl explain?
> >>
> >> Iirc the new hypercall isn't restricted to PVH guests, and hence
> >> needs a compat implementation regardless of 32-bit PVH not
> >> existing yet.
> >
> > This patch does not introduce the hcall, it was introduced much earlier.
> > It implements the portion needed for 64bit PVH.
>
> So how is this statement of yours in line with this hunk of the
> patch?
>
> @@ -4716,6 +4751,39 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
>          return rc;
>      }
>
> +    case XENMEM_add_to_physmap_range:
> +    {
> +        struct xen_add_to_physmap_range xatpr;
> +        struct domain *d;
> +
> +        if ( copy_from_guest(&xatpr, arg, 1) )
> +            return -EFAULT;
> +
> +        /* This mapspace is redundant for this hypercall */
> +        if ( xatpr.space == XENMAPSPACE_gmfn_range )
> +            return -EINVAL;
> +
> +        d = rcu_lock_domain_by_any_id(xatpr.domid);
> +        if ( d == NULL )
> +            return -ESRCH;
> +
> +        if ( xsm_add_to_physmap(XSM_TARGET, current->domain, d) )
> +        {
> +            rcu_unlock_domain(d);
> +            return -EPERM;
> +        }
> +
> +        rc = xenmem_add_to_physmap_range(d, &xatpr);
> +
> +        rcu_unlock_domain(d);
> +
> +        if ( rc == -EAGAIN )
> +            rc = hypercall_create_continuation(
> +                __HYPERVISOR_memory_op, "ih", op, arg);
> +
> +        return rc;
> +    }
> +
>      case XENMEM_set_memory_map:
>      {
>          struct xen_foreign_memory_map fmap;
>
> If the hypercall were handled before, adding a new case statement
> to arch_memory_op() would cause a compilation error. All that was

Meaning if it was handled in do_memory_op, right?

> there before this patch was the definition of the hypercall (for ARM),
> but I'm quite strongly opposed to adding x86 support for this
> hypercall only for one half of the possible set of PV (and HVM?)
> guests; the fact that PVH is 64-bit only for the moment has nothing
> to do with this. The only alternative would be to constrain the
> specific sub-hypercall to PVH, but that would be rather contrived,
> so I'm already trying to make clear that I wouldn't accept such a
> solution to the original comment.
Pardon my misunderstanding - what you are saying is that the location of
this hypercall in arch_memory_op() is correct, but only if it works with
32-bit guests as well? That would imply at least doing something in
compat_arch_memory_op() (to copy the arguments properly), and having
arch_memory_op() return -ENOSYS for 32-bit guests for right now. Later
the -ENOSYS check would be removed so that 32-bit guests can use this
hypercall too. Or perhaps, later on, it makes sense to squash the x86
and ARM versions of this hypercall into common code?
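
Something along these lines is what I am picturing - just a rough
sketch, assuming the compat structure and the XLAT_/HNDL helpers get
generated the usual way via xen/include/xlat.lst, so the names below
are my guess and not tested code:

    /* compat_arch_memory_op(): translate the 32-bit argument layout,
     * then forward to the native handler. */
    case XENMEM_add_to_physmap_range:
    {
        struct compat_add_to_physmap_range cmp;
        struct xen_add_to_physmap_range *nat = COMPAT_ARG_XLAT_VIRT_BASE;

        if ( copy_from_guest(&cmp, arg, 1) )
            return -EFAULT;

        /* The guest handles embedded in the structure must be widened
         * from their 32-bit representation; any further handle fields
         * would need the same treatment. */
    #define XLAT_add_to_physmap_range_HNDL_idxs(_d_, _s_) \
        guest_from_compat_handle((_d_)->idxs, (_s_)->idxs)
    #define XLAT_add_to_physmap_range_HNDL_gpfns(_d_, _s_) \
        guest_from_compat_handle((_d_)->gpfns, (_s_)->gpfns)

        XLAT_add_to_physmap_range(nat, &cmp);

    #undef XLAT_add_to_physmap_range_HNDL_gpfns
    #undef XLAT_add_to_physmap_range_HNDL_idxs

        rc = arch_memory_op(op, guest_handle_from_ptr(nat, void));
        break;
    }

And until that exists, the native case could bail out early:

    /* In arch_memory_op(), XENMEM_add_to_physmap_range case: no
     * compat translation yet, so refuse 32-bit callers for now. */
    if ( is_pv_32on64_domain(current->domain) )
        return -ENOSYS;

The -EAGAIN continuation would presumably also have to be created
against the compat argument layout rather than the translated native
one, which the sketch above glosses over.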