From: "Jan Beulich"
Subject: Re: [PATCH 02/18] PVH xen: add XENMEM_add_to_physmap_range
Date: Thu, 06 Jun 2013 07:43:09 +0100
Message-ID: <51B04B9D02000078000DBC2A@nat28.tlf.novell.com>
In-Reply-To: <20130605134146.2bb6b342@mantra.us.oracle.com>
References: <1369445137-19755-1-git-send-email-mukesh.rathor@oracle.com>
 <1369445137-19755-3-git-send-email-mukesh.rathor@oracle.com>
 <51A8897602000078000D9F13@nat28.tlf.novell.com>
 <20130604173143.7ef56db1@mantra.us.oracle.com>
 <51AF05C402000078000DB564@nat28.tlf.novell.com>
 <20130605134146.2bb6b342@mantra.us.oracle.com>
To: Mukesh Rathor
Cc: xen-devel
List-Id: xen-devel@lists.xenproject.org

>>> On 05.06.13 at 22:41, Mukesh Rathor wrote:
> On Wed, 05 Jun 2013 08:32:52 +0100
> "Jan Beulich" wrote:
>
>> >>> On 05.06.13 at 02:31, Mukesh Rathor
>> >>> wrote:
>> >> I also vaguely recall having pointed out in a much earlier review
>> >> that this functionality is lacking a counterpart in
>> >> compat_arch_memory_op().
>> >
>> > Hmm.. confused how/why a 64-bit PVH guest would go through
>> > compat_arch_memory_op()? Can you please explain?
>>
>> Iirc the new hypercall isn't restricted to PVH guests, and hence
>> needs a compat implementation regardless of 32-bit PVH not
>> existing yet.
>
> This patch does not introduce the hypercall; that was done much
> earlier. It implements the portion needed for 64-bit PVH.

So how is this statement of yours in line with this hunk of the patch?

@@ -4716,6 +4751,39 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
         return rc;
     }
 
+    case XENMEM_add_to_physmap_range:
+    {
+        struct xen_add_to_physmap_range xatpr;
+        struct domain *d;
+
+        if ( copy_from_guest(&xatpr, arg, 1) )
+            return -EFAULT;
+
+        /* This mapspace is redundant for this hypercall */
+        if ( xatpr.space == XENMAPSPACE_gmfn_range )
+            return -EINVAL;
+
+        d = rcu_lock_domain_by_any_id(xatpr.domid);
+        if ( d == NULL )
+            return -ESRCH;
+
+        if ( xsm_add_to_physmap(XSM_TARGET, current->domain, d) )
+        {
+            rcu_unlock_domain(d);
+            return -EPERM;
+        }
+
+        rc = xenmem_add_to_physmap_range(d, &xatpr);
+
+        rcu_unlock_domain(d);
+
+        if ( rc == -EAGAIN )
+            rc = hypercall_create_continuation(
+                __HYPERVISOR_memory_op, "ih", op, arg);
+
+        return rc;
+    }
+
     case XENMEM_set_memory_map:
     {
         struct xen_foreign_memory_map fmap;

If the hypercall had already been handled here, adding this new case
statement to arch_memory_op() would have caused a compilation error (a
duplicate case label). All that existed before this patch was the
definition of the hypercall (for ARM). I'm quite strongly opposed to
adding x86 support for this hypercall for only one half of the possible
set of PV (and HVM?) guests; the fact that PVH is 64-bit only for the
moment has nothing to do with this. The only alternative would be to
constrain this specific sub-hypercall to PVH, but that would be rather
contrived, so I'm making clear up front that I wouldn't accept such a
solution to the original comment.

Jan
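
[Illustration appended for context: the core of the objection is that a
32-bit guest lays out struct xen_add_to_physmap_range differently from
the native 64-bit view that the hunk above copies in, because
XEN_GUEST_HANDLE() fields follow the guest's pointer width. The
self-contained sketch below makes that concrete with mock structures;
the field layout only approximates the Xen public header, and the plain
integer "handles" are stand-ins for the real handle types, not Xen
code.]

    /*
     * Illustrative sketch only -- not Xen source.  Mock versions of the
     * native (64-bit hypervisor) and compat (32-bit guest) views of
     * struct xen_add_to_physmap_range.  The uintNN_t "handle" fields
     * stand in for XEN_GUEST_HANDLE() members, whose width follows the
     * guest's pointer size; the layout is an approximation of the
     * public header, shown to make the translation problem concrete.
     */
    #include <stdint.h>
    #include <stdio.h>

    typedef uint16_t domid_t;

    /* Native view: guest handles are 8 bytes wide. */
    struct native_add_to_physmap_range {
        domid_t  domid;
        uint16_t space;
        uint16_t size;
        domid_t  foreign_domid;
        uint64_t idxs;   /* stands in for XEN_GUEST_HANDLE(xen_ulong_t) */
        uint64_t gpfns;  /* stands in for XEN_GUEST_HANDLE(xen_pfn_t) */
    };

    /* Compat view: the same structure as a 32-bit guest lays it out. */
    struct compat_add_to_physmap_range {
        domid_t  domid;
        uint16_t space;
        uint16_t size;
        domid_t  foreign_domid;
        uint32_t idxs;
        uint32_t gpfns;
    };

    int main(void)
    {
        /*
         * The sizes (and the offsets of idxs/gpfns) differ, so reading
         * the native 64-bit layout straight out of a 32-bit guest's
         * buffer would misinterpret the handle fields -- hence the need
         * for an explicit translation path in compat_arch_memory_op().
         */
        printf("native: %zu bytes, compat: %zu bytes\n",
               sizeof(struct native_add_to_physmap_range),
               sizeof(struct compat_add_to_physmap_range));
        return 0;
    }

On a typical LP64 build this prints "native: 24 bytes, compat: 16
bytes", which is exactly the mismatch a compat counterpart would have
to translate, element by element, before handing the request to the
native implementation.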