From: David Vrabel
Subject: [PATCHv1 2/3] mm: don't free pages until mm locks are released
Date: Fri, 6 Nov 2015 17:37:16 +0000
Message-ID: <1446831437-5897-3-git-send-email-david.vrabel@citrix.com>
In-Reply-To: <1446831437-5897-1-git-send-email-david.vrabel@citrix.com>
References: <1446831437-5897-1-git-send-email-david.vrabel@citrix.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, Kevin Tian, Jan Beulich, David Vrabel, Jun Nakajima
List-Id: xen-devel@lists.xenproject.org

If a page is freed without translations being invalidated, and the
page is subsequently allocated to another domain, a guest with a
cached translation will still be able to access the page.

Currently, translations are invalidated before releasing the page
ref, but while still holding the mm locks.  To allow translations to
be invalidated without holding the mm locks, we need to keep a
reference to the page for a bit longer in some cases.

[ This seems a) difficult to verify as correct; and b) difficult to
  keep correct in the future.  A better suggestion would be welcome.
  Perhaps something like the pg->tlbflush_needed mechanism that
  already exists for pages from PV guests? ]

Signed-off-by: David Vrabel
---
 xen/arch/x86/mm/p2m.c | 4 ++++
 xen/common/memory.c   | 2 +-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index ed0bbd7..e13672d 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2758,6 +2758,7 @@ int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
     p2m_type_t p2mt, p2mt_prev;
     unsigned long prev_mfn, mfn;
     struct page_info *page;
+    struct page_info *prev_page = NULL;
     int rc;
     struct domain *fdom;
 
@@ -2805,6 +2806,9 @@ int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
     prev_mfn = mfn_x(get_gfn(tdom, gpfn, &p2mt_prev));
     if ( mfn_valid(_mfn(prev_mfn)) )
     {
+        prev_page = mfn_to_page(_mfn(prev_mfn));
+        get_page(prev_page, tdom);
+
         if ( is_xen_heap_mfn(prev_mfn) )
             /* Xen heap frames are simply unhooked from this phys slot */
             guest_physmap_remove_page(tdom, gpfn, prev_mfn, 0);
diff --git a/xen/common/memory.c b/xen/common/memory.c
index a3bffb7..571c754 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -272,8 +272,8 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
 
     guest_physmap_remove_page(d, gmfn, mfn, 0);
 
-    put_page(page);
     put_gfn(d, gmfn);
+    put_page(page);
 
     return 1;
 }
-- 
2.1.4