From: "Jan Beulich" <JBeulich@suse.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Paul Durrant <paul.durrant@citrix.com>, Tim Deegan <tim@xen.org>
Subject: [PATCH 6/6] x86: use paging_mark_pfn_dirty()
Date: Tue, 12 Dec 2017 08:09:13 -0700
Message-ID: <5A2FFF290200007800196E34@prv-mh.provo.novell.com>
In-Reply-To: <5A2FFB8D0200007800196DDF@prv-mh.provo.novell.com>

... in preference to paging_mark_dirty(), when the PFN is known
anyway.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
This has a contextual dependency on
https://lists.xenproject.org/archives/html/xen-devel/2017-12/msg00151.html 
which is ready to go in, just waiting for the tree to fully re-open.
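
For reviewers who don't have the prerequisite patch in front of them, here
is a rough sketch of the relationship between the two functions. This is
not lifted from the tree: pfn_t, mfn_t, _pfn(), mfn_x(), get_gpfn_from_mfn()
and struct domain are assumed to be Xen's existing types/helpers, and all
validity checks plus the actual dirty-marking logic are elided.

/*
 * Rough sketch only -- it approximates the split introduced by the
 * prerequisite patch; checks (log-dirty enabled, MFN validity,
 * ownership) and the marking itself are omitted.
 */
void paging_mark_pfn_dirty(struct domain *d, pfn_t pfn);

void paging_mark_dirty(struct domain *d, mfn_t gmfn)
{
    /* The MFN-based wrapper must translate back through the M2P ... */
    pfn_t pfn = _pfn(get_gpfn_from_mfn(mfn_x(gmfn)));

    /* ... before it can defer to the PFN-based primitive. */
    paging_mark_pfn_dirty(d, pfn);
}

Every call site converted below already has the guest PFN in hand (or can
derive it trivially), so calling paging_mark_pfn_dirty() directly avoids
the M2P lookup the MFN-based wrapper would otherwise redo, and in the
hvm/dm.c and ioreq.c cases also a redundant page_to_mfn().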

--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -219,14 +219,12 @@ static int modified_memory(struct domain
             page = get_page_from_gfn(d, pfn, NULL, P2M_UNSHARE);
             if ( page )
             {
-                mfn_t gmfn = _mfn(page_to_mfn(page));
-
-                paging_mark_dirty(d, gmfn);
+                paging_mark_pfn_dirty(d, _pfn(pfn));
                 /*
                  * These are most probably not page tables any more
                  * don't take a long time and don't die either.
                  */
-                sh_remove_shadows(d, gmfn, 1, 0);
+                sh_remove_shadows(d, _mfn(page_to_mfn(page)), 1, 0);
                 put_page(page);
             }
         }
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1848,7 +1848,7 @@ int hvm_hap_nested_page_fault(paddr_t gp
          */
         if ( npfec.write_access )
         {
-            paging_mark_dirty(currd, mfn);
+            paging_mark_pfn_dirty(currd, _pfn(gfn));
             /*
              * If p2m is really an altp2m, unlock here to avoid lock ordering
              * violation when the change below is propagated from host p2m.
@@ -2553,7 +2553,7 @@ static void *_hvm_map_guest_frame(unsign
         if ( unlikely(p2m_is_discard_write(p2mt)) )
             *writable = 0;
         else if ( !permanent )
-            paging_mark_dirty(d, _mfn(page_to_mfn(page)));
+            paging_mark_pfn_dirty(d, _pfn(gfn));
     }
 
     if ( !permanent )
@@ -3216,7 +3216,7 @@ static enum hvm_translation_result __hvm
                     memcpy(p, buf, count);
                 else
                     memset(p, 0, count);
-                paging_mark_dirty(v->domain, _mfn(page_to_mfn(page)));
+                paging_mark_pfn_dirty(v->domain, _pfn(gfn_x(gfn)));
             }
         }
         else
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -283,7 +283,7 @@ static int hvm_add_ioreq_gfn(
     rc = guest_physmap_add_page(d, _gfn(iorp->gfn),
                                 _mfn(page_to_mfn(iorp->page)), 0);
     if ( rc == 0 )
-        paging_mark_dirty(d, _mfn(page_to_mfn(iorp->page)));
+        paging_mark_pfn_dirty(d, _pfn(iorp->gfn));
 
     return rc;
 }
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1215,7 +1215,7 @@ p2m_pod_demand_populate(struct p2m_domai
     for( i = 0; i < (1UL << order); i++ )
     {
         set_gpfn_from_mfn(mfn_x(mfn) + i, gfn_x(gfn_aligned) + i);
-        paging_mark_dirty(d, mfn_add(mfn, i));
+        paging_mark_pfn_dirty(d, _pfn(gfn_x(gfn_aligned) + i));
     }
 
     p2m->pod.entry_count -= (1UL << order);
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3728,8 +3728,7 @@ long do_mmu_update(
             }
 
             set_gpfn_from_mfn(mfn, gpfn);
-
-            paging_mark_dirty(pg_owner, _mfn(mfn));
+            paging_mark_pfn_dirty(pg_owner, _pfn(gpfn));
 
             put_page(page);
             break;

Thread overview: 30+ messages
2017-12-12 14:53 [PATCH 0/6] XSA-248...251 follow-up Jan Beulich
2017-12-12 15:04 ` [PATCH 1/6] x86/shadow: drop further 32-bit relics Jan Beulich
2017-12-20  8:03   ` Tim Deegan
2017-12-12 15:05 ` [PATCH 2/6] x86/shadow: remove pointless loops over all vCPU-s Jan Beulich
2017-12-20  8:06   ` Tim Deegan
2017-12-12 15:06 ` [PATCH 3/6] x86/shadow: ignore sh_pin() failure in one more case Jan Beulich
2017-12-20  8:08   ` Tim Deegan
2017-12-12 15:07 ` [PATCH 4/6] x86/shadow: widen reference count Jan Beulich
2017-12-12 16:32   ` George Dunlap
2017-12-13  9:17     ` Jan Beulich
2017-12-13 10:32       ` George Dunlap
2017-12-13 14:20         ` Jan Beulich
2017-12-20  8:08   ` Tim Deegan
2017-12-12 15:08 ` [PATCH 5/6] x86/mm: clean up SHARED_M2P{,_ENTRY} uses Jan Beulich
2017-12-12 17:50   ` [PATCH 5/6] x86/mm: clean up SHARED_M2P{, _ENTRY} uses George Dunlap
2017-12-13  9:30     ` Jan Beulich
2017-12-18 16:56     ` Jan Beulich
2017-12-20  8:09   ` Tim Deegan
2017-12-12 15:09 ` Jan Beulich [this message]
2017-12-20  8:10   ` [PATCH 6/6] x86: use paging_mark_pfn_dirty() Tim Deegan
2017-12-20  9:37 ` [PATCH v2 0/3] XSA-248...251 follow-up Jan Beulich
2017-12-20  9:40   ` [PATCH v2 1/3] x86/shadow: widen reference count Jan Beulich
2017-12-20  9:41   ` [PATCH v2 2/3] x86/mm: clean up SHARED_M2P{, _ENTRY} uses Jan Beulich
2018-02-08 12:31     ` George Dunlap
2017-12-20  9:42   ` [PATCH v2 3/3] x86: use paging_mark_pfn_dirty() Jan Beulich
2017-12-20  9:44     ` Paul Durrant
2018-02-08 12:32     ` George Dunlap
2018-02-07 15:27 ` Ping: [PATCH v2 0/3] XSA-248...251 follow-up Jan Beulich
2018-02-08 12:34   ` George Dunlap
2018-02-13  7:44     ` Jan Beulich
