xen-devel.lists.xenproject.org archive mirror
From: Tamas K Lengyel <tamas@tklengyel.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wei.liu2@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v5 2/4] x86/mem_sharing: copy a page_lock version to be internal to memshr
Date: Fri, 17 May 2019 14:04:28 -0600
Message-ID: <CABfawhnu91Qjy+DHcoBC4zG5rF8LurCZ1=kMXT2aHg0qg8f7vQ@mail.gmail.com>
In-Reply-To: <5CDE610D020000780022FF42@prv1-mh.provo.novell.com>

On Fri, May 17, 2019 at 1:21 AM Jan Beulich <JBeulich@suse.com> wrote:
>
> >>> On 16.05.19 at 23:37, <tamas@tklengyel.com> wrote:
> > --- a/xen/include/asm-x86/mm.h
> > +++ b/xen/include/asm-x86/mm.h
> > @@ -356,24 +356,15 @@ struct platform_bad_page {
> >  const struct platform_bad_page *get_platform_badpages(unsigned int *array_size);
> >
> >  /* Per page locks:
> > - * page_lock() is used for two purposes: pte serialization, and memory sharing.
> > + * page_lock() is used for pte serialization.
> >   *
> >   * All users of page lock for pte serialization live in mm.c, use it
> >   * to lock a page table page during pte updates, do not take other locks within
> >   * the critical section delimited by page_lock/unlock, and perform no
> >   * nesting.
> >   *
> > - * All users of page lock for memory sharing live in mm/mem_sharing.c. Page_lock
> > - * is used in memory sharing to protect addition (share) and removal (unshare)
> > - * of (gfn,domain) tupples to a list of gfn's that the shared page is currently
> > - * backing. Nesting may happen when sharing (and locking) two pages -- deadlock
> > - * is avoided by locking pages in increasing order.
> > - * All memory sharing code paths take the p2m lock of the affected gfn before
> > - * taking the lock for the underlying page. We enforce ordering between page_lock
> > - * and p2m_lock using an mm-locks.h construct.
> > - *
> > - * These two users (pte serialization and memory sharing) do not collide, since
> > - * sharing is only supported for hvm guests, which do not perform pv pte updates.
> > + * The use of PGT_locked in mem_sharing does not collide, since mem_sharing is
> > + * only supported for hvm guests, which do not perform pv pte updates.
>
> Hmm, I thought we had agreed on you also correcting the wording of
> the sentence you now retain (as requested). As said before, an HVM
> (PVH to be precise) Dom0 can very well perform PV PTE updates, just
> not on itself. I had suggested the wording "which do not have PV PTEs
> updated" - I'd be fine for this to be folded in while committing, to avoid
> another round trip. With this

Thanks, I do seem to have missed that.

Tamas
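
For context, the deadlock-avoidance rule in the removed comment (lock pages in
increasing order) is the classic lock-ordering discipline. Below is a minimal,
self-contained C sketch of that idiom; the struct page, page_lock() and
lock_two_pages() here are hypothetical stand-ins for illustration, not Xen's
actual struct page_info or mem_sharing internals:

#include <stdbool.h>
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical page descriptor; not Xen's struct page_info. */
struct page {
    uintptr_t mfn;        /* machine frame number, used as the ordering key */
    atomic_flag locked;   /* stand-in for the PGT_locked type-info bit */
};

/* Trylock-style single-page lock, loosely modelled on page_lock(). */
static bool page_lock(struct page *pg)
{
    return !atomic_flag_test_and_set(&pg->locked);
}

static void page_unlock(struct page *pg)
{
    atomic_flag_clear(&pg->locked);
}

/*
 * Lock two pages without risk of deadlock: always acquire in increasing
 * MFN order, so two CPUs locking the same pair can never each hold one
 * page while waiting for the other.
 */
static bool lock_two_pages(struct page *a, struct page *b)
{
    struct page *lo = (a->mfn < b->mfn) ? a : b;
    struct page *hi = (lo == a) ? b : a;

    if ( !page_lock(lo) )
        return false;
    if ( hi != lo && !page_lock(hi) )
    {
        page_unlock(lo);
        return false;
    }
    return true;
}

int main(void)
{
    struct page p1 = { .mfn = 42, .locked = ATOMIC_FLAG_INIT };
    struct page p2 = { .mfn = 7,  .locked = ATOMIC_FLAG_INIT };

    if ( lock_two_pages(&p1, &p2) )
    {
        /* ... critical section covering both pages ... */
        page_unlock(&p1);
        page_unlock(&p2);
    }
    return 0;
}

This matches the patch subject: mem_sharing gets its own internal copy of the
page_lock machinery, leaving the mm.h comment above to document pte
serialization only.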


Thread overview:
2019-05-16 21:37 [PATCH v5 1/4] x86/mem_sharing: reorder when pages are unlocked and released Tamas K Lengyel
2019-05-16 21:37 ` [PATCH v5 2/4] x86/mem_sharing: copy a page_lock version to be internal to memshr Tamas K Lengyel
2019-05-17  7:21   ` Jan Beulich
2019-05-17 20:04     ` Tamas K Lengyel [this message]
2019-06-17 12:21   ` Tamas K Lengyel
2019-05-16 21:37 ` [PATCH v5 3/4] x86/mem_sharing: enable mem_share audit mode only in debug builds Tamas K Lengyel
2019-06-17 12:24   ` Tamas K Lengyel
2019-05-16 21:37 ` [PATCH v5 4/4] x86/mem_sharing: compile mem_sharing subsystem only when kconfig is enabled Tamas K Lengyel
2019-05-17  7:23   ` Jan Beulich
2019-06-03  8:26   ` Jan Beulich
2019-06-03 16:38     ` Tamas K Lengyel
2019-06-03 16:40       ` Julien Grall
2019-06-03 16:55         ` Tamas K Lengyel
2019-06-04  8:41       ` Razvan Cojocaru
2019-06-04 14:36     ` Daniel De Graaf
2019-06-17 12:17   ` Tamas K Lengyel
2019-06-17 12:23 ` [PATCH v5 1/4] x86/mem_sharing: reorder when pages are unlocked and released Tamas K Lengyel
2019-06-17 13:46   ` Jan Beulich
