From: Paul Durrant <Paul.Durrant@citrix.com>
To: George Dunlap <George.Dunlap@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wei.liu2@citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"Tim (Xen.org)" <tim@xen.org>, Jan Beulich <jbeulich@suse.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Ian Jackson <Ian.Jackson@citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [PATCH v2 REPOST 12/12] x86/hvm/ioreq: add a new mappable resource type...
Date: Tue, 29 Aug 2017 14:10:16 +0000	[thread overview]
Message-ID: <b837264a39344fa4a9413b7309f833b9@AMSPEX02CL03.citrite.net> (raw)
In-Reply-To: <CAFLBxZY=VuzL+FQNKxG3sdjATuf_-6oTFNqN60U0AgL2vTD52A@mail.gmail.com>

> -----Original Message-----
> From: George Dunlap
> Sent: 29 August 2017 14:40
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Roger Pau Monne <roger.pau@citrix.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Wei Liu <wei.liu2@citrix.com>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; Tim (Xen.org) <tim@xen.org>; Jan Beulich
> <jbeulich@suse.com>; Ian Jackson <Ian.Jackson@citrix.com>; xen-
> devel@lists.xenproject.org
> Subject: Re: [Xen-devel] [PATCH v2 REPOST 12/12] x86/hvm/ioreq: add a
> new mappable resource type...
> 
> On Fri, Aug 25, 2017 at 10:46 AM, Paul Durrant <Paul.Durrant@citrix.com>
> wrote:
> >> > +    /*
> >> > +     * Allocated IOREQ server pages are assigned to the emulating
> >> > +     * domain, not the target domain. This is because the emulator is
> >> > +     * likely to be destroyed after the target domain has been torn
> >> > +     * down, and we must use MEMF_no_refcount otherwise page
> >> allocation
> >> > +     * could fail if the emulating domain has already reached its
> >> > +     * maximum allocation.
> >> > +     */
> >> > +    iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);
> >>
> >> I don't really like the fact that the page is not accounted to any
> >> domain, but I can see the point in doing it like that (which you
> >> argue for in the comment).
> >>
> >> IIRC there were talks about tightening the accounting of memory
> >> pages, so that ideally everything would be accounted for in the memory
> >> assigned to the domain.
> >>
> >> Just a random thought, but could the toolstack set aside some
> >> memory pages (ie: not map them into the domain p2m), that could then
> >> be used by this? (not asking you to do this here)
> >>
> >> And how many pages are we expecting to use for each domain? I assume
> >> the number will be quite low.
> >>
> >
> > Yes, I agree the use of MEMF_no_refcount is not ideal, and you do
> highlight an issue: I don't think there is currently an upper limit on the
> number of ioreq servers, so an emulating domain could exhaust memory
> using the new scheme. I'll need to introduce a limit to avoid that.
> 
> I'm not terribly happy with allocating out-of-band pages either.  One
> of the advantages of the way things are done now (with the page
> allocated to the guest VM) is that it is more resilient to unexpected
> events:  If the domain dies before the emulator is done, you have a
> "zombie" domain until the process exits.  But once the process exits
> for any reason -- whether crashing or whatever -- the ref is freed and
> the domain can finish dying.
> 
> What happens in this case if the dm process in dom0 is killed /
> segfaults before it can unmap the page?  Will the page be properly
> freed, or will it just leak?

The page is referenced by the ioreq server in the target domain, so it will be freed when the target domain is destroyed.

> 
> I don't immediately see an advantage to doing what you're doing here,
> instead of just calling hvm_alloc_ioreq_gfn().  The only reason you
> give is that the domain is usually destroyed before the emulator
> (meaning a short period of time where you have a 'zombie' domain), but
> I don't see why that's an issue -- it doesn't seem like that's worth
> the can of worms that it opens up.
> 

The advantage is that the page is *never* in the guest P2M, so it cannot be mapped by the guest. The use of guest pages for communication between Xen and an emulator is a well-known attack surface and, IIRC, has already been the subject of at least one XSA. Until we have better infrastructure for accounting hypervisor memory to guests, I think using alloc_domheap_page() with MEMF_no_refcount is the best option.

  Paul

>  -George

