From: "Jan Beulich" <JBeulich@suse.com>
To: Yu Zhang <yu.c.zhang@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Paul Durrant <paul.durrant@citrix.com>,
	Zhiyuan Lv <zhiyuan.lv@intel.com>,
	Jun Nakajima <jun.nakajima@intel.com>
Subject: Re: [PATCH v10 5/6] x86/ioreq server: Asynchronously reset outstanding p2m_ioreq_server entries.
Date: Thu, 06 Apr 2017 02:44:57 -0600
Message-ID: <58E61C29020000780014DC3B@prv-mh.provo.novell.com>
In-Reply-To: <58E5FC0A.4010809@linux.intel.com>

>>> On 06.04.17 at 10:27, <yu.c.zhang@linux.intel.com> wrote:
> On 4/6/2017 3:48 PM, Jan Beulich wrote:
>>>>> On 05.04.17 at 20:04, <yu.c.zhang@linux.intel.com> wrote:
>>> --- a/xen/common/memory.c
>>> +++ b/xen/common/memory.c
>>> @@ -288,6 +288,10 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
>>>            put_gfn(d, gmfn);
>>>            return 1;
>>>        }
>>> +    if ( unlikely(p2mt == p2m_ioreq_server) )
>>> +        p2m_change_type_one(d, gmfn,
>>> +                            p2m_ioreq_server, p2m_ram_rw);
>>> +
>>>    #else
>>>        mfn = gfn_to_mfn(d, _gfn(gmfn));
>>>    #endif
>> To be honest, this looks more like a quick hack than a proper solution
>> at first glance. To me it would seem preferable if the count was
> 
> Yeah, right. :)
> 
>> adjusted at the point the P2M entry is being replaced (i.e. down the
>> call stack from guest_physmap_remove_page()). The closer to the
>> actual changing of the P2M entry, the less likely you are to miss any
>> call paths (as was already explained by George while suggesting to
>> put the accounting into the leaf-most EPT/NPT functions).
> 
> Well, I thought I had explained the reason why I have always been
> hesitant to do the count in atomic_write_ept_entry(). But it seems I
> did not make it clear:

Well, there was no reason to re-explain. I understand your
reasoning, and I understand George's. Hence my request to move
it _closer_ to the leaf function, not necessarily all the way
_into_ it.
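
To illustrate what I mean - just a rough, standalone sketch, not actual
Xen code, and all names below are made up for the example - the idea is
to keep a per-domain count of outstanding p2m_ioreq_server entries and
adjust it at the one point where an entry's type gets replaced, so that
ballooning, decrease-reservation and any other call path are covered
without special-casing guest_remove_page():

#include <assert.h>
#include <stdio.h>

typedef enum {
    p2m_ram_rw,
    p2m_ioreq_server,
    p2m_invalid
} p2m_type_t;

#define MAX_GFN 16

struct model_p2m {
    p2m_type_t type[MAX_GFN];        /* per-gfn type; p2m_ram_rw (0) initially */
    unsigned long ioreq_entry_count; /* outstanding p2m_ioreq_server entries   */
};

/* The single point where an entry's type is replaced.  Every caller
 * (type change, balloon-out, decrease-reservation, ...) goes through
 * here, so the count cannot be missed on any path. */
static void set_entry_type(struct model_p2m *p2m, unsigned int gfn,
                           p2m_type_t new_type)
{
    p2m_type_t old_type = p2m->type[gfn];

    if ( old_type == p2m_ioreq_server && new_type != p2m_ioreq_server )
        p2m->ioreq_entry_count--;
    else if ( old_type != p2m_ioreq_server && new_type == p2m_ioreq_server )
        p2m->ioreq_entry_count++;

    p2m->type[gfn] = new_type;
}

int main(void)
{
    struct model_p2m p2m = { { p2m_ram_rw } };

    set_entry_type(&p2m, 3, p2m_ioreq_server); /* device model claims the page */
    set_entry_type(&p2m, 5, p2m_ioreq_server);
    set_entry_type(&p2m, 3, p2m_invalid);      /* page gets ballooned out      */

    assert(p2m.ioreq_entry_count == 1);
    printf("outstanding ioreq_server entries: %lu\n", p2m.ioreq_entry_count);
    return 0;
}

With the count maintained at that one spot, the removal paths would not
need to know about p2m_ioreq_server at all.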

> But I have to admit I did not think of the extreme scenarios raised by
> George - I had always assumed a p2m_ioreq_server page would not be
> allocated to the balloon driver while it is in use.
> 
> So here is another proposal - we shall not allow a p2m_ioreq_server
> page to be ballooned out. I mean, if some bug in the kernel really
> allocates a p2m_ioreq_server page to the balloon driver, or if the
> driver is a malicious one which does not tell the device model that
> this gfn shall no longer be emulated, the hypervisor shall let the
> ballooning fail for this gfn. After all, if such a situation happens,
> the guest or the device model already has a bug, and these last 2
> patches are to make sure that even if there is a bug in the
> guest/device model, Xen will help do the cleanup rather than tolerate
> the guest's bugs.
> 
> If you think this is reasonable, I have drafted a patch, like this:

Well, that's still the same hack as before (merely extended to
PoD code), isn't it? I'd still prefer getting the accounting right
over failing the page-removal attempt.

Jan



