From: "Jan Beulich" <JBeulich@suse.com>
To: Yu Zhang <yu.c.zhang@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel@lists.xen.org, Paul Durrant <paul.durrant@citrix.com>,
	zhiyuan.lv@intel.com, Jun Nakajima <jun.nakajima@intel.com>
Subject: Re: [PATCH v5 4/4] x86/ioreq server: Reset outstanding p2m_ioreq_server entries when an ioreq server unmaps.
Date: Tue, 09 Aug 2016 03:45:53 -0600	[thread overview]
Message-ID: <57A9C2710200007800104216@prv-mh.provo.novell.com> (raw)
In-Reply-To: <57A9A171.7070401@linux.intel.com>

>>> On 09.08.16 at 11:25, <yu.c.zhang@linux.intel.com> wrote:

> 
> On 8/9/2016 4:13 PM, Jan Beulich wrote:
>>>>> On 09.08.16 at 09:39, <yu.c.zhang@linux.intel.com> wrote:
>>> On 8/9/2016 12:29 AM, Jan Beulich wrote:
>>>>>>> On 12.07.16 at 11:02, <yu.c.zhang@linux.intel.com> wrote:
>>>>> @@ -5512,6 +5513,12 @@ static int hvmop_set_mem_type(
>>>>>            if ( rc )
>>>>>                goto out;
>>>>>    
>>>>> +        if ( t == p2m_ram_rw && memtype[a.hvmmem_type] == p2m_ioreq_server )
>>>>> +            p2m->ioreq.entry_count++;
>>>>> +
>>>>> +        if ( t == p2m_ioreq_server && memtype[a.hvmmem_type] == p2m_ram_rw )
>>>>> +            p2m->ioreq.entry_count--;
>>>>> +
>>>> These (and others below) happen, afaict, outside of any lock, which
>>>> can't be right.
>>> How about we make this entry_count an atomic_t and use atomic_inc/dec
>>> instead?
>> That's certainly one of the options, but beware of overflow.
> 
> Oh, thanks for the reminder, Jan. I just found that atomic_t is defined
> as "typedef struct { int counter; } atomic_t;",
> which does have overflow issues if entry_count is supposed to be a
> uint32 or uint64.
> 
> Now I have another thought: the entry_count is referenced in 3
> different scenarios:
> 1> the hvmop handlers - hvmop_set_mem_type() and
> hvmop_map_mem_type_to_ioreq_server(), which are triggered by the
> device model and will not be concurrent. And hvmop_set_mem_type()
> will permit the mem type change only when an ioreq server has
> already been mapped to this type.

You shouldn't rely on a well-behaved dm, and that's already
leaving aside the question of whether there couldn't even be
well-behaved use cases of parallel invocations of this op.
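
As a minimal sketch only (assuming the p2m_lock()/p2m_unlock() helpers
from mm-locks.h; the exact lock and its placement in hvmop_set_mem_type()
would need checking against the tree), the bookkeeping would then look
along these lines:

    /* Sketch: serialise the entry_count adjustment against parallel ops. */
    p2m_lock(p2m);
    if ( t == p2m_ram_rw && memtype[a.hvmmem_type] == p2m_ioreq_server )
        p2m->ioreq.entry_count++;
    else if ( t == p2m_ioreq_server && memtype[a.hvmmem_type] == p2m_ram_rw )
        p2m->ioreq.entry_count--;
    p2m_unlock(p2m);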

> 2> the misconfiguration handlers - resolve_misconfig() or do_recalc(),
> which are triggered by HVM VM exits, yet this will only happen after
> the ioreq server has already been unmapped. This means the accesses
> to the entry_count will not be concurrent with the above hvmop
> handlers.

This case may be fine, but not for (just) the named reason:
multiple misconfig invocations can happen at the same time, but
presumably your addition is inside the p2m-locked region (you'd
have to double-check that).
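
As a sketch only (assuming a p2m_locked_by_me() helper is available, as
used elsewhere in the mm code), the recalc-path decrement could carry an
assertion documenting exactly that expectation:

    /* Sketch: the recalc path is expected to already hold the p2m lock. */
    ASSERT(p2m_locked_by_me(p2m));
    if ( p2m->ioreq.entry_count )
        p2m->ioreq.entry_count--;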

> 3> routine hap_enable_log_dirty() - this can be triggered by the tool
> stack at any time, but it only performs a read access on this
> entry_count, so a read_atomic() would work.

A read access may be fine, but only if the value can't get incremented
in a racy way.
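
To illustrate the read side (sketch only; the -EBUSY error path in
hap_enable_log_dirty() is an assumption, not something taken from this
thread), and only valid if all writers are serialised as discussed above:

    /* Sketch: refuse enabling log-dirty while ioreq_server entries remain. */
    if ( read_atomic(&p2m->ioreq.entry_count) )
        return -EBUSY;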

Jan

