From: Jason Andryuk <jandryuk@gmail.com>
To: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
Jan Beulich <jbeulich@suse.com>,
Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2] x86/hvmloader: round up memory BAR size to 4K
Date: Mon, 20 Jan 2020 15:48:35 -0500
Message-ID: <CAKf6xptc2QUW4yZ8mk7sj9viZjeXMBKsCbmCUuqXVnm+KZn6Yw@mail.gmail.com>
In-Reply-To: <20200120171840.GF11756@Air-de-Roger>
On Mon, Jan 20, 2020 at 12:18 PM Roger Pau Monné <roger.pau@citrix.com> wrote:
>
> On Mon, Jan 20, 2020 at 05:10:33PM +0100, Jan Beulich wrote:
> > On 17.01.2020 12:08, Roger Pau Monne wrote:
> > > When placing memory BARs with sizes smaller than 4K multiple memory
> > > BARs can end up mapped to the same guest physical address, and thus
> > > won't work correctly.
> >
> > Thinking about it again, aren't you fixing one possible case by
> > breaking the opposite one: What you fix is when the two distinct
> > BARs (of the same or different devices) map to distinct MFNs
> > (which would have required a single GFN to map to both of these
> > MFNs). But don't you, at the same time, break the case of two
> > BARs (perhaps, but not necessarily, of the same physical device)
> > mapping both to the same MFN, i.e. requiring to have two distinct
> > GFNs map to one MFN? (At least for the moment I can't see a way
> > for hvmloader to distinguish the two cases.)
>
> IMO we should force all BARs to be page-isolated by dom0 (since Xen
> doesn't have the knowledge of doing so), but I don't see the issue
> with having different gfns pointing to the same mfn. Is that a
> limitation of paging? I think you can map a grant multiple times into
> different gfns, which achieves the same AFAICT.
BARs sharing an MFN would be a problem, since the second BAR would sit
at a non-zero offset into the page, while the guest's view of that BAR
would be at offset 0 of its page.
But I agree with Roger that we basically need page alignment for all
pass-through memory BARs. On my limited test hardware, all the PCI
memory BARs are already page-aligned in dom0, so it was only the guest
addresses that needed alignment.
Regards,
Jason
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
Thread overview: 16+ messages
2020-01-17 11:08 [Xen-devel] [PATCH v2] x86/hvmloader: round up memory BAR size to 4K Roger Pau Monne
2020-01-17 16:05 ` Jason Andryuk
2020-01-20 16:10 ` Jan Beulich
2020-01-20 17:18 ` Roger Pau Monné
2020-01-20 20:48 ` Jason Andryuk [this message]
2020-01-21 9:18 ` Jan Beulich
2020-01-21 10:29 ` Roger Pau Monné
2020-01-21 10:43 ` Jan Beulich
2020-01-21 15:57 ` Roger Pau Monné
2020-01-21 16:15 ` Jan Beulich
2020-01-21 16:57 ` Roger Pau Monné
2020-01-22 10:27 ` Jan Beulich
2020-01-22 10:51 ` Roger Pau Monné
2020-01-22 14:04 ` Jason Andryuk
2020-01-22 14:15 ` Jan Beulich
2020-01-22 14:12 ` Jan Beulich