xen-devel.lists.xenproject.org archive mirror
From: Jan Beulich <jbeulich@suse.com>
To: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org,
	Jason Andryuk <jandryuk@gmail.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2] x86/hvmloader: round up memory BAR size to 4K
Date: Tue, 21 Jan 2020 11:43:58 +0100	[thread overview]
Message-ID: <624c69b6-9a9d-7719-fdec-1c6e939a9f65@suse.com> (raw)
In-Reply-To: <20200121102941.GH11756@Air-de-Roger>

On 21.01.2020 11:29, Roger Pau Monné wrote:
> So I'm not sure how to progress with this patch: are we fine with
> those limitations?

I'm afraid this depends on ...

> As I said, Xen hasn't got enough knowledge to correctly isolate the
> BARs, and hence we have to rely on dom0 to DTRT (do the right thing).
> We could add checks in Xen to make sure no BARs share a page, but
> that means a non-trivial amount of scanning and sizing of every
> possible BAR on the system.

... whether Dom0 actually "DTRT", which in turn is complicated by there
not being a specific Dom0 kernel incarnation to check against. Perhaps
rather than having Xen check _all_ BARs, Xen or the tool stack could
check BARs of devices about to be handed to a guest? Perhaps we need to
pass auxiliary information to hvmloader to be able to judge whether a
BAR shares a page with another one? Perhaps there also needs to be a
way for hvmloader to know what offset into a page has to be maintained
for any particular BAR, as follows from Jason's recent reply?
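
Purely for illustration (none of the names below exist in the tree,
and PAGE_SHIFT / PAGE_SIZE merely stand in for the usual definitions),
the check and the offset in question would amount to something like:

/* Illustrative sketch only -- not existing Xen/hvmloader code.
 * Two sized BARs share a (4KiB) page iff their page ranges overlap;
 * the sub-page offset is what would have to be preserved when
 * placing a BAR in guest address space. */
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

static bool bars_share_page(uint64_t base1, uint64_t size1,
                            uint64_t base2, uint64_t size2)
{
    uint64_t first1 = base1 >> PAGE_SHIFT;
    uint64_t last1  = (base1 + size1 - 1) >> PAGE_SHIFT;
    uint64_t first2 = base2 >> PAGE_SHIFT;
    uint64_t last2  = (base2 + size2 - 1) >> PAGE_SHIFT;

    /* Inclusive page-frame ranges overlap? */
    return first1 <= last2 && first2 <= last1;
}

/* Offset into its page that a BAR has on the host, which hvmloader
 * would need to know (and maintain) if the host address isn't
 * page-aligned. */
static uint64_t bar_page_offset(uint64_t host_base)
{
    return host_base & (PAGE_SIZE - 1);
}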

> IMO this patch is an improvement over the current state, and we can
> always do further improvements afterwards.

As said, to me it looks as if it were breaking one case in order to
fix another. If I'm not wrong about this, I don't see how the patch
is an improvement.

Jan


Thread overview: 16+ messages
2020-01-17 11:08 [Xen-devel] [PATCH v2] x86/hvmloader: round up memory BAR size to 4K Roger Pau Monne
2020-01-17 16:05 ` Jason Andryuk
2020-01-20 16:10 ` Jan Beulich
2020-01-20 17:18   ` Roger Pau Monné
2020-01-20 20:48     ` Jason Andryuk
2020-01-21  9:18     ` Jan Beulich
2020-01-21 10:29       ` Roger Pau Monné
2020-01-21 10:43         ` Jan Beulich [this message]
2020-01-21 15:57           ` Roger Pau Monné
2020-01-21 16:15             ` Jan Beulich
2020-01-21 16:57               ` Roger Pau Monné
2020-01-22 10:27                 ` Jan Beulich
2020-01-22 10:51                   ` Roger Pau Monné
2020-01-22 14:04                     ` Jason Andryuk
2020-01-22 14:15                       ` Jan Beulich
2020-01-22 14:12                     ` Jan Beulich
