From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Wei Liu" <wl@xen.org>, "Roger Pau Monné" <roger.pau@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Ping: [PATCH] x86/AMD: also determine L3 cache size
Date: Fri, 7 May 2021 10:25:43 +0200	[thread overview]
Message-ID: <ebfb246f-ace8-f0eb-1860-70f74d894b4c@suse.com> (raw)
In-Reply-To: <487bed52-bd1d-ceee-a85a-9bed9aad4712@suse.com>

On 29.04.2021 11:21, Jan Beulich wrote:
> On 16.04.2021 16:21, Andrew Cooper wrote:
>> On 16/04/2021 14:20, Jan Beulich wrote:
>>> For Intel CPUs we record the L3 cache size, hence we should also do
>>> so for AMD and the like.
>>>
>>> While making these additions, also make sure (throughout the function)
>>> that we don't needlessly overwrite prior values when the new value to be
>>> stored is zero.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> ---
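
Illustration, not part of the patch: on AMD, CPUID leaf 0x80000006 reports
the L3 size in EDX[31:18], in 512 KiB units. A minimal sketch of the idea
described above; the cpuinfo-style names (c->extended_cpuid_level,
c->x86_cache_size) are assumptions for the example, not a claim about the
actual code:

    /*
     * Sketch only: read the L3 size.  On AMD, EDX[31:18] of CPUID leaf
     * 0x80000006 is the L3 size in 512 KiB units.  Store it only when
     * non-zero, so a previously recorded value isn't clobbered.
     */
    if ( c->extended_cpuid_level >= 0x80000006 )
    {
        unsigned int eax, ebx, ecx, edx;

        cpuid(0x80000006, &eax, &ebx, &ecx, &edx);
        if ( edx >> 18 )
            c->x86_cache_size = (edx >> 18) * 512; /* in KiB */
    }
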
>>> I have to admit though that I'm not convinced the sole real use of the
>>> field (in flush_area_local()) is a good one - flushing an entire L3's
>>> worth of lines via CLFLUSH may not be more efficient than using WBINVD.
>>> But I didn't measure it (yet).
>>
>> WBINVD always needs a broadcast IPI to work correctly.
>>
>> CLFLUSH and friends let you do this from a single CPU, using cache
>> coherency to DTRT with the line, wherever it is.
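
To illustrate the point about coherency: a single CPU can walk the region
one cache line at a time and rely on the hardware to evict each line from
whichever cache currently holds it, so no IPI is needed. A rough, generic
sketch, not Xen's actual flush code; clflush_size stands in for the CPU's
reported CLFLUSH granularity:

    #include <stddef.h>

    static void flush_range(const void *va, size_t bytes,
                            unsigned int clflush_size)
    {
        const char *p = va, *end = p + bytes;

        /* Flush one line at a time; coherency handles remote copies. */
        for ( ; p < end; p += clflush_size )
            asm volatile ( "clflush %0" :: "m" (*p) );

        /* CLFLUSH is only guaranteed to be ordered by MFENCE. */
        asm volatile ( "mfence" ::: "memory" );
    }
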
>>
>>
>> Looking at that logic in flush_area_local(), I don't see how it can be
>> correct.  The WBINVD path is a decomposition inside the IPI, but in the
>> higher level helpers, I don't see how the "area too big, convert to
>> WBINVD" can be safe.
>>
>> All users of FLUSH_CACHE are flush_all(), except two PCI
>> Passthrough-restricted cases. MMUEXT_FLUSH_CACHE_GLOBAL looks to be
>> safe, while vmx_do_resume() has very dubious reasoning, and is, I think,
>> dead code, because I'm not aware of a VT-x capable CPU without WBINVD-exiting.
> 
> Besides my prior question on your reply, may I also ask what all of
> this means for the patch itself? After all, so far you've replied only
> to the post-commit-message remark.

As with the other patch I've just pinged again: unless I hear back on the
patch itself by then, I intend to commit this the week after next, if need
be without any acks.

Jan


Thread overview: 5+ messages
2021-04-16 13:20 [PATCH] x86/AMD: also determine L3 cache size Jan Beulich
2021-04-16 14:21 ` Andrew Cooper
2021-04-16 14:27   ` Jan Beulich
2021-04-29  9:21   ` Jan Beulich
2021-05-07  8:25     ` Jan Beulich [this message]
