From: "Singh, Balbir" <sblbir@amazon.com>
To: "thomas.lendacky@amd.com" <thomas.lendacky@amd.com>,
	"tglx@linutronix.de" <tglx@linutronix.de>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Cc: "keescook@chromium.org" <keescook@chromium.org>,
	"tony.luck@intel.com" <tony.luck@intel.com>,
	"benh@kernel.crashing.org" <benh@kernel.crashing.org>,
	"jpoimboe@redhat.com" <jpoimboe@redhat.com>,
	"x86@kernel.org" <x86@kernel.org>,
	"dave.hansen@intel.com" <dave.hansen@intel.com>
Subject: Re: [PATCH v4 1/6] arch/x86/kvm: Refactor l1d flush lifecycle management
Date: Fri, 1 May 2020 03:48:36 +0000	[thread overview]
Message-ID: <c6a8fa86551357005a6421159ba628049eade129.camel@amazon.com> (raw)
In-Reply-To: <dc40ceb1e6ff29f90b2579deba5ad107fe1fe905.camel@amazon.com>

On Sat, 2020-04-25 at 11:49 +1000, Balbir Singh wrote:
> On Fri, 2020-04-24 at 13:59 -0500, Tom Lendacky wrote:
> > 
> > On 4/23/20 9:01 AM, Balbir Singh wrote:
> > > Split out the allocation and free routines to be used in a follow
> > > up set of patches (to reuse for L1D flushing).
> > > 
> > > Signed-off-by: Balbir Singh <sblbir@amazon.com>
> > > Reviewed-by: Kees Cook <keescook@chromium.org>
> > > ---
> > >   arch/x86/include/asm/cacheflush.h |  3 +++
> > >   arch/x86/kernel/Makefile          |  1 +
> > >   arch/x86/kernel/l1d_flush.c       | 36 +++++++++++++++++++++++++++++++
> > >   arch/x86/kvm/vmx/vmx.c            | 25 +++------------------
> > >   4 files changed, 43 insertions(+), 22 deletions(-)
> > >   create mode 100644 arch/x86/kernel/l1d_flush.c
> > > 
> > > diff --git a/arch/x86/include/asm/cacheflush.h
> > > b/arch/x86/include/asm/cacheflush.h
> > > index 63feaf2a5f93..bac56fcd9790 100644
> > > --- a/arch/x86/include/asm/cacheflush.h
> > > +++ b/arch/x86/include/asm/cacheflush.h
> > > @@ -6,6 +6,9 @@
> > >   #include <asm-generic/cacheflush.h>
> > >   #include <asm/special_insns.h>
> > > 
> > > +#define L1D_CACHE_ORDER 4
> > 
> > Since this is becoming a generic function now, shouldn't this value be
> > based on the actual L1D cache size? Is this value based on a 32KB data
> > cache and the idea is to write twice the size of the cache to be sure that
> > every entry has been replaced - with the second 32KB to catch the odd line
> > that might not have been pulled in?
> > 
> 
> Currently the only users are VMX L1TF and the optional prctl(). It
> should be based on the actual L1D cache size; I checked, and the
> largest L1D cache size across the various x86 parts is 64K, so there
> are three options here:
> 
> 1. Refactor the code: we would need to save the L1D cache size and use
> cpu_dev callbacks for the L1D flush.
> 2. Make the current code depend on the L1D_FLUSH MSR and enable it only
> when that feature is available. There would be no software fallback.
> Then follow it up with #1.
> 3. Keep the code as is, on the assumption that L1D <= 64K across the
> current platforms, and do #1 in a follow-up (since the prctl is
> optional and the only other user is the VMX code).
> 
> Thanks for the review,
> Balbir Singh.
> 

Tom,

I have the following changes that I think will suffice for now (they are
not properly formatted, but you get the idea):

diff --git a/arch/x86/kernel/l1d_flush.c b/arch/x86/kernel/l1d_flush.c
index a754b6c288a9..7fec0cc8f85c 100644
--- a/arch/x86/kernel/l1d_flush.c
+++ b/arch/x86/kernel/l1d_flush.c
@@ -92,6 +92,9 @@ int l1d_flush_init_once(void)
 {
        int ret = 0;
 
+       if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
+               return -ENOTSUPP;
+
        if (static_cpu_has(X86_FEATURE_FLUSH_L1D) || l1d_flush_pages)
                return ret;


Does that satisfy your comments about patch 1/6 and 2/6 in the series?
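
For reference, the software fallback that option #2 would drop boils down
to touching every cache line of a buffer twice the L1D size, so resident
victim data is displaced. A hypothetical userspace sketch (the real code
populates l1d_flush_pages once and reuses it; buffer size and line size
here assume a 32K L1D with 64-byte lines):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define FLUSH_BYTES (64 * 1024)   /* 2 x a 32K L1D, per L1D_CACHE_ORDER 4 */
#define CACHELINE   64

/*
 * Hypothetical sketch of the software fallback used when the
 * X86_FEATURE_FLUSH_L1D MSR is not available: write, then read, one
 * byte per cache line across a buffer twice the L1D size.
 */
static void l1d_flush_sw(volatile uint8_t *buf)
{
	size_t i;

	/* First pass: writes allocate the lines ... */
	for (i = 0; i < FLUSH_BYTES; i += CACHELINE)
		buf[i] = (uint8_t)i;
	/* ... second pass: reads make sure every line was pulled in. */
	for (i = 0; i < FLUSH_BYTES; i += CACHELINE)
		(void)buf[i];
}
```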

Balbir Singh.

Thread overview: 16+ messages
2020-04-23 14:01 [PATCH v4 0/6] Optionally flush L1D on context switch Balbir Singh
2020-04-23 14:01 ` [PATCH v4 1/6] arch/x86/kvm: Refactor l1d flush lifecycle management Balbir Singh
2020-04-24 18:59   ` Tom Lendacky
2020-04-25  1:49     ` Singh, Balbir
2020-05-01  3:48       ` Singh, Balbir [this message]
2020-05-01 14:16         ` Tom Lendacky
2020-04-23 14:01 ` [PATCH v4 2/6] arch/x86/kvm: Refactor tlbflush and l1d flush Balbir Singh
2020-04-24 19:04   ` Tom Lendacky
2020-04-25  2:00     ` Singh, Balbir
2020-04-23 14:01 ` [PATCH v4 3/6] arch/x86/mm: Refactor cond_ibpb() to support other use cases Balbir Singh
2020-04-23 14:01 ` [PATCH v4 4/6] arch/x86/kvm: Refactor L1D flushing Balbir Singh
2020-04-23 14:01 ` [PATCH v4 5/6] Optionally flush L1D on context switch Balbir Singh
2020-04-23 19:19   ` Kees Cook
2020-04-24  9:56     ` Singh, Balbir
2020-04-23 14:01 ` [PATCH v4 6/6] Documentation: Add L1D flushing Documentation Balbir Singh
2020-04-23 19:20   ` Kees Cook
