From: Xiaoyao Li <xiaoyao.li@intel.com>
To: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	hpa@zytor.com, Paolo Bonzini <pbonzini@redhat.com>,
	Andy Lutomirski <luto@kernel.org>,
	tony.luck@intel.com, peterz@infradead.org, fenghua.yu@intel.com,
	x86@kernel.org, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 2/8] x86/split_lock: Ensure X86_FEATURE_SPLIT_LOCK_DETECT means the existence of feature
Date: Wed, 4 Mar 2020 09:49:14 +0800	[thread overview]
Message-ID: <ab2a83e1-8ae4-b471-1968-7f6baaac602e@intel.com> (raw)
In-Reply-To: <20200303194134.GW1439@linux.intel.com>

On 3/4/2020 3:41 AM, Sean Christopherson wrote:
> On Tue, Mar 03, 2020 at 10:55:24AM -0800, Sean Christopherson wrote:
>> On Thu, Feb 06, 2020 at 03:04:06PM +0800, Xiaoyao Li wrote:
>>> When flag X86_FEATURE_SPLIT_LOCK_DETECT is set, it should ensure the
>>> existence of MSR_TEST_CTRL and MSR_TEST_CTRL.SPLIT_LOCK_DETECT bit.
>>
>> The changelog confused me a bit.  "When flag X86_FEATURE_SPLIT_LOCK_DETECT
>> is set" makes it sound like the logic is being applied after the feature
>> bit is set.  Maybe something like:
>>
>> ```
>> Verify MSR_TEST_CTRL.SPLIT_LOCK_DETECT can be toggled via WRMSR prior to
>> setting the SPLIT_LOCK_DETECT feature bit so that runtime consumers,
>> e.g. KVM, don't need to worry about WRMSR failure.
>> ```
>>
>>> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
>>> ---
>>>   arch/x86/kernel/cpu/intel.c | 41 +++++++++++++++++++++----------------
>>>   1 file changed, 23 insertions(+), 18 deletions(-)
>>>
>>> diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
>>> index 2b3874a96bd4..49535ed81c22 100644
>>> --- a/arch/x86/kernel/cpu/intel.c
>>> +++ b/arch/x86/kernel/cpu/intel.c
>>> @@ -702,7 +702,8 @@ static void init_intel(struct cpuinfo_x86 *c)
>>>   	if (tsx_ctrl_state == TSX_CTRL_DISABLE)
>>>   		tsx_disable();
>>>   
>>> -	split_lock_init();
>>> +	if (boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT))
>>> +		split_lock_init();
>>>   }
>>>   
>>>   #ifdef CONFIG_X86_32
>>> @@ -986,9 +987,26 @@ static inline bool match_option(const char *arg, int arglen, const char *opt)
>>>   
>>>   static void __init split_lock_setup(void)
>>>   {
>>> +	u64 test_ctrl_val;
>>>   	char arg[20];
>>>   	int i, ret;
>>> +	/*
>>> +	 * Use the "safe" versions of rdmsr/wrmsr here to ensure MSR_TEST_CTRL
>>> +	 * and MSR_TEST_CTRL.SPLIT_LOCK_DETECT bit do exist. Because there may
>>> +	 * be glitches in virtualization that leave a guest with an incorrect
>>> +	 * view of real h/w capabilities.
>>> +	 */
>>> +	if (rdmsrl_safe(MSR_TEST_CTRL, &test_ctrl_val))
>>> +		return;
>>> +
>>> +	if (wrmsrl_safe(MSR_TEST_CTRL,
>>> +			test_ctrl_val | MSR_TEST_CTRL_SPLIT_LOCK_DETECT))
>>> +		return;
>>> +
>>> +	if (wrmsrl_safe(MSR_TEST_CTRL, test_ctrl_val))
>>> +		return;
>>
>> Probing the MSR should be skipped if SLD is disabled in sld_options, i.e.
>> move this code (and setup_force_cpu_cap() etc...) down below the
>> match_option() logic.  The above would temporarily enable SLD even if the
>> admin has explicitly disabled it, e.g. makes the kernel param useless for
>> turning off the feature due to bugs.
> 
> Hmm, but this prevents KVM from exposing SLD to a guest when it's off in
> the kernel, which would be a useful debug/testing scenario.
> 
> Maybe add another SLD state to forcefully disable SLD?  That way the admin
> can turn off SLD in the host kernel but still allow KVM to expose it to its
> guests.  E.g.

I don't think we need to do this.

IMO, this is a bug in split_lock_init(), which assumes the initial value 
of MSR_TEST_CTRL is zero, or at least that its SPLIT_LOCK_DETECT bit is zero.
That's a problem, because it's possible that the BIOS has already set this bit.

The job of split_lock_setup() here is to check whether the feature really 
exists, so it probes MSR_TEST_CTRL and the MSR_TEST_CTRL_SPLIT_LOCK_DETECT 
bit. Only if both exist does it call 
setup_force_cpu_cap(X86_FEATURE_SPLIT_LOCK_DETECT) to indicate that the 
feature is present.
Only when the feature exists is there any need to parse the 
split_lock_detect option on the command line.

In split_lock_init(), we should explicitly clear the 
MSR_TEST_CTRL_SPLIT_LOCK_DETECT bit if sld_state is sld_off.
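
Something like this (untested sketch, just to illustrate; it reuses the
__sld_msr_set() helper from this patch, and relies on split_lock_init()
only being called when X86_FEATURE_SPLIT_LOCK_DETECT is set):

static void split_lock_init(void)
{
	/*
	 * Don't assume the firmware left MSR_TEST_CTRL at its power-on
	 * value: explicitly clear SPLIT_LOCK_DETECT when sld_state is
	 * sld_off, and set it otherwise.
	 */
	__sld_msr_set(sld_state != sld_off);
}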

> static const struct {
>          const char                      *option;
>          enum split_lock_detect_state    state;
> } sld_options[] __initconst = {
> 	{ "disable",	sld_disable },
>          { "off",        sld_off     },
>          { "warn",       sld_warn    },
>          { "fatal",      sld_fatal   },
> };
> 
> 
> Then the new setup() becomes:
> 
> static void __init split_lock_setup(void)
> {
>          u64 test_ctrl_val;
>          char arg[20];
>          int i, ret;
> 
>          sld_state = sld_warn;
> 
>          ret = cmdline_find_option(boot_command_line, "split_lock_detect",
>                                    arg, sizeof(arg));
>          if (ret >= 0) {
>                  for (i = 0; i < ARRAY_SIZE(sld_options); i++) {
>                          if (match_option(arg, ret, sld_options[i].option)) {
>                                  sld_state = sld_options[i].state;
>                                  break;
>                          }
>                  }
>          }
> 
>          if (sld_state == sld_disable)
>                  goto log_sld;
> 
>          /*
>           * Use the "safe" versions of rdmsr/wrmsr here to ensure MSR_TEST_CTRL
>           * and MSR_TEST_CTRL.SPLIT_LOCK_DETECT bit do exist. Because there may
>           * be glitches in virtualization that leave a guest with an incorrect
>           * view of real h/w capabilities.
>           */
>          if (rdmsrl_safe(MSR_TEST_CTRL, &test_ctrl_val))
>                  goto sld_broken;
> 
>          if (wrmsrl_safe(MSR_TEST_CTRL,
>                          test_ctrl_val | MSR_TEST_CTRL_SPLIT_LOCK_DETECT))
>                  goto sld_broken;
> 
>          if (wrmsrl_safe(MSR_TEST_CTRL, test_ctrl_val))
>                  goto sld_broken;
> 
>          setup_force_cpu_cap(X86_FEATURE_SPLIT_LOCK_DETECT);
> 
> log_sld:
>          switch (sld_state) {
>          case sld_disable:
>                  pr_info("split_lock detection disabled\n");
>                  break;
>          case sld_off:
>                  pr_info("split_lock detection off in kernel\n");
>                  break;
>          case sld_warn:
>                  pr_info("warning about user-space split_locks\n");
>                  break;
>          case sld_fatal:
>                  pr_info("sending SIGBUS on user-space split_locks\n");
>                  break;
>          }
> 
>          return;
> 
> sld_broken:
>          sld_state = sld_disable;
>          pr_err("split_lock detection disabled, MSR access faulted\n");
> }
> 
>> And with that, IMO failing any of RDMSR/WRMSR here warrants a pr_err().
>> The CPU says it supports split lock and the admin hasn't explicitly turned
>> it off, so failure to enable should be logged.
>>
>>> +
>>>   	setup_force_cpu_cap(X86_FEATURE_SPLIT_LOCK_DETECT);
>>>   	sld_state = sld_warn;
>>>   
>>> @@ -1022,24 +1040,19 @@ static void __init split_lock_setup(void)
>>>    * Locking is not required at the moment because only bit 29 of this
>>>    * MSR is implemented and locking would not prevent that the operation
>>>    * of one thread is immediately undone by the sibling thread.
>>> - * Use the "safe" versions of rdmsr/wrmsr here because although code
>>> - * checks CPUID and MSR bits to make sure the TEST_CTRL MSR should
>>> - * exist, there may be glitches in virtualization that leave a guest
>>> - * with an incorrect view of real h/w capabilities.
>>>    */
>>> -static bool __sld_msr_set(bool on)
>>> +static void __sld_msr_set(bool on)
>>>   {
>>>   	u64 test_ctrl_val;
>>>   
>>> -	if (rdmsrl_safe(MSR_TEST_CTRL, &test_ctrl_val))
>>> -		return false;
>>> +	rdmsrl(MSR_TEST_CTRL, test_ctrl_val);
>>>   
>>>   	if (on)
>>>   		test_ctrl_val |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
>>>   	else
>>>   		test_ctrl_val &= ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
>>>   
>>> -	return !wrmsrl_safe(MSR_TEST_CTRL, test_ctrl_val);
>>> +	wrmsrl(MSR_TEST_CTRL, test_ctrl_val);
>>>   }
>>>   
>>>   static void split_lock_init(void)
>>> @@ -1047,15 +1060,7 @@ static void split_lock_init(void)
>>>   	if (sld_state == sld_off)
>>>   		return;
>>>   
>>> -	if (__sld_msr_set(true))
>>> -		return;
>>> -
>>> -	/*
>>> -	 * If this is anything other than the boot-cpu, you've done
>>> -	 * funny things and you get to keep whatever pieces.
>>> -	 */
>>> -	pr_warn("MSR fail -- disabled\n");
>>> -	sld_state = sld_off;
>>> +	__sld_msr_set(true);
>>>   }
>>>   
>>>   bool handle_user_split_lock(unsigned long ip)
>>> -- 
>>> 2.23.0
>>>


Thread overview: 28+ messages
2020-02-06  7:04 [PATCH v3 0/8] kvm/split_lock: Add feature split lock detection support in kvm Xiaoyao Li
2020-02-06  7:04 ` [PATCH v3 1/8] x86/split_lock: Export handle_user_split_lock() Xiaoyao Li
2020-03-03 18:42   ` Sean Christopherson
2020-02-06  7:04 ` [PATCH v3 2/8] x86/split_lock: Ensure X86_FEATURE_SPLIT_LOCK_DETECT means the existence of feature Xiaoyao Li
2020-03-03 18:55   ` Sean Christopherson
2020-03-03 19:41     ` Sean Christopherson
2020-03-04  1:49       ` Xiaoyao Li [this message]
2020-03-05 16:23         ` Sean Christopherson
2020-03-06  2:15           ` Xiaoyao Li
2020-03-04  2:20     ` Xiaoyao Li
2020-02-06  7:04 ` [PATCH v3 3/8] x86/split_lock: Cache the value of MSR_TEST_CTRL in percpu data Xiaoyao Li
2020-02-06 20:23   ` Arvind Sankar
2020-02-07  4:18     ` Xiaoyao Li
2020-03-03 19:18   ` Sean Christopherson
2020-03-05  6:48     ` Xiaoyao Li
2020-02-06  7:04 ` [PATCH v3 4/8] x86/split_lock: Add and export split_lock_detect_enabled() and split_lock_detect_fatal() Xiaoyao Li
2020-03-03 18:59   ` Sean Christopherson
2020-02-06  7:04 ` [PATCH v3 5/8] kvm: x86: Emulate split-lock access as a write Xiaoyao Li
2020-02-06  7:04 ` [PATCH v3 6/8] kvm: vmx: Extend VMX's #AC interceptor to handle split lock #AC happens in guest Xiaoyao Li
2020-03-03 19:08   ` Sean Christopherson
2020-02-06  7:04 ` [PATCH v3 7/8] kvm: x86: Emulate MSR IA32_CORE_CAPABILITIES Xiaoyao Li
2020-02-06  7:04 ` [PATCH v3 8/8] x86: vmx: virtualize split lock detection Xiaoyao Li
2020-02-07 18:27   ` Arvind Sankar
2020-02-08  4:51     ` Xiaoyao Li
2020-03-03 19:30   ` Sean Christopherson
2020-03-05 14:16     ` Xiaoyao Li
2020-03-05 16:49       ` Sean Christopherson
2020-03-06  0:29         ` Xiaoyao Li
