Subject: Re: [PATCH v2 3/6] kvm: x86: Emulate split-lock access as a write
To: Sean Christopherson
Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andy Lutomirski, x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Laight
References: <20200203151608.28053-1-xiaoyao.li@intel.com> <20200203151608.28053-4-xiaoyao.li@intel.com> <20200203205426.GF19638@linux.intel.com>
From: Xiaoyao Li
Date: Tue, 4 Feb 2020 10:55:40 +0800
In-Reply-To: <20200203205426.GF19638@linux.intel.com>

On 2/4/2020 4:54 AM, Sean Christopherson wrote:
> On Mon, Feb 03, 2020 at 11:16:05PM +0800, Xiaoyao Li wrote:
>> If split lock detection is enabled (warn/fatal), the #AC handler calls
>> die() when a split lock happens in the kernel.
>>
>> A sane guest should never trigger emulation on a split-lock access, but
>> it cannot prevent a malicious guest from doing so. So just emulate the
>> access as a write if it's a split-lock access, to avoid a malicious
>> guest polluting the kernel log.
>>
>> A more detailed analysis can be found at:
>> https://lkml.kernel.org/r/20200131200134.GD18946@linux.intel.com
>>
>> Signed-off-by: Xiaoyao Li
>> ---
>>  arch/x86/kvm/x86.c | 11 +++++++++++
>>  1 file changed, 11 insertions(+)
>>
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index 2d3be7f3ad67..821b7404c0fd 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -5847,6 +5847,13 @@ static int emulator_write_emulated(struct x86_emulate_ctxt *ctxt,
>>  	(cmpxchg64((u64 *)(ptr), *(u64 *)(old), *(u64 *)(new)) == *(u64 *)(old))
>>  #endif
>>
>> +static inline bool across_cache_line_access(gpa_t gpa, unsigned int bytes)
>
> s/across/split so as not to introduce another name.
>
>> +{
>> +	unsigned int cl_size = cache_line_size();
>> +
>> +	return (gpa & (cl_size - 1)) + bytes > cl_size;
>
> I'd prefer to use the same logic as the page-split to avoid having to
> reason about the correctness of two different algorithms.
>
>> +}
>> +
>>  static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt,
>>  				     unsigned long addr,
>>  				     const void *old,
>> @@ -5873,6 +5880,10 @@ static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt,
>>  	if (((gpa + bytes - 1) & PAGE_MASK) != (gpa & PAGE_MASK))
>>  		goto emul_write;
>>
>> +	if (get_split_lock_detect_state() != sld_off &&
>> +	    across_cache_line_access(gpa, bytes))
>> +		goto emul_write;
>
> As an alternative to the above, the page/line splits can be handled in a
> single check, e.g.
>
> 	page_line_mask = PAGE_MASK;
> 	if (is_split_lock_detect_enabled())
> 		page_line_mask = ~(cache_line_size() - 1);
> 	if (((gpa + bytes - 1) & page_line_mask) != (gpa & page_line_mask))
> 		goto emul_write;

That's better; I'll use your suggestion. Thanks!

>> +
>>  	if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), &map))
>>  		goto emul_write;
>>
>> --
>> 2.23.0
>>