From: Jan Beulich
To: Andrew Cooper
Cc: Xen-devel, Wei Liu, Roger Pau Monné
Subject: Re: [PATCH] x86/boot: Fix load_system_tables() to be NMI/#MC-safe
Date: Wed, 27 May 2020 15:19:22 +0200
Message-ID: <50f66504-ab7b-2f3e-1695-003ad69ae37a@suse.com>
In-Reply-To: <20200527130607.32069-1-andrew.cooper3@citrix.com>

On 27.05.2020 15:06, Andrew Cooper wrote:
> @@ -720,30 +721,26 @@ void load_system_tables(void)
>          .limit = (IDT_ENTRIES * sizeof(idt_entry_t)) - 1,
>      };
> 
> -    *tss = (struct tss64){
> -        /* Main stack for interrupts/exceptions. */
> -        .rsp0 = stack_bottom,
> -
> -        /* Ring 1 and 2 stacks poisoned. */
> -        .rsp1 = 0x8600111111111111ul,
> -        .rsp2 = 0x8600111111111111ul,
> -
> -        /*
> -         * MCE, NMI and Double Fault handlers get their own stacks.
> -         * All others poisoned.
> -         */
> -        .ist = {
> -            [IST_MCE - 1] = stack_top + IST_MCE * PAGE_SIZE,
> -            [IST_DF - 1] = stack_top + IST_DF * PAGE_SIZE,
> -            [IST_NMI - 1] = stack_top + IST_NMI * PAGE_SIZE,
> -            [IST_DB - 1] = stack_top + IST_DB * PAGE_SIZE,
> -
> -            [IST_MAX ... ARRAY_SIZE(tss->ist) - 1] =
> -                0x8600111111111111ul,
> -        },
> -
> -        .bitmap = IOBMP_INVALID_OFFSET,
> -    };
> +    /*
> +     * Set up the TSS. Warning - may be live, and the NMI/#MC must remain
> +     * valid on every instruction boundary. (Note: these are all
> +     * semantically ACCESS_ONCE() due to tss's volatile qualifier.)
> +     *
> +     * rsp0 refers to the primary stack. #MC, #DF, NMI and #DB handlers
> +     * each get their own stacks. No IO Bitmap.
> +     */
> +    tss->rsp0 = stack_bottom;
> +    tss->ist[IST_MCE - 1] = stack_top + IST_MCE * PAGE_SIZE;
> +    tss->ist[IST_DF - 1] = stack_top + IST_DF * PAGE_SIZE;
> +    tss->ist[IST_NMI - 1] = stack_top + IST_NMI * PAGE_SIZE;
> +    tss->ist[IST_DB - 1] = stack_top + IST_DB * PAGE_SIZE;
> +    tss->bitmap = IOBMP_INVALID_OFFSET;
> +
> +    /* All other stack pointers poisioned. */
> +    for ( i = IST_MAX; i < ARRAY_SIZE(tss->ist); ++i )
> +        tss->ist[i] = 0x8600111111111111ul;
> +    tss->rsp1 = 0x8600111111111111ul;
> +    tss->rsp2 = 0x8600111111111111ul;

ACCESS_ONCE() unfortunately only has one of the two needed effects: it
guarantees that each memory location gets accessed exactly once (which I
assume can also be had with just the volatile addition, but without moving
away from using an initializer), but it does not guarantee single-insn
accesses. I consider this particularly relevant here because all of the
64-bit fields are misaligned. By doing it like you do, we're setting
ourselves up to have to re-do this yet again in a couple of years' time
(presumably using write_atomic() instead then).

Nevertheless it is a clear improvement, so if you want to leave it like
this:

Reviewed-by: Jan Beulich

Jan
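A minimal sketch of the distinction drawn above, assuming a hypothetical
demo_tss layout and hypothetical helper names rather than Xen's actual
tss64 or write_atomic(): a volatile store is performed exactly once, but
the compiler may legally split the write into several instructions when
the 64-bit field is misaligned, whereas the inline-asm variant is pinned
to a single 64-bit MOV, which an NMI/#MC taken on an instruction boundary
on the same CPU can never observe half-written.

#include <stdint.h>

/* Hypothetical layout: a 64-bit field at offset 4, i.e. misaligned. */
struct demo_tss {
    uint32_t rsvd;
    uint64_t rsp0;
} __attribute__((__packed__));

/*
 * Volatile store: the location is accessed exactly once, but the
 * compiler may emit more than one instruction for the misaligned
 * 64-bit write.
 */
static inline void demo_write_volatile(volatile struct demo_tss *t,
                                       uint64_t val)
{
    t->rsp0 = val;
}

/*
 * Forced single-instruction store: exactly one 64-bit MOV reaches the
 * (possibly misaligned) location, so no interrupt or exception taken
 * on an instruction boundary can see a partial update.
 */
static inline void demo_write_single_mov(volatile struct demo_tss *t,
                                         uint64_t val)
{
    asm volatile ( "movq %[v], %[d]"
                   : [d] "=m" (t->rsp0)
                   : [v] "r" (val) );
}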