Date: Thu, 23 Jul 2020 09:18:18 -0700
From: Fenghua Yu
To: Andy Lutomirski
Cc: Weiny Ira, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Peter Zijlstra, Dave Hansen, X86
ML, Dan Williams, Vishal Verma, Andrew Morton, "open list:DOCUMENTATION", LKML, linux-nvdimm, Linux FS Devel, Linux-MM, "open list:KERNEL SELFTEST FRAMEWORK"
Subject: Re: [PATCH RFC V2 17/17] x86/entry: Preserve PKRS MSR across exceptions
Message-ID: <20200723161818.GA77434@romley-ivt3.sc.intel.com>
References: <20200717072056.73134-1-ira.weiny@intel.com> <20200717072056.73134-18-ira.weiny@intel.com>

On Wed, Jul 22, 2020 at 09:21:43AM -0700, Andy Lutomirski wrote:
> On Fri, Jul 17, 2020 at 12:21 AM wrote:
> >
> > From: Ira Weiny
> >
> > The PKRS MSR is not managed by XSAVE. It is already preserved through a
> > context switch but this support leaves exception handling code open to
> > memory accesses which the interrupted process has allowed.
> >
> > Close this hole by preserve the current task's PKRS MSR, reset the PKRS
> > MSR value on exception entry, and then restore the state on exception
> > exit.
>
> Should this live in pt_regs?

The PKRS MSR has already been preserved in thread_info during kernel entry, so we don't need to preserve it in a second place (i.e. idtentry_state). To avoid confusion, I think we should change the above commit message to:

"Exception handling code is open to memory accesses which the interrupted process has allowed. Close this hole by resetting the PKRS MSR value on exception entry and restoring the state on exception exit. The MSR was preserved in thread_info."

The patch then needs to be changed accordingly, I think:

1. There is no need to define "pks" in struct idtentry_state, because the MSR is already preserved in thread_info.

2. idt_save_pkrs() could be renamed to idt_reset_pkrs(), since it only needs to reset the MSR (there is no need to save it). "state.pkrs" can be replaced by "current->thread_info.pkrs" now.

3.
The "pkrs_ref" could be defined in thread_info as well, but I'm not sure that is better than defining it in idtentry_state.

Thanks.

-Fenghua