From: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
To: x86@kernel.org, platform-driver-x86@vger.kernel.org
Cc: dave.hansen@intel.com, sean.j.christopherson@intel.com,
	nhorman@redhat.com, npmccallum@redhat.com, serge.ayoun@intel.com,
	shay.katz-zamir@intel.com, linux-sgx@vger.kernel.org,
	andriy.shevchenko@linux.intel.com, Dave Hansen, Jarkko Sakkinen,
	Andy Lutomirski, Peter Zijlstra, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, "H. Peter Anvin",
	linux-kernel@vger.kernel.org (open list:X86 MM)
Subject: [PATCH v14 09/19] x86/mm: x86/sgx: Signal SEGV_SGXERR for #PFs w/ PF_SGX
Date: Tue, 25 Sep 2018 16:06:46 +0300
Message-Id: <20180925130845.9962-10-jarkko.sakkinen@linux.intel.com>
In-Reply-To: <20180925130845.9962-1-jarkko.sakkinen@linux.intel.com>
References: <20180925130845.9962-1-jarkko.sakkinen@linux.intel.com>

From: Sean Christopherson <sean.j.christopherson@intel.com>

Signal SIGSEGV(SEGV_SGXERR) for all faults with PF_SGX set in the error
code. The PF_SGX bit is set if and only if the #PF is detected by the
Enclave Page Cache Map (EPCM), which is consulted only after an access
walks the kernel's page tables, i.e.:

  a. the access was allowed by the kernel
  b. the kernel's tables have become less restrictive than the EPCM
  c. the kernel cannot fix up the cause of the fault

Notably, (b) implies that either the kernel has botched the EPC
mappings or the EPCM has been invalidated due to a power event. In
either case, userspace needs to be alerted so that it can take
appropriate action, e.g. restart the enclave. This is reinforced by
(c), as the kernel doesn't really have any other reasonable option:
it could kill the task or panic, but neither is warranted.
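To illustrate how userspace might act on the new si_code, below is a
minimal sketch of an enclave host's SIGSEGV handler. It is not part of
the patch: the fallback value for SEGV_SGXERR and the siglongjmp-based
recovery flow are illustrative assumptions; the constant itself comes
from the siginfo uapi change earlier in this series. (PF_SGX is bit 15
of the #PF error code, seen by the kernel as X86_PF_SGX.)

#include <setjmp.h>
#include <signal.h>
#include <unistd.h>

#ifndef SEGV_SGXERR
#define SEGV_SGXERR 8	/* illustrative fallback only; the real value
			 * is defined by the uapi patch in this series */
#endif

static sigjmp_buf recovery_point;

static void sigsegv_handler(int sig, siginfo_t *info, void *ucontext)
{
	(void)sig;
	(void)ucontext;

	if (info->si_code == SEGV_SGXERR)
		/*
		 * Fault reported by the EPCM rather than the page
		 * tables, e.g. enclave memory was invalidated across
		 * suspend/resume. Returning would just re-fault, so
		 * jump back to a recovery point and rebuild there.
		 */
		siglongjmp(recovery_point, 1);

	/* Ordinary segfault: restore default disposition and die. */
	signal(SIGSEGV, SIG_DFL);
	raise(SIGSEGV);
}

int main(void)
{
	struct sigaction sa = {
		.sa_sigaction	= sigsegv_handler,
		.sa_flags	= SA_SIGINFO,
	};

	sigaction(SIGSEGV, &sa, NULL);

	if (sigsetjmp(recovery_point, 1))
		write(STDERR_FILENO, "enclave lost, rebuilding\n", 25);

	/* ... (re)create and run the enclave here ... */
	return 0;
}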
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
---
 arch/x86/mm/fault.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 85d20516b2f3..3fb2b2838d6c 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -960,10 +960,13 @@ static noinline void
 bad_area_access_error(struct pt_regs *regs, unsigned long error_code,
 		      unsigned long address, struct vm_area_struct *vma)
 {
+	int si_code = SEGV_ACCERR;
+
 	if (bad_area_access_from_pkeys(error_code, vma))
-		__bad_area(regs, error_code, address, vma, SEGV_PKUERR);
-	else
-		__bad_area(regs, error_code, address, vma, SEGV_ACCERR);
+		si_code = SEGV_PKUERR;
+	else if (unlikely(error_code & X86_PF_SGX))
+		si_code = SEGV_SGXERR;
+	__bad_area(regs, error_code, address, vma, si_code);
 }
 
 static void
@@ -1153,6 +1156,17 @@ access_error(unsigned long error_code, struct vm_area_struct *vma)
 	if (error_code & X86_PF_PK)
 		return 1;
 
+	/*
+	 * Access is blocked by the Enclave Page Cache Map (EPCM),
+	 * i.e. the access is allowed by the PTE but not the EPCM.
+	 * This usually happens when the EPCM is yanked out from
+	 * under us, e.g. by hardware after a suspend/resume cycle.
+	 * In any case, there is nothing that can be done by the
+	 * kernel to resolve the fault (short of killing the task).
+	 */
+	if (unlikely(error_code & X86_PF_SGX))
+		return 1;
+
 	/*
 	 * Make sure to check the VMA so that we do not perform
 	 * faults just to hit a X86_PF_PK as soon as we fill in a
-- 
2.17.1