From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andy Lutomirski <luto@kernel.org>
Date: Tue, 12 Dec 2017 10:03:02 -0800
Subject: Re: [patch 11/16] x86/ldt: Force access bit for CS/SS
To: Thomas Gleixner
Cc: LKML, X86 ML, Linus Torvalds, Andy Lutomirsky, Peter Zijlstra,
 Dave Hansen, Borislav Petkov, Greg KH, Kees Cook, Hugh Dickins,
 Brian Gerst, Josh Poimboeuf, Denys Vlasenko, Boris Ostrovsky,
 Juergen Gross, David Laight, Eduardo Valentin, aliguori@amazon.com,
 Will Deacon, "linux-mm@kvack.org"
In-Reply-To: <20171212173334.176469949@linutronix.de>
References: <20171212173221.496222173@linutronix.de>
 <20171212173334.176469949@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Dec 12, 2017 at 9:32 AM, Thomas Gleixner wrote:
> From: Peter Zijlstra
>
> When mapping the LDT RO the hardware will typically generate write faults
> on first use. These faults can be trapped and the backing pages can be
> modified by the kernel.
>
> There is one exception; IRET will immediately load CS/SS and unrecoverably
> #GP. To avoid this issue access the LDT descriptors used by CS/SS before
> the IRET to userspace.
>
> For this use LAR, which is a safe operation in that it will happily consume
> an invalid LDT descriptor without traps. It gets the CPU to load the
> descriptor and observes the (preset) ACCESS bit.
>
> So far none of the obvious candidates like dosemu/wine/etc. do care about
> the ACCESS bit at all, so it should be rather safe to enforce it.
>
> Signed-off-by: Peter Zijlstra (Intel)
> Signed-off-by: Thomas Gleixner
> ---
>  arch/x86/entry/common.c            |    8 ++++-
>  arch/x86/include/asm/desc.h        |    2 +
>  arch/x86/include/asm/mmu_context.h |   53 +++++++++++++++++++++++--------------
>  arch/x86/include/asm/thread_info.h |    4 ++
>  arch/x86/kernel/cpu/common.c       |    4 +-
>  arch/x86/kernel/ldt.c              |   30 ++++++++++++++++++++
>  arch/x86/mm/tlb.c                  |    2 -
>  arch/x86/power/cpu.c               |    2 -
>  8 files changed, 78 insertions(+), 27 deletions(-)
>
> --- a/arch/x86/entry/common.c
> +++ b/arch/x86/entry/common.c
> @@ -30,6 +30,7 @@
>  #include
>  #include
>  #include
> +#include
>
>  #define CREATE_TRACE_POINTS
>  #include
>
> @@ -130,8 +131,8 @@ static long syscall_trace_enter(struct p
>  	return ret ?: regs->orig_ax;
>  }
>
> -#define EXIT_TO_USERMODE_LOOP_FLAGS				\
> -	(_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_UPROBE |	\
> +#define EXIT_TO_USERMODE_LOOP_FLAGS				\
> +	(_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_UPROBE | _TIF_LDT |\
>  	 _TIF_NEED_RESCHED | _TIF_USER_RETURN_NOTIFY | _TIF_PATCH_PENDING)
>
>  static void exit_to_usermode_loop(struct pt_regs *regs, u32 cached_flags)
> @@ -171,6 +172,9 @@ static void exit_to_usermode_loop(struct
>  		/* Disable IRQs and retry */
>  		local_irq_disable();
>
> +		if (cached_flags & _TIF_LDT)
> +			ldt_exit_user(regs);

Nope.  To the extent that this code actually does anything (which it
shouldn't since you already forced the access bit), it's racy against
flush_ldt() from another thread, and that race will be exploitable for
privilege escalation.  It needs to be outside the loopy part.

--Andy
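
The LAR trick the changelog describes can be made concrete. What follows
is a minimal illustrative sketch, not the helper this patch adds (the
body of ldt_exit_user() is not shown in the hunks above), and
touch_ldt_selector is a hypothetical name. LAR asks the CPU to read the
descriptor named by a selector and return its access-rights word;
unlike a segment-register load or IRET, an invalid selector merely
clears ZF instead of faulting:

static inline void touch_ldt_selector(unsigned int sel)
{
	unsigned int ar;

	/*
	 * LAR makes the CPU read the descriptor for @sel and hand back
	 * its access rights.  A bad selector just clears ZF, no fault,
	 * so this is safe on arbitrary user CS/SS values before IRET.
	 * With the accessed bit preset by the kernel, the read observes
	 * the bit without the CPU ever needing to write the RO-mapped
	 * LDT page.
	 */
	asm volatile("lar %[sel], %[ar]"
		     : [ar] "=r" (ar)
		     : [sel] "r" (sel)
		     : "cc");
}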
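
The race Andy points out is concrete: exit_to_usermode_loop()
re-enables interrupts on every pass, so between a _TIF_LDT touch inside
the loop and the eventual IRET another thread can call modify_ldt() and
trigger flush_ldt() on this CPU, leaving the just-touched descriptors
stale. One way to read "outside the loopy part" is to do the touch
after the loop returns, where interrupts stay disabled all the way to
IRET. A speculative sketch under that assumption, not the code the
series ended up with:

/* Called with IRQs disabled; they stay disabled until IRET. */
static void prepare_exit_to_usermode(struct pt_regs *regs)
{
	u32 cached_flags = READ_ONCE(current_thread_info()->flags);

	if (unlikely(cached_flags & EXIT_TO_USERMODE_LOOP_FLAGS))
		exit_to_usermode_loop(regs, cached_flags);

	/*
	 * Re-read the flags: the loop above may have run with IRQs on.
	 * From here to IRET no IPI -- and therefore no flush_ldt() --
	 * can hit this CPU, so the descriptors touched below cannot
	 * change out from under the return to user mode.
	 */
	if (unlikely(READ_ONCE(current_thread_info()->flags) & _TIF_LDT))
		ldt_exit_user(regs);
}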