From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andy Lutomirski
Date: Fri, 26 Jan 2018 11:02:08 -0800
Subject: Re: [PATCH v2 1/2] x86/mm/64: Fix vmapped stack syncing on very-large-memory 4-level systems
In-Reply-To: <20180126185143.dx7emh7cq5pbrkxn@node.shutemov.name>
References: <346541c56caed61abbe693d7d2742b4a380c5001.1516914529.git.luto@kernel.org> <20180126185143.dx7emh7cq5pbrkxn@node.shutemov.name>
MIME-Version: 1.0
To: "Kirill A.
Shutemov"
Cc: Andy Lutomirski , Konstantin Khlebnikov , Dave Hansen , X86 ML , Borislav Petkov , Neil Berrington , LKML , stable
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Jan 26, 2018 at 10:51 AM, Kirill A. Shutemov wrote:
> On Thu, Jan 25, 2018 at 01:12:14PM -0800, Andy Lutomirski wrote:
>> Neil Berrington reported a double-fault on a VM with 768GB of RAM that
>> uses large amounts of vmalloc space with PTI enabled.
>>
>> The cause is that load_new_mm_cr3() was never fixed to take the
>> 5-level pgd folding code into account, so, on a 4-level kernel, the
>> pgd synchronization logic compiles away to exactly nothing.
>
> Ouch. Sorry for this.
>
>>
>> Interestingly, the problem doesn't trigger with nopti.  I assume this
>> is because the kernel is mapped with global pages if we boot with
>> nopti.  The sequence of operations when we create a new task is that
>> we first load its mm while still running on the old stack (which
>> crashes if the old stack is unmapped in the new mm unless the TLB
>> saves us), then we call prepare_switch_to(), and then we switch to the
>> new stack.  prepare_switch_to() pokes the new stack directly, which
>> will populate the mapping through vmalloc_fault().  I assume that
>> we're getting lucky on non-PTI systems -- the old stack's TLB entry
>> stays alive long enough to make it all the way through
>> prepare_switch_to() and switch_to() so that we make it to a valid
>> stack.
>>
>> Fixes: b50858ce3e2a ("x86/mm/vmalloc: Add 5-level paging support")
>> Cc: stable@vger.kernel.org
>> Reported-and-tested-by: Neil Berrington
>> Signed-off-by: Andy Lutomirski
>> ---
>>  arch/x86/mm/tlb.c | 34 +++++++++++++++++++++++++++++-----
>>  1 file changed, 29 insertions(+), 5 deletions(-)
>>
>> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
>> index a1561957dccb..5bfe61a5e8e3 100644
>> --- a/arch/x86/mm/tlb.c
>> +++ b/arch/x86/mm/tlb.c
>> @@ -151,6 +151,34 @@ void switch_mm(struct mm_struct *prev, struct mm_struct *next,
>>  	local_irq_restore(flags);
>>  }
>>
>> +static void sync_current_stack_to_mm(struct mm_struct *mm)
>> +{
>> +	unsigned long sp = current_stack_pointer;
>> +	pgd_t *pgd = pgd_offset(mm, sp);
>> +
>> +	if (CONFIG_PGTABLE_LEVELS > 4) {
>
> Can we have
>
> 	if (PTRS_PER_P4D > 1)
>
> here instead? This way I wouldn't need to touch the code again for
> boot-time switching support.

Want to send a patch?

(Also, I haven't noticed a patch to fix up the SYSRET checking for
boot-time switching.  Have I just missed it?)

--Andy