From: Ard Biesheuvel
Date: Mon, 27 Apr 2020 19:39:28 +0200
Subject: Re: [PATCH v13 00/12] add support for Clang's Shadow Call Stack
To: Sami Tolvanen
Cc: Will Deacon, Catalin Marinas, James Morse, Steven Rostedt,
 Ard Biesheuvel, Mark Rutland, Masahiro Yamada, Michal Marek,
 Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Kees Cook,
 Jann Horn, Marc Zyngier, kernel-hardening@lists.openwall.com,
 Nick Desaulniers, Linux Kernel Mailing List, Miguel Ojeda,
 Masami Hiramatsu, clang-built-linux, Laura Abbott, Dave Martin,
 Linux ARM
In-Reply-To: <20200427160018.243569-1-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com>
 <20200427160018.243569-1-samitolvanen@google.com>

On Mon, 27 Apr 2020 at 18:00, Sami Tolvanen wrote:
>
> This patch series adds support for Clang's Shadow Call Stack
> (SCS) mitigation, which uses a separately allocated shadow stack
> to protect against return address overwrites. More information
> can be found here:
>
> https://clang.llvm.org/docs/ShadowCallStack.html
>
> SCS provides better protection against traditional buffer
> overflows than CONFIG_STACKPROTECTOR_*, but it should be noted
> that SCS security guarantees in the kernel differ from the ones
> documented for user space.
> The kernel must store addresses of shadow stacks in memory, which
> means an attacker capable of reading and writing arbitrary memory
> may be able to locate them and hijack control flow by modifying
> the shadow stacks.
>
> SCS is currently supported only on arm64, where the compiler
> requires the x18 register to be reserved for holding the current
> task's shadow stack pointer.
>
> With -fsanitize=shadow-call-stack, the compiler injects
> instructions into all non-leaf C functions to store the return
> address to the shadow stack, and unconditionally loads it again
> before returning. As a result, SCS is incompatible with features
> that rely on modifying function return addresses in the kernel
> stack to alter control flow. A copy of the return address is
> still kept in the kernel stack for compatibility with stack
> unwinding, for example.
>
> SCS has a minimal performance overhead, but allocating
> shadow stacks increases kernel memory usage. The feature is
> therefore mostly useful on hardware that lacks support for PAC
> instructions.
>
> Changes in v13:
> - Changed thread_info::shadow_call_stack to a base address and
>   an offset instead, and removed the now unneeded __scs_base()
>   and scs_save().
> - Removed alignment from the kmem_cache and static allocations.
> - Removed the task_set_scs() helper function.
> - Moved the assembly code for loading and storing the offset in
>   thread_info to scs_load/save macros.
> - Added offset checking to scs_corrupted().
> - Switched to cmpxchg_relaxed() in scs_check_usage().
>

OK, so one thing that came up in an offline discussion about SCS
is the way it interacts with the vmap'ed stack. The vmap'ed stack
is great for robustness, but it only works if things don't explode
for other reasons in the meantime. This means the
ordinary-to-shadow-call-stack size ratio should be chosen such
that it is *really* unlikely you could ever overflow the shadow
call stack and corrupt another task's call stack before hitting
the vmap stack's guard region.

Alternatively, I wonder if there is a way we could let the SCS and
the ordinary stack share [the bottom of] the vmap'ed region. That
would give rather nasty results if the ordinary stack overflows
into the SCS, but for cases where we really recurse out of
control, we would catch the overflow on whichever stack hits its
limit first. And the nastiness, when it does occur, will not
corrupt any state beyond the stack of the current task.
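To make the layout concrete, here is a minimal sketch of what I
mean. All names are made up for illustration and not taken from
Sami's series: combined_stack_alloc(), SHARED_SCS_SIZE and the
scs_base field are hypothetical, and the 1K split is arbitrary.
One vmap'ed allocation per task, SCS growing up from the bottom,
ordinary stack growing down from the top:

#include <linux/sched.h>
#include <linux/vmalloc.h>

/* made-up size of the bottom slice reserved for the SCS */
#define SHARED_SCS_SIZE		1024

static int combined_stack_alloc(struct task_struct *tsk)
{
	/* vmalloc'ed regions come with guard pages on either side */
	void *base = vmalloc(THREAD_SIZE);

	if (!base)
		return -ENOMEM;

	/*
	 * x18 starts at 'base' and the SCS grows upward; running off
	 * the end of the SHARED_SCS_SIZE window lands in this task's
	 * own ordinary stack, never in another task's shadow stack.
	 */
	task_thread_info(tsk)->scs_base = base;	/* illustrative field */

	/* sp starts at base + THREAD_SIZE and the C stack grows down */
	tsk->stack = base;

	return 0;
}

Whether a fixed 1K slice is the right split, and how the offset
checking in scs_corrupted() would have to account for the ordinary
stack sitting directly above it, would obviously need looking into.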