From mboxrd@z Thu Jan 1 00:00:00 1970
From: Linus Torvalds
Date: Tue, 21 Jun 2016 10:16:46 -0700
Subject: Re: [PATCH v3 00/13] Virtually mapped stacks with guard pages (x86, core)
To: Andy Lutomirski
Cc: Andy Lutomirski, the arch/x86 maintainers, Linux Kernel Mailing List,
 linux-arch@vger.kernel.org, Borislav Petkov, Nadav Amit, Kees Cook,
 Brian Gerst, kernel-hardening@lists.openwall.com, Josh Poimboeuf,
 Jann Horn, Heiko Carstens
List-ID: linux-kernel@vger.kernel.org

On Tue, Jun 21, 2016 at 9:45 AM, Andy Lutomirski wrote:
>
> So I'm leaning toward fewer cache entries per cpu, maybe just one.
> I'm all for making it a bit faster, but I think we should weigh that
> against increasing memory usage too much and thus scaring away the
> embedded folks.

I don't think the embedded folks will be scared by a per-cpu cache if
it's just one or two entries. And I really do think that even just one
or two entries will catch a lot of the cases.

And yes, fork+execve() is too damn expensive in page-table build-up
and tear-down. I'm not sure why bash doesn't do vfork+exec when it has
to wait for the process anyway, but it doesn't seem to do that.

Linus