From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1754614AbbGFQ70 (ORCPT ); Mon, 6 Jul 2015 12:59:26 -0400
Received: from mail-la0-f54.google.com ([209.85.215.54]:35228 "EHLO mail-la0-f54.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1753248AbbGFQ7X (ORCPT ); Mon, 6 Jul 2015 12:59:23 -0400
MIME-Version: 1.0
In-Reply-To: <20150706134423.GA8094@gmail.com>
References: <20150706134423.GA8094@gmail.com>
From: Andy Lutomirski
Date: Mon, 6 Jul 2015 09:59:02 -0700
Message-ID: 
Subject: Re: [PATCH] x86: Fix detection of GCC -mpreferred-stack-boundary support
To: Ingo Molnar
Cc: Andy Lutomirski, "linux-kernel@vger.kernel.org", X86 ML, Linus Torvalds, Jan Kara, Borislav Petkov, Denys Vlasenko
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Jul 6, 2015 at 6:44 AM, Ingo Molnar wrote:
>
> * Andy Lutomirski wrote:
>
>> As per https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53383, GCC only
>> allows -mpreferred-stack-boundary=3 on x86_64 if -mno-sse is set.
>> That means that cc-option will not detect
>> -mpreferred-stack-boundary=3 support, because we test for it before
>> setting -mno-sse.
>>
>> Fix it by reordering the Makefile bits.

...

>
> So the 'stack boundary' is the RSP that GCC generates before it calls another
> function from within an existing function, right?
>

I think so.  Certainly the "incoming stack boundary" (which is exactly
the same as the preferred stack boundary unless explicitly changed) is
the RSP alignment that GCC expects on entry.

> So looking at this I question the choice of -mpreferred-stack-boundary=3.  Why not
> do -mpreferred-stack-boundary=2?
Easy answer: we can't:

$ gcc -c -mno-sse -mpreferred-stack-boundary=2 empty.c
empty.c:1:0: error: -mpreferred-stack-boundary=2 is not between 3 and 12

> My reasoning: on modern uarchs there's no penalty for 32-bit misalignment of
> 64-bit variables, only if they cross 64-byte cache lines, which should be rare
> with a chance of 1:16.  This small penalty (of at most +1 cycle in some
> circumstances IIRC) should be more than counterbalanced by the compression of the
> stack by 5% on average.
>

I'll counter with: what's the benefit?  There are no operations that
will naturally change RSP by anything that isn't a multiple of 8
(there's no pushl in 64-bit mode, or at least not on AMD chips -- the
Intel manual is a bit vague on this point), so we'll end up with RSP
being a multiple of 8 regardless.  Even if we somehow shaved 4 bytes
off in asm, that still wouldn't buy us anything, as a dangling 4 bytes
at the bottom of the stack isn't useful for anything.

--Andy
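P.S. For anyone following along, the reordering the patch description
refers to can be sketched roughly as below.  This is an illustration
of the idea, not the literal arch/x86/Makefile hunk; the surrounding
lines in the real Makefile differ.  The key point is that cc-option
probes a flag with the flags already accumulated in KBUILD_CFLAGS, so
-mno-sse has to be added before the probe or GCC rejects
-mpreferred-stack-boundary=3 and the flag is silently dropped:

```make
# Illustrative fragment, not the literal patch.

# Add -mno-sse first: GCC only accepts -mpreferred-stack-boundary=3
# on x86_64 when SSE is disabled (GCC bug 53383).
KBUILD_CFLAGS += -mno-sse

# Now the cc-option probe sees -mno-sse in KBUILD_CFLAGS, succeeds,
# and the flag is actually kept.
KBUILD_CFLAGS += $(call cc-option,-mpreferred-stack-boundary=3)
```

Before the fix, the cc-option line came before -mno-sse, so the probe
compile failed and the kernel was built without the flag.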