From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753239AbbDJSyf (ORCPT );
	Fri, 10 Apr 2015 14:54:35 -0400
Received: from mail-ie0-f178.google.com ([209.85.223.178]:34921 "EHLO
	mail-ie0-f178.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751028AbbDJSyc (ORCPT );
	Fri, 10 Apr 2015 14:54:32 -0400
MIME-Version: 1.0
In-Reply-To: <5527E3E9.7010608@redhat.com>
References: <20150409183926.GM6464@linux.vnet.ibm.com>
	<20150410090051.GA28549@gmail.com>
	<20150410091252.GA27630@gmail.com>
	<20150410092152.GA21332@gmail.com>
	<20150410111427.GA30477@gmail.com>
	<20150410112748.GB30477@gmail.com>
	<20150410120846.GA17101@gmail.com>
	<20150410131929.GE28074@pd.tnic>
	<5527D631.4090905@redhat.com>
	<20150410140141.GI28074@pd.tnic>
	<5527E3E9.7010608@redhat.com>
Date: Fri, 10 Apr 2015 11:54:31 -0700
X-Google-Sender-Auth: kdBKqT5n2k3BaVdB0-lRuh1tm2Q
Message-ID:
Subject: Re: [PATCH] x86: Align jump targets to 1 byte boundaries
From: Linus Torvalds
To: Denys Vlasenko
Cc: Borislav Petkov , Ingo Molnar , "Paul E. McKenney" ,
	Jason Low , Peter Zijlstra , Davidlohr Bueso , Tim Chen ,
	Aswin Chandramouleeswaran , LKML , Andy Lutomirski ,
	Brian Gerst , "H. Peter Anvin" , Thomas Gleixner ,
	Peter Zijlstra
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Apr 10, 2015 at 7:53 AM, Denys Vlasenko wrote:
>
> There are people who experimentally researched this.
> According to this guy:
>
> http://www.agner.org/optimize/microarchitecture.pdf
>
> Intel CPUs can decode only up to 16 bytes at a time

Indeed.

For Intel decoding, the old "4-1-1-1" decode patterns are almost
entirely immaterial these days. Even "single uop" instructions (the
"1"s in the 4-1-1-1) cover the vast majority of cases.

So for Intel decoders, the biggest limit - especially for x86-64
instructions - tends to be the 16-byte decode window.
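[ A toy sketch of why the 16-byte window matters, not a model of the real
hardware: given the byte lengths of consecutive instructions, count how many
whole instructions fit in each 16-byte fetch window. The lengths below are
made-up but typical - long x86-64 encodings (REX prefixes, 4-byte
immediates) sharply cut the per-cycle decode count. ]

```python
# Hypothetical sketch: instructions decoded per 16-byte fetch window.
FETCH_WINDOW = 16

def insns_per_window(lengths):
    """For a list of instruction byte lengths, return how many whole
    instructions land in each successive 16-byte window."""
    windows = []
    count = used = 0
    for n in lengths:
        if used + n > FETCH_WINDOW:
            # This instruction would cross the window; start a new one.
            windows.append(count)
            count = used = 0
        count += 1
        used += n
    if count:
        windows.append(count)
    return windows

# Short 2-3 byte instructions: all six decode from one window.
print(insns_per_window([2, 3, 2, 3, 2, 3]))  # → [6]
# Fat 7-8 byte x86-64 encodings: only two per window.
print(insns_per_window([7, 8, 7, 8]))        # → [2, 2]
```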
The problem with x86 decoding isn't that individual instructions are
complicated; it's that when you try to decode multiple instructions at
once, finding the start of each instruction is somewhat painful.

What I *think* Intel does is have this rather complex net of logic that
basically decodes 16 bytes in parallel, but has this rippling thing
that just disables the incorrect decodes.

That said, the fetch boundary from L2 is probably an issue too,
especially if the front end hasn't had time to run ahead of the
execution engine. That's likely where the "32-byte alignment" comes
from.

               Linus
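[ Editor's illustration - a software sketch of the "decode everywhere,
ripple-disable the wrong ones" idea described above, not Intel's actual
circuit. Assume `lengths_at[i]` is the instruction length a decoder produced
speculating that byte offset `i` is an instruction start; the ripple pass
keeps only the decodes that sit on real boundaries. ]

```python
# Hypothetical ripple-select over a 16-byte decode window.
def ripple_select(lengths_at, window=16):
    """lengths_at[i] = decoded length assuming offset i starts an
    instruction. Starting from the known start at offset 0, each valid
    decode enables the next one and the rest are discarded."""
    valid = [False] * window
    i = 0
    while i < window:
        valid[i] = True          # next real start is i + its length
        i += lengths_at[i]
    return [off for off, ok in enumerate(valid) if ok]

# Made-up window where instructions of lengths 3, 2, 5, 6 start at 0:
lengths_at = [3, 9, 1, 2, 4, 5, 1, 1, 1, 1, 6, 2, 3, 1, 1, 1]
print(ripple_select(lengths_at))  # → [0, 3, 5, 10]
```

The point of the sketch: all sixteen speculative decodes happen "in
parallel", and the only serial part is the cheap ripple of start offsets,
which is what makes wide x86 decode feasible at all.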