Message-Id: <20161215162648.061449202@linutronix.de>
User-Agent: quilt/0.63-1
Date: Thu, 15 Dec 2016 16:44:01 -0000
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Peter Zijlstra, Kyle Huey, Andy Lutomirski
Subject: [patch 0/3] x86/process: Optimize __switch_to_extra()

GCC generates lousy code in __switch_to_extra(). Aside from that, some of
the operations there are implemented suboptimally.

This series, inspired by a patch from Kyle, helps the compiler to be less
stupid by explicitly giving it the hints to optimize, and replaces the
open coded bit toggle mechanisms with proper helper functions. A toy
sketch of that pattern is appended below the diffstat.

The resulting change in text size:

		64bit	32bit

Before:		 3726	 9388
After:		 3646	 9324

Delta:		   80	   64

The number of conditional jumps is also reduced:

		64bit	32bit

Before:		    8	   13
After:		    5	   10

Thanks,

	tglx

---
 include/asm/processor.h |   12 ++++++++
 include/asm/tlbflush.h  |   10 ++++++
 kernel/process.c        |   70 +++++++++++++++++++-----------------------------
 3 files changed, 51 insertions(+), 41 deletions(-)
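
To make the XOR-and-toggle pattern concrete, here is a minimal user-space
sketch. It is not the kernel code from the patches: TIF_FLAG_A/B, hwreg
and hwreg_toggle_bits() are made-up stand-ins for the thread flags, the
affected hardware state and the real helpers.

#include <stdint.h>
#include <stdio.h>

#define TIF_FLAG_A	(1u << 0)
#define TIF_FLAG_B	(1u << 1)

static uint32_t hwreg;	/* stand-in for a hardware register shadow */

/* Toggle helper replacing the open coded set-or-clear branches. */
static inline void hwreg_toggle_bits(uint32_t mask)
{
	hwreg ^= mask;
}

static void switch_extra(uint32_t tifp, uint32_t tifn)
{
	/*
	 * XOR the prev/next flag words once. A bit is set in the
	 * result only where the two tasks differ, so each feature
	 * needs a single test and no work at all when nothing changed.
	 */
	uint32_t changed = tifp ^ tifn;

	if (changed & TIF_FLAG_A)
		hwreg_toggle_bits(TIF_FLAG_A);
	if (changed & TIF_FLAG_B)
		hwreg_toggle_bits(TIF_FLAG_B);
}

int main(void)
{
	switch_extra(TIF_FLAG_A, 0);		/* differs: toggles */
	switch_extra(TIF_FLAG_B, TIF_FLAG_B);	/* same on both: no-op */
	printf("hwreg = %#x\n", hwreg);
	return 0;
}

The point is that one test per feature replaces separate "was it set on
prev" / "is it set on next" branches, and toggling is sufficient because
the hardware state is known to match the previous task's flags on entry.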