From: Brian Gerst <brgerst@gmail.com>
To: linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
Peter Anvin" , Andy Lutomirski , Peter Zijlstra , Nick Desaulniers , Brian Gerst Subject: [PATCH 7/7] x86/percpu: Clean up percpu_cmpxchg_op() Date: Sun, 17 May 2020 11:29:16 -0400 Message-Id: <20200517152916.3146539-8-brgerst@gmail.com> X-Mailer: git-send-email 2.25.4 In-Reply-To: <20200517152916.3146539-1-brgerst@gmail.com> References: <20200517152916.3146539-1-brgerst@gmail.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org The core percpu macros already have a switch on the data size, so the switch in the x86 code is redundant and produces more dead code. Also use appropriate types for the width of the instructions. This avoids errors when compiling with Clang. Signed-off-by: Brian Gerst --- arch/x86/include/asm/percpu.h | 58 +++++++++++------------------------ 1 file changed, 18 insertions(+), 40 deletions(-) diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h index 3c95ab3c99cd..b61d4fc5568e 100644 --- a/arch/x86/include/asm/percpu.h +++ b/arch/x86/include/asm/percpu.h @@ -236,39 +236,17 @@ do { \ * cmpxchg has no such implied lock semantics as a result it is much * more efficient for cpu local operations. */ -#define percpu_cmpxchg_op(qual, var, oval, nval) \ +#define percpu_cmpxchg_op(size, qual, _var, _oval, _nval) \ ({ \ - typeof(var) pco_ret__; \ - typeof(var) pco_old__ = (oval); \ - typeof(var) pco_new__ = (nval); \ - switch (sizeof(var)) { \ - case 1: \ - asm qual ("cmpxchgb %2, "__percpu_arg(1) \ - : "=a" (pco_ret__), "+m" (var) \ - : "q" (pco_new__), "0" (pco_old__) \ - : "memory"); \ - break; \ - case 2: \ - asm qual ("cmpxchgw %2, "__percpu_arg(1) \ - : "=a" (pco_ret__), "+m" (var) \ - : "r" (pco_new__), "0" (pco_old__) \ - : "memory"); \ - break; \ - case 4: \ - asm qual ("cmpxchgl %2, "__percpu_arg(1) \ - : "=a" (pco_ret__), "+m" (var) \ - : "r" (pco_new__), "0" (pco_old__) \ - : "memory"); \ - break; \ - case 8: \ - asm qual ("cmpxchgq %2, "__percpu_arg(1) \ - : "=a" (pco_ret__), "+m" (var) \ - : "r" (pco_new__), "0" (pco_old__) \ - : "memory"); \ - break; \ - default: __bad_percpu_size(); \ - } \ - pco_ret__; \ + __pcpu_type_##size pco_old__ = __pcpu_cast_##size(_oval); \ + __pcpu_type_##size pco_new__ = __pcpu_cast_##size(_nval); \ + asm qual (__pcpu_op2_##size("cmpxchg", "%[nval]", \ + __percpu_arg([var])) \ + : [oval] "+a" (pco_old__), \ + [var] "+m" (_var) \ + : [nval] __pcpu_reg_##size(, pco_new__) \ + : "memory"); \ + (typeof(_var))(unsigned long) pco_old__; \ }) /* @@ -336,16 +314,16 @@ do { \ #define raw_cpu_add_return_1(pcp, val) percpu_add_return_op(1, , pcp, val) #define raw_cpu_add_return_2(pcp, val) percpu_add_return_op(2, , pcp, val) #define raw_cpu_add_return_4(pcp, val) percpu_add_return_op(4, , pcp, val) -#define raw_cpu_cmpxchg_1(pcp, oval, nval) percpu_cmpxchg_op(, pcp, oval, nval) -#define raw_cpu_cmpxchg_2(pcp, oval, nval) percpu_cmpxchg_op(, pcp, oval, nval) -#define raw_cpu_cmpxchg_4(pcp, oval, nval) percpu_cmpxchg_op(, pcp, oval, nval) +#define raw_cpu_cmpxchg_1(pcp, oval, nval) percpu_cmpxchg_op(1, , pcp, oval, nval) +#define raw_cpu_cmpxchg_2(pcp, oval, nval) percpu_cmpxchg_op(2, , pcp, oval, nval) +#define raw_cpu_cmpxchg_4(pcp, oval, nval) percpu_cmpxchg_op(4, , pcp, oval, nval) #define this_cpu_add_return_1(pcp, val) percpu_add_return_op(1, volatile, pcp, val) #define this_cpu_add_return_2(pcp, val) percpu_add_return_op(2, volatile, pcp, val) #define this_cpu_add_return_4(pcp, val) 
 arch/x86/include/asm/percpu.h | 58 +++++++++++------------------------
 1 file changed, 18 insertions(+), 40 deletions(-)

diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
index 3c95ab3c99cd..b61d4fc5568e 100644
--- a/arch/x86/include/asm/percpu.h
+++ b/arch/x86/include/asm/percpu.h
@@ -236,39 +236,17 @@ do {									\
  * cmpxchg has no such implied lock semantics as a result it is much
  * more efficient for cpu local operations.
  */
-#define percpu_cmpxchg_op(qual, var, oval, nval)			\
+#define percpu_cmpxchg_op(size, qual, _var, _oval, _nval)		\
 ({									\
-	typeof(var) pco_ret__;						\
-	typeof(var) pco_old__ = (oval);					\
-	typeof(var) pco_new__ = (nval);					\
-	switch (sizeof(var)) {						\
-	case 1:								\
-		asm qual ("cmpxchgb %2, "__percpu_arg(1)		\
-			    : "=a" (pco_ret__), "+m" (var)		\
-			    : "q" (pco_new__), "0" (pco_old__)		\
-			    : "memory");				\
-		break;							\
-	case 2:								\
-		asm qual ("cmpxchgw %2, "__percpu_arg(1)		\
-			    : "=a" (pco_ret__), "+m" (var)		\
-			    : "r" (pco_new__), "0" (pco_old__)		\
-			    : "memory");				\
-		break;							\
-	case 4:								\
-		asm qual ("cmpxchgl %2, "__percpu_arg(1)		\
-			    : "=a" (pco_ret__), "+m" (var)		\
-			    : "r" (pco_new__), "0" (pco_old__)		\
-			    : "memory");				\
-		break;							\
-	case 8:								\
-		asm qual ("cmpxchgq %2, "__percpu_arg(1)		\
-			    : "=a" (pco_ret__), "+m" (var)		\
-			    : "r" (pco_new__), "0" (pco_old__)		\
-			    : "memory");				\
-		break;							\
-	default: __bad_percpu_size();					\
-	}								\
-	pco_ret__;							\
+	__pcpu_type_##size pco_old__ = __pcpu_cast_##size(_oval);	\
+	__pcpu_type_##size pco_new__ = __pcpu_cast_##size(_nval);	\
+	asm qual (__pcpu_op2_##size("cmpxchg", "%[nval]",		\
+				    __percpu_arg([var]))		\
+		  : [oval] "+a" (pco_old__),				\
+		    [var] "+m" (_var)					\
+		  : [nval] __pcpu_reg_##size(, pco_new__)		\
+		  : "memory");						\
+	(typeof(_var))(unsigned long) pco_old__;			\
 })
 
 /*
@@ -336,16 +314,16 @@ do {									\
 #define raw_cpu_add_return_1(pcp, val)		percpu_add_return_op(1, , pcp, val)
 #define raw_cpu_add_return_2(pcp, val)		percpu_add_return_op(2, , pcp, val)
 #define raw_cpu_add_return_4(pcp, val)		percpu_add_return_op(4, , pcp, val)
-#define raw_cpu_cmpxchg_1(pcp, oval, nval)	percpu_cmpxchg_op(, pcp, oval, nval)
-#define raw_cpu_cmpxchg_2(pcp, oval, nval)	percpu_cmpxchg_op(, pcp, oval, nval)
-#define raw_cpu_cmpxchg_4(pcp, oval, nval)	percpu_cmpxchg_op(, pcp, oval, nval)
+#define raw_cpu_cmpxchg_1(pcp, oval, nval)	percpu_cmpxchg_op(1, , pcp, oval, nval)
+#define raw_cpu_cmpxchg_2(pcp, oval, nval)	percpu_cmpxchg_op(2, , pcp, oval, nval)
+#define raw_cpu_cmpxchg_4(pcp, oval, nval)	percpu_cmpxchg_op(4, , pcp, oval, nval)
 
 #define this_cpu_add_return_1(pcp, val)		percpu_add_return_op(1, volatile, pcp, val)
 #define this_cpu_add_return_2(pcp, val)		percpu_add_return_op(2, volatile, pcp, val)
 #define this_cpu_add_return_4(pcp, val)		percpu_add_return_op(4, volatile, pcp, val)
-#define this_cpu_cmpxchg_1(pcp, oval, nval)	percpu_cmpxchg_op(volatile, pcp, oval, nval)
-#define this_cpu_cmpxchg_2(pcp, oval, nval)	percpu_cmpxchg_op(volatile, pcp, oval, nval)
-#define this_cpu_cmpxchg_4(pcp, oval, nval)	percpu_cmpxchg_op(volatile, pcp, oval, nval)
+#define this_cpu_cmpxchg_1(pcp, oval, nval)	percpu_cmpxchg_op(1, volatile, pcp, oval, nval)
+#define this_cpu_cmpxchg_2(pcp, oval, nval)	percpu_cmpxchg_op(2, volatile, pcp, oval, nval)
+#define this_cpu_cmpxchg_4(pcp, oval, nval)	percpu_cmpxchg_op(4, volatile, pcp, oval, nval)
 
 #ifdef CONFIG_X86_CMPXCHG64
 #define percpu_cmpxchg8b_double(pcp1, pcp2, o1, o2, n1, n2)		\
@@ -376,7 +354,7 @@ do {									\
 #define raw_cpu_or_8(pcp, val)			percpu_to_op(8, , "or", (pcp), val)
 #define raw_cpu_add_return_8(pcp, val)		percpu_add_return_op(8, , pcp, val)
 #define raw_cpu_xchg_8(pcp, nval)		raw_percpu_xchg_op(pcp, nval)
-#define raw_cpu_cmpxchg_8(pcp, oval, nval)	percpu_cmpxchg_op(, pcp, oval, nval)
+#define raw_cpu_cmpxchg_8(pcp, oval, nval)	percpu_cmpxchg_op(8, , pcp, oval, nval)
 
 #define this_cpu_read_8(pcp)			percpu_from_op(8, volatile, "mov", pcp)
 #define this_cpu_write_8(pcp, val)		percpu_to_op(8, volatile, "mov", (pcp), val)
@@ -385,7 +363,7 @@ do {									\
 #define this_cpu_or_8(pcp, val)			percpu_to_op(8, volatile, "or", (pcp), val)
 #define this_cpu_add_return_8(pcp, val)		percpu_add_return_op(8, volatile, pcp, val)
 #define this_cpu_xchg_8(pcp, nval)		percpu_xchg_op(8, volatile, pcp, nval)
-#define this_cpu_cmpxchg_8(pcp, oval, nval)	percpu_cmpxchg_op(volatile, pcp, oval, nval)
+#define this_cpu_cmpxchg_8(pcp, oval, nval)	percpu_cmpxchg_op(8, volatile, pcp, oval, nval)
 
 /*
  * Pretty complex macro to generate cmpxchg16 instruction. The instruction
-- 
2.25.4