From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 23 May 2019 10:00:13 -0400
From: Steven Rostedt
To: LKML
Cc: Ben Skeggs, David Airlie, Daniel Vetter, Leon Romanovsky,
	Doug Ledford, Jason Gunthorpe, "Darrick J. Wong",
	linux-xfs@vger.kernel.org, dri-devel@lists.freedesktop.org,
	nouveau@lists.freedesktop.org, linux-rdma@vger.kernel.org,
	Linus Torvalds, Andrew Morton
Subject: [RFC][PATCH] kernel.h: Add generic roundup_64() macro
Message-ID: <20190523100013.52a8d2a6@gandalf.local.home>
X-Mailer: Claws Mail 3.17.3 (GTK+ 2.24.32; x86_64-pc-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

From: Steven Rostedt (VMware) <rostedt@goodmis.org>

In discussing a build failure on x86_32 due to the use of roundup() on
a 64 bit number, I realized that there's no generic equivalent
roundup_64(). It is implemented in two separate places in the kernel,
but there really should be just one that all can use.

Although the other implementations are a static inline function, this
implementation is a macro to allow the use of typeof(x) to denote the
type that is being used. If the build is on a 64 bit machine, then the
roundup_64() macro will just default back to roundup().
But for 32 bit machines, it will use a version that will not cause
issues with dividing a 64 bit number on a 32 bit machine.

Link: http://lkml.kernel.org/r/20190522145450.25ff483d@gandalf.local.home

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 34a998012bf6..cdacfe1f732c 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -143,14 +143,6 @@ nouveau_bo_del_ttm(struct ttm_buffer_object *bo)
 	kfree(nvbo);
 }
 
-static inline u64
-roundup_64(u64 x, u32 y)
-{
-	x += y - 1;
-	do_div(x, y);
-	return x * y;
-}
-
 static void
 nouveau_bo_fixup_align(struct nouveau_bo *nvbo, u32 flags,
 		       int *align, u64 *size)
diff --git a/fs/xfs/xfs_linux.h b/fs/xfs/xfs_linux.h
index edbd5a210df2..13de9d49bd52 100644
--- a/fs/xfs/xfs_linux.h
+++ b/fs/xfs/xfs_linux.h
@@ -207,13 +207,6 @@ static inline xfs_dev_t linux_to_xfs_dev_t(dev_t dev)
 #define xfs_sort(a,n,s,fn)	sort(a,n,s,fn,NULL)
 #define xfs_stack_trace()	dump_stack()
 
-static inline uint64_t roundup_64(uint64_t x, uint32_t y)
-{
-	x += y - 1;
-	do_div(x, y);
-	return x * y;
-}
-
 static inline uint64_t howmany_64(uint64_t x, uint32_t y)
 {
 	x += y - 1;
diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index 74b1ee9027f5..cd0063629357 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -115,6 +115,20 @@
 	(((x) + (__y - 1)) / __y) * __y;	\
 }						\
 )
+
+#if BITS_PER_LONG == 32
+# define roundup_64(x, y) (			\
+{						\
+	typeof(y) __y = y;			\
+	typeof(x) __x = (x) + (__y - 1);	\
+	do_div(__x, __y);			\
+	__x * __y;				\
+}						\
+)
+#else
+# define roundup_64(x, y)	roundup(x, y)
+#endif
+
 /**
  * rounddown - round down to next specified multiple
  * @x: the value to round
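
For reference, a minimal caller sketch (not part of the patch) showing
how code like the nouveau helper removed above could switch to the
shared macro. The function and variable names here are hypothetical,
used only for illustration:

/*
 * Hypothetical usage sketch; assumes the roundup_64() macro added to
 * <linux/kernel.h> by this patch.  Names are illustrative only.
 */
#include <linux/kernel.h>	/* roundup(), roundup_64() */
#include <linux/types.h>	/* u32, u64 */

static u64 example_align_size(u64 size, u32 align)
{
	/*
	 * On a 64-bit build this expands to plain roundup().  On a
	 * 32-bit build it goes through do_div(), so the 64-bit value
	 * is never divided directly; a direct division would pull in
	 * libgcc helpers such as __udivdi3 and break the link on
	 * x86_32.
	 */
	return roundup_64(size, align);
}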