From mboxrd@z Thu Jan  1 00:00:00 1970
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751591AbdFHM3x (ORCPT);
	Thu, 8 Jun 2017 08:29:53 -0400
Received: from frisell.zx2c4.com ([192.95.5.64]:35301 "EHLO frisell.zx2c4.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751566AbdFHM3v (ORCPT);
	Thu, 8 Jun 2017 08:29:51 -0400
From: "Jason A. Donenfeld"
To: "Theodore Ts'o", LKML
Cc: "Jason A. Donenfeld"
Subject: [PATCH cleanups/resubmit 2/4] random: squelch sh compiler warning and ensure safe optimization
Date: Thu, 8 Jun 2017 14:29:30 +0200
Message-Id: <20170608122932.23769-3-Jason@zx2c4.com>
In-Reply-To: <20170608122932.23769-1-Jason@zx2c4.com>
References: <20170608122932.23769-1-Jason@zx2c4.com>
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

Odd versions of gcc for the sh4 architecture will actually warn about
flags being used while uninitialized, so we set them to zero. Non-crazy
gccs will optimize that out again, so it doesn't make a difference.

Next, overly aggressive gccs could inline the expression that defines
use_lock, which could then introduce a race resulting in a lock
imbalance. By using READ_ONCE, we prevent that fate. Finally, we make
that assignment const, so that gcc can still optimize a nice amount.

This patch was previously submitted as part of v5, but v4 was picked
up, so this represents the delta between v4 and v5.

Signed-off-by: Jason A. Donenfeld
---
 drivers/char/random.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 3bdeef13afda..cef9358ecf20 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -2066,8 +2066,8 @@ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64);
 u64 get_random_u64(void)
 {
 	u64 ret;
-	bool use_lock = crng_init < 2;
-	unsigned long flags;
+	const bool use_lock = READ_ONCE(crng_init) < 2;
+	unsigned long flags = 0;
 	struct batched_entropy *batch;
 #ifdef CONFIG_WARN_UNSEEDED_RANDOM
 	static void *previous = NULL;
@@ -2110,8 +2110,8 @@ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32);
 u32 get_random_u32(void)
 {
 	u32 ret;
-	bool use_lock = crng_init < 2;
-	unsigned long flags;
+	const bool use_lock = READ_ONCE(crng_init) < 2;
+	unsigned long flags = 0;
 	struct batched_entropy *batch;
 #ifdef CONFIG_WARN_UNSEEDED_RANDOM
 	static void *previous = NULL;
--
2.13.0
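
[For context, a minimal userspace sketch of the lock-imbalance hazard the
commit message describes, not part of the patch: without READ_ONCE, the
compiler may re-read crng_init at each use of use_lock, so the lock and
unlock branches could disagree if crng_init changes in between. The
READ_ONCE definition below is a userspace approximation of the kernel
macro, and lock_balance() is a hypothetical stand-in for the
get_random_u64()/get_random_u32() locking pattern.]

```c
#include <stdbool.h>

/* Userspace approximation of the kernel's READ_ONCE(): the volatile
 * cast forces exactly one load of the variable, so the compiler cannot
 * legally re-evaluate it at each use site of use_lock. */
#define READ_ONCE(x) (*(volatile __typeof__(x) *)&(x))

static int crng_init; /* updated concurrently in the real kernel */

/* The pattern the patch enforces: decide *once* whether to take the
 * lock, then reuse that single decision for both lock and unlock.
 * Returns the lock/unlock balance, which must always be zero. */
static int lock_balance(void)
{
	const bool use_lock = READ_ONCE(crng_init) < 2;
	int balance = 0;

	if (use_lock)
		balance++;	/* stands in for spin_lock_irqsave() */
	/* ... batched entropy extraction would happen here ... */
	if (use_lock)
		balance--;	/* stands in for spin_unlock_irqrestore() */

	return balance;
}
```

Because use_lock is a const snapshot of a single READ_ONCE load, both
branches see the same value even if crng_init flips from 1 to 2 mid-call;
had use_lock been re-derived from crng_init at each branch, the function
could lock without unlocking (or vice versa).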