From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andi Kleen
To: tytso@mit.edu
Cc: linux-kernel@vger.kernel.org, Andi Kleen
Subject: [PATCH 2/3] random: Make input to output pool balancing per cpu
Date: Wed, 10 Feb 2016 15:01:36 -0800
Message-Id: <1455145297-30897-3-git-send-email-andi@firstfloor.org>
X-Mailer: git-send-email 2.4.3
In-Reply-To: <1455145297-30897-1-git-send-email-andi@firstfloor.org>
References: <1455145297-30897-1-git-send-email-andi@firstfloor.org>

From: Andi Kleen

The load balancing from the input pool to the output pools was
essentially unlocked. Previously this didn't matter much, because there
were only two choices (the blocking and the non-blocking pool). But now,
with the distributed non-blocking pools, there are many more pools, and
unlocked access to the round-robin counters may systematically deprive
some nodes of their deserved entropy.

Turn the round-robin state into per-CPU variables to avoid any
possibility of races. This code already runs with preemption disabled.

v2: Check for uninitialized pools.

Signed-off-by: Andi Kleen
---
 drivers/char/random.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index e7e02c0..a395f783 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -774,15 +774,20 @@ retry:
 		if (entropy_bits > random_write_wakeup_bits &&
 		    r->initialized &&
 		    r->entropy_total >= 2*random_read_wakeup_bits) {
-			static struct entropy_store *last = &blocking_pool;
-			static int next_pool = -1;
-			struct entropy_store *other = &blocking_pool;
+			static DEFINE_PER_CPU(struct entropy_store *, lastp) =
+				&blocking_pool;
+			static DEFINE_PER_CPU(int, next_pool);
+			struct entropy_store *other = &blocking_pool, *last;
+			int np;
 
 			/* -1: use blocking pool, 0<=max_node: node nb pool */
-			if (next_pool > -1)
-				other = nonblocking_node_pool[next_pool];
-			if (++next_pool >= num_possible_nodes())
-				next_pool = -1;
+			np = __this_cpu_read(next_pool);
+			if (np > -1 && nonblocking_node_pool)
+				other = nonblocking_node_pool[np];
+			if (++np >= num_possible_nodes())
+				np = -1;
+			__this_cpu_write(next_pool, np);
+			last = __this_cpu_read(lastp);
 			if (other->entropy_count <=
 			    3 * other->poolinfo->poolfracbits / 4)
 				last = other;
@@ -791,6 +796,7 @@ retry:
 				schedule_work(&last->push_work);
 				r->entropy_total = 0;
 			}
+			__this_cpu_write(lastp, last);
 		}
 	}
 }
-- 
2.4.3
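
[Editor's note] On the pattern itself: the patch replaces a shared
"static int next_pool" with a per-CPU cursor, so concurrent callers on
different CPUs each advance their own state instead of racing on one
counter. A minimal sketch of that pattern, separate from the patch and
with invented names (rr_cursor, NR_POOLS, and pick_pool() are
illustrative, not part of random.c):

	#include <linux/percpu.h>

	#define NR_POOLS 4		/* illustrative pool count */

	/*
	 * One round-robin cursor per CPU; -1 selects the fallback
	 * (blocking) pool, 0..NR_POOLS-1 select a per-node pool,
	 * mirroring next_pool in the patch.
	 */
	static DEFINE_PER_CPU(int, rr_cursor) = -1;

	/*
	 * Caller must have preemption disabled, as credit_entropy_bits()
	 * does, so this read-modify-write cannot migrate between CPUs.
	 */
	static int pick_pool(void)
	{
		int np = __this_cpu_read(rr_cursor);
		int chosen = np;

		if (++np >= NR_POOLS)
			np = -1;
		__this_cpu_write(rr_cursor, np);
		return chosen;
	}

The trade-off is that two CPUs may occasionally pick the same pool
back-to-back, but no pool can be systematically starved by torn
read-modify-write cycles on shared state, which is what the commit
message is concerned about.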
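
[Editor's note] The v2 change is the "np > -1 && nonblocking_node_pool"
guard: it suggests entropy can be credited before the per-node pool
array has been allocated, so the code must fall back to the blocking
pool until then. A hedged sketch of that guard, again with invented
names (node_pools and pick_target() are illustrative):

	struct entropy_store;

	/* NULL until the per-node pools have been allocated at boot. */
	static struct entropy_store **node_pools;

	static struct entropy_store *pick_target(int np,
						 struct entropy_store *fallback)
	{
		/*
		 * Use the always-present fallback both for np == -1 and
		 * for the early-boot window where node_pools is NULL.
		 */
		if (np > -1 && node_pools)
			return node_pools[np];
		return fallback;
	}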