From mboxrd@z Thu Jan  1 00:00:00 1970
From: Andi Kleen
To: tytso@mit.edu
Cc: linux-kernel@vger.kernel.org, kirill.shutemov@linux.intel.com,
	herbert@gondor.apana.org.au, Andi Kleen
Subject: [PATCH 2/3] random: Make input to output pool balancing per cpu
Date: Tue, 22 Sep 2015 16:16:06 -0700
Message-Id: <1442963767-14945-2-git-send-email-andi@firstfloor.org>
X-Mailer: git-send-email 2.4.3
In-Reply-To: <1442963767-14945-1-git-send-email-andi@firstfloor.org>
References: <1442963767-14945-1-git-send-email-andi@firstfloor.org>

From: Andi Kleen

The load balancing from the input pool to the output pools was
essentially unlocked.  Previously this did not matter much, because
there were only two choices (blocking and non-blocking).  But now,
with the distributed non-blocking pools, there are many more pools,
and unlocked access to the round-robin counters may systematically
deprive some nodes of their share of entropy.

Turn the round-robin state into per-CPU variables to avoid any
possibility of races.  This code already runs with preemption
disabled, so the plain __this_cpu_read()/__this_cpu_write()
accessors are sufficient.

Signed-off-by: Andi Kleen
---
 drivers/char/random.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index d0302be..b74919a 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -743,15 +743,20 @@ retry:
 		if (entropy_bits > random_write_wakeup_bits &&
 		    r->initialized &&
 		    r->entropy_total >= 2*random_read_wakeup_bits) {
-			static struct entropy_store *last = &blocking_pool;
-			static int next_pool = -1;
-			struct entropy_store *other = &blocking_pool;
+			static DEFINE_PER_CPU(struct entropy_store *, lastp) =
+				&blocking_pool;
+			static DEFINE_PER_CPU(int, next_pool);
+			struct entropy_store *other = &blocking_pool, *last;
+			int np;
 
 			/* -1: use blocking pool, 0<=max_node: node nb pool */
-			if (next_pool > -1)
-				other = nonblocking_node_pool[next_pool];
-			if (++next_pool >= num_possible_nodes())
-				next_pool = -1;
+			np = __this_cpu_read(next_pool);
+			if (np > -1)
+				other = nonblocking_node_pool[np];
+			if (++np >= num_possible_nodes())
+				np = -1;
+			__this_cpu_write(next_pool, np);
+			last = __this_cpu_read(lastp);
 			if (other->entropy_count <=
 			    3 * other->poolinfo->poolfracbits / 4)
 				last = other;
@@ -760,6 +765,7 @@ retry:
 				schedule_work(&last->push_work);
 				r->entropy_total = 0;
 			}
+			__this_cpu_write(lastp, last);
 		}
 	}
 }
-- 
2.4.3
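
For reference, here is a minimal standalone userspace sketch of the
round-robin pattern the patch applies: give each execution context its
own cursor instead of a shared static, so concurrent callers cannot
race on it.  This is an illustration only, not kernel code; NUM_POOLS
and pick_pool() are invented names, and _Thread_local plays the role
that DEFINE_PER_CPU plays in the patch (where preemption is already
disabled).

/*
 * Standalone C11 sketch of per-context round-robin state.
 * Build with: gcc -std=c11 -o pick pick.c
 */
#include <stdio.h>

#define NUM_POOLS 4			/* stand-in for num_possible_nodes() */

/* One cursor per thread; -1 selects the "blocking" pool. */
static _Thread_local int next_pool = -1;

/* Returns -1 for the blocking pool, else a node pool index. */
static int pick_pool(void)
{
	int chosen = next_pool;		/* analogous to __this_cpu_read() */

	if (++next_pool >= NUM_POOLS)	/* advance, wrapping back to -1 */
		next_pool = -1;		/* analogous to __this_cpu_write() */
	return chosen;
}

int main(void)
{
	/* Cycles -1, 0, 1, 2, 3, -1, ... per thread, with no shared state. */
	for (int i = 0; i < 8; i++)
		printf("pick %d -> pool %d\n", i, pick_pool());
	return 0;
}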