From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 16 Sep 2019 06:53:31 +0200
From: Willy Tarreau
To: Linus Torvalds
Cc: Herbert Xu, "Theodore Y. Ts'o", "Ahmed S. Darwish", Andreas Dilger,
        Jan Kara, Ray Strode, William Jon McCann, zhangjs,
        linux-ext4@vger.kernel.org, Linux List Kernel Mailing
Subject: Re: Linux 5.3-rc8
Message-ID: <20190916045331.GC23719@1wt.eu>
References: <20190911160729.GF2740@mit.edu> <20190916035228.GA1767@gondor.apana.org.au>
User-Agent: Mutt/1.6.1 (2016-04-27)

On Sun, Sep 15, 2019 at 09:21:06PM -0700, Linus Torvalds wrote:
> The timer interrupt could be somewhat interesting if you are also
> CPU-bound on a non-trivial load, because then "what program counter
> got interrupted" ends up being possibly unpredictable - even with a
> very stable timer interrupt source - and effectively stand in for a
> cycle counter even on hardware that doesn't have a native TSC. Lots of
> possible low-level jitter there to use for entropy. But especially if
> you're just idly _waiting_ for entropy, you won't be "CPU-bound on an
> interesting load" - you'll just hit the CPU idle loop all the time so
> even that wouldn't work.

In the old DOS era, I used to produce random numbers by measuring the
time it took for some devices to reset themselves (typically 8250 UARTs
could take on the order of milliseconds). And reading their status
registers during the reset phase used to show various sequences of
flags at approximate timings.

I suspect this method is still usable, even with SoCs full of
peripherals, in part because not all clocks are synchronous, so we can
retrieve a little bit of entropy by measuring edge transitions. I don't
know how we can assess the number of bits provided by such a method
(probably log2(card(discrete values))), but maybe this is something we
should progressively encourage driver authors to do in the various
device probing functions once we figure out the best way to do it.
The idea is something like this. Instead of:

    probe(dev)
    {
        (...)
        while (timeout && !(status_reg & STATUS_RDY))
            timeout--;
        (...)
    }

we could do something like this (assuming 1 bit of randomness here):

    probe(dev)
    {
        (...)
        prev_timeout = timeout;
        prev_reg = status_reg;
        while (timeout && !(status_reg & STATUS_RDY)) {
            if (status_reg != prev_reg) {
                add_device_randomness_bits(timeout - prev_timeout, 1);
                prev_timeout = timeout;
                prev_reg = status_reg;
            }
            timeout--;
        }
        (...)
    }

It's also interesting to note that on many motherboards there are still
multiple crystal oscillators (typically one per ethernet port), and
such independent, free-running clocks do present unpredictable edges
compared to the CPU's clock, so when they affect the device's setup
time, this does help quite a bit.

Willy