From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 18 May 2022 06:43:39 -0700
From: Christoph Hellwig
To: Lukas Wunner
Cc: Dan Williams, Jonathan Cameron, Gavin Hindman, Linuxarm, "Weiny, Ira", Linux PCI, linux-cxl@vger.kernel.org, CHUCK_LEVER
Subject: Re: [RFC PATCH 0/1] DOE usage with pcie/portdrv
Message-ID: References:
<20220503153449.4088-1-Jonathan.Cameron@huawei.com> <20220507101848.GB31314@wunner.de> <20220509104806.00007c61@Huawei.com> <20220511191345.GA26623@wunner.de> <20220511191943.GB26623@wunner.de> <20220514135521.GB14833@wunner.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220514135521.GB14833@wunner.de>
X-Mailing-List: linux-cxl@vger.kernel.org

On Sat, May 14, 2022 at 03:55:21PM +0200, Lukas Wunner wrote:
> Circling back to the SPDM/IDE topic, while NVMe is now capable of
> reliably recovering from errors, it does expect the kernel to handle
> recovery within a few seconds.  I'm not sure we can continue to
> guarantee that if the kernel depends on user space to perform
> re-authentication with SPDM after reset.  That's another headache
> that we could avoid with in-kernel SPDM authentication.

I wonder if we need kernel-bundled and tightly controlled userspace
code for these kinds of things (also for NVMe/NFS TLS).  That is,
bundle a userspace ELF file (or files) with a module, unpack it into,
or make it accessible through, a ramfs-style file system, and then
allow executing it without any interaction with normal userspace,
using non-pageable memory.  That way we can reuse existing userspace
code and get really nice address space isolation, while avoiding all
the deadlock potential of normal userspace code.  And I don't think
it would be too hard to implement either.
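For reference, the kernel already has infrastructure close to this in
include/linux/usermode_driver.h (the mechanism bpfilter used): a blob
linked into a module is copied into a kernel-private tmpfs file and
forked as a process whose only channel to the kernel is a pipe pair.
A rough sketch of how an SPDM helper might use it follows; everything
except the umd_load_blob()/fork_usermode_driver()/umd_unload_blob()
API itself (the spdm_umh_* names, the blob symbols) is hypothetical,
not an existing implementation:

```c
#include <linux/module.h>
#include <linux/usermode_driver.h>
#include <linux/fs.h>

/* Hypothetical: the helper ELF linked into the module as a binary blob. */
extern char spdm_umh_start[];
extern char spdm_umh_end[];

static struct umd_info spdm_umd = {
	.driver_name = "spdm_umh",
};

static int __init spdm_umh_init(void)
{
	int err;

	/* Copy the bundled ELF into a tmpfs file the kernel controls. */
	err = umd_load_blob(&spdm_umd,
			    spdm_umh_start, spdm_umh_end - spdm_umh_start);
	if (err)
		return err;

	/*
	 * Fork a process running that blob.  It sees none of the normal
	 * userspace; the only channel is the pipe pair in spdm_umd.
	 */
	err = fork_usermode_driver(&spdm_umd);
	if (err)
		umd_unload_blob(&spdm_umd);
	return err;
}
module_init(spdm_umh_init);

/* Requests would then be framed over the pipe, e.g.: */
static ssize_t spdm_umh_send(const void *req, size_t len)
{
	loff_t pos = 0;

	return kernel_write(spdm_umd.pipe_to_umh, req, len, &pos);
}
```

What this does not give you yet is the non-pageable memory guarantee
mentioned above; the umd blob still runs as an ordinary pageable
process, so that part would be new work.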