Date: Tue, 19 Oct 2021 14:37:10 -0300
From: Jason Gunthorpe
To: Kent Overstreet
Cc: Johannes Weiner, Matthew Wilcox, Linus Torvalds, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	Andrew Morton,
Wong" , Christoph Hellwig , David Howells Subject: Re: Splitting struct page into multiple types - Was: re: Folio discussion recap - Message-ID: <20211019173710.GI3686969@ziepe.ca> References: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Tue, Oct 19, 2021 at 12:11:35PM -0400, Kent Overstreet wrote: > I have no idea if this approach works for network pool pages or how those would > be used, I haven't gotten that far - if someone can chime in about those that Generally the driver goal is to create a shared memory buffer between kernel and user space. The broadly two common patterns are to have userspace call mmap() and the kernel side returns the kernel pages from there - getting them from some kernel allocator. Or, userspace allocates the buffer and the kernel driver does pin_user_pages() to import them to its address space. I think it is quite feasible to provide some simple library API to manage the shared buffer through mmap approach, and if that library wants to allocate inodes, folios and what not it should be possible. It would help this idea to see Christoph's cleanup series go forward: https://lore.kernel.org/all/20200508153634.249933-1-hch@lst.de/ As it makes it alot easier for drivers to get inodes in the first place. > would be great. But, the end goal I'm envisioning is a world where _only_ bog > standard file & anonymous pages are mapped to userspace - then _mapcount can be > deleted from struct page and only needs to live in struct folio. There is a lot of work in the past years on ZONE_DEVICE pages into userspace. Today FSDAX is kind of a mashup of a file and device page, but other stuff is less obvious, especially DEVICE_COHERENT. Jason